Recently, we were tasked with strategizing and designing an iOS and Android app that would use Augmented Reality (AR) technology to provide a unique, interactive experience for the customer: a realistic glimpse of how the product would impact their work.
Due to the nature of the project, I am (currently) not allowed to share any details, so I will refer to the enterprise as Blank and to their product as the product. Speaking of nature: let's just say that Blank's products are meant to be used outside, in nature.
Of course, leaving out all the details takes away the why and the what, so I am focusing on how we approached it.
The project presented a few challenges, starting with the implementation of AR in general and the differing possibilities and restrictions of iOS and Android, as well as getting an MVP (using WebAR) out as soon as possible so we could put the app in the customer's hands and iterate quickly.
We wanted the AR experience to be seamless and intuitive, even for people who didn’t have previous experience with AR.
Due to many interdependencies, we took an object-oriented UX approach. This allowed us to start small and be able to scale quickly in the future.
Objects are, for example, the 3D objects that we place in the environment using AR.
Attributes are parameters that change the appearance of the objects; some of them we can control, some we can't. For example:
Sunny – Shady
Watered – Dry
Actions can be obvious, like moving around in space using the AR app or tapping on objects, but they can also be much more sophisticated. Imagine it started raining: what would happen to the object?
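To make the object–attribute–action model more concrete, here is a minimal TypeScript sketch. All names are hypothetical illustrations, not taken from the actual app: an object placed in AR space carries attributes like sunny/shady and watered/dry, and actions range from a simple move to an environmental event like rain.

```typescript
// Hypothetical sketch of the object-oriented UX model.

// Attributes: parameters that change an object's appearance.
// Some are environment-driven, so the app can't fully control them.
type Exposure = "sunny" | "shady";
type Moisture = "watered" | "dry";

interface PlacedObject {
  id: string;
  position: { x: number; y: number; z: number }; // placed in AR space
  exposure: Exposure;
  moisture: Moisture;
}

// An obvious action: moving the object in space.
function move(obj: PlacedObject, dx: number, dy: number, dz: number): PlacedObject {
  return {
    ...obj,
    position: { x: obj.position.x + dx, y: obj.position.y + dy, z: obj.position.z + dz },
  };
}

// A more sophisticated, environmental action: rain makes the object watered.
function rain(obj: PlacedObject): PlacedObject {
  return { ...obj, moisture: "watered" };
}

const tree: PlacedObject = {
  id: "tree-1",
  position: { x: 0, y: 0, z: -2 },
  exposure: "sunny",
  moisture: "dry",
};

const afterRain = rain(move(tree, 1, 0, 0));
```

Keeping actions as pure functions over objects is one way to let the model start small and scale: new attributes and actions slot in without touching the existing ones.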
In order to create an AR experience that felt natural, intuitive, and simple, we took many considerations into account when designing the interaction within the space. We broke these considerations down into three categories.
The most obvious interaction is between the user and the device, which in our case is a smartphone. The interaction is similar to operating a camera: movement (dolly, truck, pedestal) and rotation (pan, tilt, roll).
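The camera grammar above maps cleanly onto the device's six degrees of freedom. As a hedged TypeScript sketch (hypothetical helpers, not the app's code): dolly, truck, and pedestal are translations along the forward, sideways, and vertical axes, while pan, tilt, and roll are rotations around them. For simplicity this ignores the current rotation; a real implementation would translate along the rotated axes.

```typescript
// Hypothetical model of the device's six degrees of freedom,
// named with camera grammar.

interface CameraPose {
  x: number; y: number; z: number;         // position in meters
  pan: number; tilt: number; roll: number; // rotation in degrees
}

// Movement: dolly (forward/back), truck (left/right), pedestal (up/down).
// Simplification: axes are treated as world-aligned here.
function dolly(p: CameraPose, m: number): CameraPose { return { ...p, z: p.z + m }; }
function truck(p: CameraPose, m: number): CameraPose { return { ...p, x: p.x + m }; }
function pedestal(p: CameraPose, m: number): CameraPose { return { ...p, y: p.y + m }; }

// Rotation: pan (around the vertical axis), tilt (around the horizontal
// axis), roll (around the viewing axis).
function pan(p: CameraPose, deg: number): CameraPose { return { ...p, pan: p.pan + deg }; }
function tilt(p: CameraPose, deg: number): CameraPose { return { ...p, tilt: p.tilt + deg }; }
function roll(p: CameraPose, deg: number): CameraPose { return { ...p, roll: p.roll + deg }; }

const start: CameraPose = { x: 0, y: 0, z: 0, pan: 0, tilt: 0, roll: 0 };
const walkedAndTurned = pan(pedestal(dolly(start, -1), 0.5), 90);
```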
The next category is the app frame itself, with common components like menus, navigation, dialogs, etc. We want to make sure the app's UI doesn't get in the way of the AR experience.
Of course, one could use learned components, like buttons for clicking, rotating, and progressing. They have their place, but since we are in a three-dimensional space, we prefer a more direct form of manipulation where possible.
For example, we don’t need a control to zoom in on the object; we can simply move closer to it with our device. But we do need tap indicators in order to attach further information to certain parts of the 3D object.
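One way such tap indicators could work, sketched here under assumptions of my own (the names and the touch radius are hypothetical, and the projection of 3D anchors to screen coordinates is assumed to be done elsewhere by the AR framework): on a tap, pick the nearest indicator within a comfortable touch radius.

```typescript
// Hypothetical tap-indicator hit test in screen space.
// Assumes indicator anchors were already projected to screen points.

interface TapIndicator {
  id: string;      // which part of the 3D object it annotates
  screenX: number;
  screenY: number;
}

// Return the closest indicator within the touch radius, or null if none.
function hitTest(
  tapX: number,
  tapY: number,
  indicators: TapIndicator[],
  radius = 24 // a comfortable touch target, in points (assumed value)
): TapIndicator | null {
  let best: TapIndicator | null = null;
  let bestDist = radius;
  for (const ind of indicators) {
    const dist = Math.hypot(ind.screenX - tapX, ind.screenY - tapY);
    if (dist <= bestDist) {
      best = ind;
      bestDist = dist;
    }
  }
  return best;
}
```

Using a radius rather than exact bounds keeps small on-screen indicators tappable even when the user's hand is not perfectly steady while holding the device up.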
The wireframes, which we used to iterate through many versions of the app, are the guide for everything going forward. They were used to refine the user flow and interaction design, to help identify potential issues, and to help create the copy and UI design.
To be continued.
(When I can share more details.)