Rather than block out the user’s reality (as in virtual reality) or distort it (as in mixed reality), provide Meta UIs—metadata about the world, near but not occluding it—that add something useful to the user’s understanding of their environment. This is the ultimate aim of this document: to create an informative, powerful, and unobtrusive layer of digital information on top of the real world.
Augmented, not mixed reality
UI design suggestions
1. Complement the real world
Design interfaces to complement real-world objects instead of masking or transforming them. Augmented reality is an opportunity to elevate our understanding of our environment, our work, and each other by associating relevant information and tools with the world around us. For instance, a panel with a Wikipedia page about a flower, placed near the flower but not occluding it, would be considered augmented reality. Consider a user who can touch a flower and learn how much DNA they share with it. This simple example demonstrates how one can enhance one’s connection with the world, and represents an instance of a Meta UI.
2. In contrast: a surreal experience that turns a gym floor into the surface of a pool will trigger unnecessary confusion and give the user the wrong idea about how the environment behaves, leading to erroneous behavior like diving into a solid floor. This violates “Augmented, Not Mixed Reality” because it changes or distorts the physical characteristics of an object in the real world, rather than enhancing the user’s connection with it. While such distortions of reality can seem exciting at first, they add a layer of confusion from a cognitive perspective. Needless to say, this concern will only become more significant as optical technologies improve realism.
3. Add value and insight for the user
Rich content should be displayed within Meta UIs to provide information with as much clarity and depth as possible. Text, video, volumetric content, and the web can all play a role in empowering the user with deeper understanding of, and connection to, their world.
4. A note about use cases: in gaming, we recommend not “breaking through walls” or covering real people with overly distorting visuals, instead allowing virtual characters to traverse the user’s real environment. Certain use cases such as interior decoration, on the other hand, may warrant a momentary alteration of the characteristics of the environment (e.g., planning a new paint color for a wall). Importantly, those use cases lead to an eventual modification or improvement of the physical characteristics of the real world. In other words, they enhance the user’s connection with the real world by allowing the user to better plan projects within it.
Imagine an educational app that teaches photosynthesis to high school students. A well-designed AR experience might follow an order like the following:
- The user sees an initialization interface first, rather than being immersed in a jungle the moment they open the app.
- Next, the user selects a plant from a menu, causing a lifelike model to appear in space.
- The user can then reach into the plant to zoom in on details like the light-dependent reactions and the carbon-concentrating mechanism.
Note that each step falls within the user’s expectations; there is no blurring of the lines between the plant and reality, nor is the user’s environment ever distorted in a way that might cause confusion. The plant might look extremely realistic, for instance, but would sit in a pot staged in an area that is distinctly different from its surroundings.
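The staged flow above can be sketched as a small state machine that only ever advances one deliberate step at a time, so the experience never jumps straight into immersion. This is a minimal illustration only; the stage names and class names are hypothetical and do not come from any real AR framework:

```python
from enum import Enum, auto

class Stage(Enum):
    """Hypothetical stages of the photosynthesis AR lesson."""
    INIT = auto()     # initialization interface shown on launch
    SELECT = auto()   # user picks a plant from a menu
    EXPLORE = auto()  # user reaches in to inspect the plant's details

# Allowed transitions: each stage can only advance to the next one,
# keeping every step within the user's expectations.
TRANSITIONS = {
    Stage.INIT: {Stage.SELECT},
    Stage.SELECT: {Stage.EXPLORE},
    Stage.EXPLORE: set(),
}

class LessonFlow:
    def __init__(self) -> None:
        # The app never opens straight into an immersive scene.
        self.stage = Stage.INIT

    def advance(self, target: Stage) -> Stage:
        if target not in TRANSITIONS[self.stage]:
            raise ValueError(f"cannot jump from {self.stage} to {target}")
        self.stage = target
        return self.stage

flow = LessonFlow()
flow.advance(Stage.SELECT)   # plant chosen; lifelike model appears
flow.advance(Stage.EXPLORE)  # user zooms in on internal processes
```

Attempting to skip a stage (e.g., opening directly into exploration) raises an error, mirroring the guideline that the experience should unfold within the user’s expectations rather than all at once.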
The neuroscience behind it
When imagery added to the user’s visual field by an AR interface adds context and information to the world without distorting it, the user’s priors—their mental model of the world—can still apply in effectively understanding it.
Extreme deviations from reality, however, as seen in mixed reality applications, can contradict the user’s priors to the point of triggering error neurons, initiating a burdensome cognitive feedback loop as the brain attempts to change the user’s mental model. This disruptive, resource-intensive process continues until the prior has been updated, at which point the error neurons decrease their activity and the user’s brain returns to normal. Repeatedly triggering this process can confuse the user and waste mental energy on resolving ambiguities, lowering task efficiency. In the extreme, it can lead to confusion and delusions, which are also the bases of psychiatric disorders such as schizophrenia.
Note: We are actively testing this hypothesis and the point of this document is to protect the user against such conditions.
A warning on the potential for delusion in mixed reality
Behavioral and MRI studies have shown that when a sensory stimulus is ambiguous, unstable, or confusing—a distinct possibility when experiencing the subtle effects of mixed reality—higher-level cognitive beliefs can bias perception and even interfere with information processing in the visual cortex. This can result in maladaptive learning and the formation of delusional beliefs (Schmack et al., 2013).
The error neurons discussed above are triggered in the ventral striatum, initiating a dopamine-driven feedback loop as the brain attempts to incorporate the new input into the prior.
Schmack K., Gómez-Carrillo de Castro A., Rothkirch M., Sekutowicz M., Rössler H., Haynes J.-D., Heinz A., Petrovic P., Sterzer P. (2013). Delusions and the role of beliefs in perceptual inference. The Journal of Neuroscience, 33(34):13701–13712.
4 With UIs that show that the characters are not fully real, such as a silhouette around them or added transparency.