Virtual Embodiment

The most praised ability of Virtual Reality is its capacity to immerse the user in a Virtual Environment, to the degree that the subject feels present in it. The magic is to be fooled by the system so that one feels present where one does not physically reside. This effect can, however, become even more magical. A deeper step into the effects of technological immersion is found in the concept of Virtual Embodiment. If a subject is embodied virtually, not only is the virtual environment accepted as such; the subject also identifies with a virtual body, or avatar, inside it. This differs from merely knowing which character you control in a game: in Virtual Embodiment, the same processes that make you identify with your real body make you identify with a virtual one. This is a key point, and it is why research into virtual embodiment matters.

Peeling layers of the onion: VR can be a tool to discover who we are, through investigation of what and how we identify with our bodies. Illustration: “Mask of Day by Day” by Paulo Zerbato.

Hacking and Experimenting with Consciousness

What is fascinating about both of these illusions is how, and that, they are possible at all. Knowledge of how to achieve such immersion is obviously relevant for all VR developers, but what can be learned by researching these phenomena goes far beyond applying it in VR technology. By designing experiments in VR, we can generate and investigate phenomena of the mind under various experimental conditions. Exploring Virtual Embodiment, for instance, can give us a better understanding of our self-consciousness and of the relationship between body and mind. Because of this wider span, research on Virtual Embodiment attracts neuroscientists, psychologists, information scientists and philosophers alike.

The Rubber Hand Illusion

The Rubber Hand Illusion (RHI) is an excellent example of the kind of ‘brain hacks’ that can be achieved by sensory manipulation. The illusion, as illustrated below, is a perfectly simple experiment that does not even require VR technology to perform. The RHI was introduced by Botvinick and Cohen (1998), and its neural basis was later studied by Ehrsson, Spence and Passingham (2004); it remains an ingenious way to illustrate how we identify with our bodies. More importantly for this entry, the results of the experiment have inspired further research on Virtual Embodiment.

Illustration from Thomas Metzinger’s book “The Ego Tunnel: The Science of The Mind and The Myth of the Self”

In the RHI, the subject’s hand is replaced by a rubber hand, while the real hand is blocked from sight by a separating wall. With the subject seated like this, a researcher strokes both the rubber hand and the physical hand simultaneously. The question, then, is what happens when the subject experiences the sensory impression of stroking while seeing a corresponding stroke on the rubber hand?

Put very simply, the brain makes a ‘reasonable guess’ that this hand is indeed the physical hand attached to your body. You feel that the rubber hand is yours, nerve endings and all, and you couple your physical sensations to the sight of the hand. This means that in your subjective experience, the rubber hand is the hand that has the sensation. Ehrsson et al. write that their results suggested that “multisensory integration in the premotor cortex provides a mechanism for bodily self-attribution”. When the brain receives information from two differing sensory inputs (sight and touch), these are coupled: the brain pairs the stroking sensation with the imagery of a nearby hand being stroked, and this is enough for it to attribute the hand to itself, to acknowledge it as its own.

This simple experiment shares many principles with the concept of Virtual Embodiment, and it has inspired research in the field that we will present in this entry.

Some people have out-of-body experiences (OBEs) at the onset of sleep or upon waking, often feeling that they are floating above their own bodies. VR may help us study such states of consciousness by inducing them systematically.

Virtual Body Illusion

In a later experiment by Lenggenhager et al. (2007), not only the subjects’ hands but their whole bodies were replaced with virtual representations. Moreover, in the experiment they present, the bodies are seen from behind. In effect, the researchers were simulating out-of-body experiences, with very interesting results.

The experiment was conducted as follows: the subjects wore a Head-Mounted Display which projected imagery from a camera located behind them. The subjects could thus see a representation of their own bodies “live”, but from behind. This, of course, deviates slightly from how we normally experience life. Although the subjects saw their bodies responding and performing actions in real time, as under normal conditions, there is a logical dissonance due to the mismatch between the location of the subjects’ eyes in the virtual environment and what those eyes see. Effectively, the user is looking through a pair of “portal” binoculars (the HMD), which display the light from, if not another dimension, then at least a few feet away. And this is part of the point.

What is interesting about this experiment is not simply that the users feel present where they do not physically reside, but that the distance is only a few feet. The users feel present right outside their own bodies. The situation is familiar, the body and the environment are there, but everything is slightly off. What is interesting to investigate, then, is how the body adapts to this. Will it accept that it now controls the body from a third-person perspective, similarly to how Stratton’s subjects got used to seeing the world upside down?

What the researchers studied was essentially whether this change of perspective had an impact on where the users felt embodied. To investigate this, they stroked the subjects as in the Rubber Hand Illusion, except on their backs, so that the stroking was visible on the body seen in front of them. The question is then where this physical feeling will be attributed: how will the phenomena of subjective experience present themselves to the subjects?

Out-of-body experiences can be induced virtually by using sensory impressions from other locations, for instance five meters behind you, as in the experiment by Ehrsson (2007). You can then effectively look at yourself from the outside.

First of all, to be clear: the sensory data of being stroked will initially be provided by the nerves in the subject’s physical shoulder. The brain’s problem, however, is that this shoulder is out of sight, blocked by the Head-Mounted Display. There is, however, the visual impression of a shoulder on a person standing in front, being stroked in exactly the same way. Although the nerve endings definitely feel the stroking, where this feeling is placed in our subjective experience is not the responsibility of the shoulder but of the brain. And as the placement of the physical feeling in the bodily self-consciousness depends largely on vision for its coordinates, what will happen? How will the brain resolve this sensory discord?

In a beautifully written article in The New Yorker, its author Rothman describes how Thomas Metzinger, one of the co-authors of the research paper, experienced the experimental conditions himself:

“Metzinger could feel the stroking, but the body to which it was happening seemed to be situated in front of him. He felt a strange sensation, as though he were drifting in space, or being stretched between the two bodies. He wanted to jump entirely into the body before him, but couldn’t. He seemed marooned outside of himself. It wasn’t quite an out-of-body experience, but it was proof that, using computer technology, the self-model could easily be manipulated. A new area of research had been created: virtual embodiment.”

“Are We Already Living in Virtual Reality?”: The New Yorker has a brilliant long read on Virtual Embodiment that features interviews with the VR and consciousness researchers Prof. Mel Slater and Prof. Thomas Metzinger.

Phantom Pain

Another curious potential effect of Virtual Embodiment is the possibility of phantom sensory impressions. Handling virtual objects while embodied, for instance, may lead your body to expect pain or touch, and so such sensations are, somehow, actively generated. VR may therefore be a way to study how phantom pain arises, and further how it can be alleviated. For instance, several studies show how VR can embody a subject missing a leg in a body with two legs, similarly to traditional mirror-therapy treatment, which is effective in reducing phantom pain. Again, what may be most interesting here is the possibility of systematically creating the phenomenon and studying it afterwards. As Metzinger is quoted in The New Yorker’s article, it may be supposed that phantom pain arises from a body model that does not correspond to physical reality. This is precisely the case for phantom sensations in VR: they are not based on physical reality, since you are relating to a virtual reality instead. Similarly, those with real phantom pain may also be relating to a kind of “virtual reality”, but one in the format of their skewed body narratives, maintained by their minds instead of by a computer.

That the narratives, worldviews and consciousness our brains generate and experience often do not match reality well is nothing new. For Matrise, these concepts recall the conclusion of our three-part series towards a metaphysical standpoint on VR, in which we discussed VR as exemplifying the abstracting tendencies of the mind. These entries can be read at Matrise: 1) On Mediums of Abstraction and Transparency, 2) Heidegger’s Virtual Reality, and 3) The Mind as Medium.

Virtual Embodiment for Social Good

Now that we have discussed the concept of Virtual Embodiment, it is natural to ask what this knowledge can be used for. As discussed already, experiments in VR that hack our self-models may provide useful knowledge about the structure of our self-consciousness. Apart from this general knowledge, some findings also have practical uses in applied VR for specific scenarios.

Racial Bias

A very exciting paper describing work that utilizes virtual embodiment is one by Banakou, Hanumanthu and Slater. In the project, they embodied White people in Black bodies, and found that this significantly reduced their implicit racial bias. The article can be found and read in its entirety here (the abstract is available to all).

Domestic Violence

Another interesting project, by Seinfeld et al., is one in which male offenders of domestic violence were embodied as a female victim in a virtual scenario. First, the male subject is familiarized with his new female virtual body and the virtual environment. When the body-ownership illusion, or virtual embodiment, has been achieved, a virtual male enters the room and becomes verbally abusive. All this time, the subject can see his female body reflected in a mirror, its actions corresponding to his own. After a while, the virtual male starts to throw things around and appear violent. Eventually the situation escalates, and he moves into what feels like the subject’s personal space, appearing threatening.

They write:

“Our results revealed that offenders have a significantly lower ability to recognize fear in female faces compared to controls, with a bias towards classifying fearful faces as happy. After being embodied in a female victim, offenders improved their ability to recognize fearful female faces and reduced their bias towards recognizing fearful faces as happy.”

The article can be read in its entirety at ResearchGate.

Staying Updated in the field of Virtual Embodiment

Research on Virtual Embodiment is ongoing. To stay updated on this area of VR research, I enjoy following Mel Slater, Mavi Sanchez-Vives and Thomas Metzinger on Twitter. Last but not least, I would stay updated on Virtual Bodyworks on Twitter, a company which Sanchez-Vives and Slater co-founded.


N.B.: This entry lies at the centre of Matrise’s interests, and we plan to write several further entries taking this topic in philosophical directions. Have ideas, or want to contribute? Please contact us.


Camera Lucida

We have previously discussed several interesting optical technologies of relevance to VR. For instance, we discussed the fascinating 17th-century Camera Obscura, and in our entry on the History of VR we discussed the 19th-century Stereoscope, a technology still used in modern VR Head-Mounted Displays.

In this entry we will discuss yet another, similarly old, optical technology, one which leans more towards the category of Augmented Reality than Virtual Reality: the Camera Lucida.

Invented by the physicist William Hyde Wollaston in 1807, the Lucida was a device praised by artists and illustrators for the aid it gave their art. Like the earlier Camera Obscura, this optical artifact could project and redirect images from the external world, making it easier to recreate them in ink. But while the Obscura required a dark room to project its images on a surface, the Lucida had the benefit of redirecting the light directly to its user’s eyes, and was thus better suited to a lit office, or even to travelling.

Apart from the underlying technical difference, the practice of use was relatively similar: the user would perceive the redirected light representing the projected object on the surface to be drawn on, and by following the lines with a pen, the image could be reproduced in ink. To draw distant objects, the light could also be captured by a telescope, or, for very small details, even a microscope, as seen in the illustration below.

Camera Lucida and Modern Day AR Technology
The Camera Lucida shares many conceptual and experiential similarities with Augmented Reality (the concept of augmenting our real world with virtual phenomena). When a user looks through the Camera Lucida, a ‘virtual’ representation of what the Lucida is directed at is added to and combined with the user’s normal vision. In AR goggles such as the Microsoft HoloLens, the concept remains the same, except that the HoloLens’s holographic images originate from software rather than from light redirected from the external world.

 

The Microsoft HoloLens, an AR Head-Mounted Display by Microsoft.

Obviously, this is not the only difference between the two; compared to modern AR technology, the functionality and applicability of the Lucida pale in comparison. The HoloLens is capable of stereoscopic images, and can place and project almost any virtually conceivable object into the environment. Yet the beautiful Camera Lucida does share the essential underpinnings of augmenting the environment with re-presentations.

A curious example of the similarity between the two is how the Lucida is these days being recreated with (mobile) AR. Using, for instance, an iPad with its camera, the canvas and your drawing hand are displayed on the screen, overlaid with a see-through image of what you want to draw. Even better, a similar application, SketchAR HoloLensEDU, has been developed for the Microsoft HoloLens, and is currently employed in teaching young artists.


Do you know of any good, useful applications within the AR domain? Please comment below!

 


The Web of VR Will Change Everything

The World Wide Web (WWW) needs no further introduction. The greatest innovation of the Information Age is now essential to the world like no other technology. Before the WWW, computers, programs and information were not linked. Computers were lonely, and users could not browse millions of interconnected machines the way we do today.

The Web has been changing ever since its dawn in the 90s, and has seen distinct phases. What we call “Web 1.0”, for instance, was a static web. Websites could be visited and navigated, but they were static in the sense of not affording any user interaction. Web 2.0 opened up for more dynamic web applications that could be altered by user input. These did not just allow downloading; they could also be uploaded to, a feature that is now an essential underpinning of social media and web-based applications. Companies like Facebook, Snapchat and Instagram do not provide content to be downloaded, but rather a computer service to be used, where the users provide the content.

Illustration by Lancelot Speed from Andrew Lang’s “The Red Fairy Book”.

It should be noted that “Web 1.0” and “Web 2.0” are just terms: changes are constantly being added to the Web. The terms do, however, signify when these changes induce a paradigm shift in the use of the Web. What should be classified as Web 3.0 is therefore also debated. Although not necessarily a feature of the Web itself, the Internet of Things (IoT) is a candidate for what has become a paradigm shift within web technologies, as more and more devices and artifacts are connected, allowing for ubiquitous computing. Others point to the personalised Web we see today, shaped as it is by social media, while some join the AI hype and claim that the Web has now become smart. The latter is far from a paradigm shift as of yet.

The Virtual Web
In this entry we will discuss WebVR, or Virtual Reality through, and on, the Web. The title of this entry, which claims that “The Web of VR Will Change Everything”, may suggest a stance in the debate introduced above about what will become “Web 3.0”, but that is not the point of this entry. I do not wish to claim that WebVR is a paradigm shift in the way the Web operates; rather, it will be a paradigm shift in how we experience the web technology we have.

The concept and role of the Web nevertheless remain the same: we have a dynamic web that supports the download and upload of web documents. In the case of VR, the difference is that what we download and upload are perceived as realities for us: the web becomes a mediator of realities, and this new way of using the web changes VR more than VR necessarily changes the Web. What characterizes the Web is its simplicity, its openness and its innate element of surprise. Anything can be found, and the exploration itself is an important part of it. These same features will be valuable in VR as well: discovering open virtual worlds, created by anyone.

Mozilla’s A-Frame is now ready for Link Traversal through hyperportals

Creating VR for the Web
WebVR, an API that lets browsers render three-dimensional, stereoscopic Virtual Environments, is already developed and supported by many browsers. As HTML, CSS and JavaScript can already create and render graphics through technologies such as WebGL, the web languages have increasing power to support such scenarios. Lately, frameworks have appeared that combine these technologies to make implementing WebVR even easier. Mozilla’s A-Frame lets the user set up a Virtual Environment with fewer than 20 lines of code (see their Hello WebVR example), using ‘normal’ HTML tags, which they call primitives, to create 3D objects in 3D space. A-Frame builds on Three.js, and Three.js in turn uses WebGL.
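To give a feel for how little markup this takes, a minimal scene in the spirit of A-Frame’s Hello WebVR example might look something like the sketch below. The primitives and attributes are A-Frame’s documented ones, but the particular objects, colors and pinned script version are just illustrative choices:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Load the A-Frame library (version here is only an example) -->
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- <a-scene> sets up the WebGL canvas, a default camera, and the "Enter VR" button -->
    <a-scene>
      <!-- Primitives: ordinary-looking HTML tags that become 3D objects -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Saved as an ordinary HTML page, this is a complete virtual environment: a box, a sphere and a floor plane under a grey sky, viewable on a flat screen or in a headset.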

It is now easy to create Virtual Environments on the Web, arguably even easier than creating them in Unity. The great benefit is that they can be connected to each other by a standard hypertext reference, instead of being uploaded to the Steam or Oculus stores. A-Frame introduced hypertext support, which they call “Link Traversal”, in July 2017, but browsers are only just catching up. As of now it is supported on PC only by Firefox and Supermedium; the Oculus Browser, however, has supported it on Gear VR since February 2018, and most likely also supports it on the Oculus Go.

A-Frame’s Diego Marcos called this a great achievement, as A-Frame had finally earned its ‘Web badge’. For this they deserve congratulations: A-Frame has completed an essential step towards the Web of Realities. In their introductory blog post, they present a “hyperportal” example which redirects you to another page when you virtually walk through the portal. It is a very fun piece of code to play around with. A neat feature is that the portal itself is “transparent”, providing a preview of the virtual environment to which you are travelling.
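In A-Frame markup, such a portal is expressed with the `<a-link>` primitive from the link component. A rough sketch, where the href target, title and thumbnail image are all placeholders rather than a real destination:

```html
<a-scene>
  <a-assets>
    <!-- Preview image shown inside the portal (placeholder file name) -->
    <img id="world-thumb" src="another-world-preview.jpg">
  </a-assets>

  <!-- Gazing at or walking into the link traverses to the target page,
       so two WebVR worlds can reference each other like ordinary web pages -->
  <a-link href="another-world.html" title="Another World"
          image="#world-thumb" position="0 1.6 -3"></a-link>
</a-scene>
```

The portal behaves like a spatial hyperlink: instead of a blue underlined phrase, the anchor is a doorway standing in the environment.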

The future of WebVR
As with everything in VR, we are still a few years behind its potential. WebVR has had a solid boost in the last few years, but until Head-Mounted Displays are commonplace, we probably won’t find a VR search engine, or enough websites for exploration to be truly amazing. This is not bad news, however: it means that this is just the right time for creative ideas. We can see the inevitable emergence of the VR Web, and we can help shape it. For instance, at Matrise we have previously discussed Virtual Reality Memory Palaces. These would be great to share on the Web, so that each memory palace could be interconnected, creating vast banks of knowledge for memorization.

Do you have any good ideas for any WebVR apps?
Feel free to comment below.

The ancient Greeks created “Memory Palaces” to recall important information.