Existentialist Design

The term “Philosophy” can refer to a method or approach to investigate how we should relate to and understand ourselves and the world. Moreover, how we understand ourselves and the world is dependent on technology — especially in the case of mediums that present and abstract information to us. We have previously at Matrise discussed the philosophy of technology as a subfield of Philosophy. In this entry, however, we will discuss how philosophy can, concretely and directly, inform information technologies — especially within the field of Human-Computer Interaction.

Martin Heidegger by Barry Bruner.

That how we understand ourselves and the world is dependent on technology is being recognized this year at CHI 2019, the conference on Human Factors in Computing Systems held in May — the premier international conference on Human-Computer Interaction (HCI). In December last year, a call for position papers was issued for a workshop called “Standing on the Shoulders of Giants: exploring the intersection of Philosophy and HCI“, led by researchers experienced in philosophically informed design and research. In the call they write:

“Philosophy has provided a vital perspective for HCI on how we navigate, experience, understand and judge the world around us and its artifacts. Lately, HCI scholars have also sought to use philosophy’s program of answering what it means to live a “good” life to investigate the ethical and moral implications of the technologies we design. As philosophy in its many forms continues to open up new influences and our relations with technology broaden, we believe it is timely to have a meta-discussion about what links philosophy and HCI. As we understand it, philosophy’s strength lies in its diversity, depth, and interpretive flexibility.”

I am attending CHI as an author of a paper there, and was naturally interested in submitting a position paper. In the position paper, I present a view of design inspired by philosophical thinkers such as Kierkegaard and Heidegger, where the aim of the design is a controlled accident: rather than dominating the user experience, we want to open up for new experiences that can be interpreted by the user.

The rest of the entry is best explained by the position paper itself, with its abstract, introduction, body and conclusion. The paper can also be downloaded and read at ResearchGate.

Title: A Controlled Accident: Imagining VR as a Catalysator for Self-exploration

Abstract
In this position paper, I discuss an existentialist design approach towards Human-Computer Interaction (HCI). The aim of the existentialist designer is not to dominate the user experience, but rather to design for a controlled accident in which the medium itself is explored, and one’s self through the medium, where the overarching purpose is the exploration itself. This line of thinking is inspired by existential philosophers such as Kierkegaard and Heidegger, and its aim can be illustrated in how Kierkegaard discusses life not as a ‘problem to be solved, but a reality to be experienced.’ The hope is that such an approach towards technology escapes the somewhat limited view of technology as simply a tool to get from A to B, and that technology may instead be seen as a lens through which reality can be presented free from an otherwise culturally enframing narrative. The aim is, therefore, to design technology as a catalyst for a more original revealing of truth. The paper illustrates this with an example of the employment of Virtual Reality (VR) technology in sensory deprivation tanks.


Introduction
According to Martin Heidegger’s technology criticism, the essence of technology is not itself something technological. Put far too briefly, technology is rather a way that we understand the world, or in Heidegger’s words, ‘a way of revealing’ (Heidegger, 1977). What we reveal within this technological framework, that is, what we deem as true, is not necessarily the truth, but only ‘correct’ relative to the framework itself. Thus, the common definition of technology – that technology is a human activity and a means to an end – although correct relative to the technological narrative, is not necessarily true. For Heidegger, these two definitions must not only be combined, as in that it is a human activity to think of means to ends; we must also recursively consider how this activity impacts the way we look at the statement itself.

It is the mindset of thinking in terms of means and ends, of interpreting and enframing things within this technological framework, which is the essence of technology according to Heidegger. When Heidegger speaks of ‘revealing’, he means what is presented as true and brought forth into that way of revealing. In the case of the technological framework, what is revealed falls within the narrative of ‘man versus nature’, a fundamental view that reveals the world as such. For Heidegger, the danger is that man himself cannot escape his enframing, and that it ‘may be denied for him to enter into a more original revealing’. Heidegger’s brief comment towards a solution to this problem – although he explicitly states that modifying technology can never be the answer – is a different kind of techne, poiesis; techne being the Greek word for both art and technology. Here he refers to art, as art is not fixed in terms of interpretation or the strict rules of means to ends, and A’s to B’s, and may, therefore, bring a more original revealing – or at least another narrative that provides another revealing.

Martin Heidegger contributed to the existential and phenomenological tradition of Philosophy in the mid-1900s. Although he never lived to see modern information technology, his philosophy of technology is not necessarily concerned with the details of different technologies, but rather with what technology is in its essence. In this way, his works may still influence the design of artifacts in Human-Computer Interaction (HCI) in the 2020s. But how can existential philosophy benefit research on the relationship between humans and machines? How can such a critical view of technology in its essence benefit the design of technology?

Existentialism in HCI
“Arguably, methods and approaches appropriate for creating usable, enjoyable, and practically useful products and services, cannot be assumed to be also appropriate for addressing the issue of how technology is related to the most fundamental aspects of human existence” (Kaptelinin, 2018).

Kaptelinin (2018) presents a broad overview of previous existential approaches to the field of HCI. The paper is also practical in that it works towards a framework compatible with where HCI is today. Kaptelinin (2018) writes that previous attempts at employing existentialism in HCI research have not been very popular, and argues this is because the approach is ‘too distant from traditional HCI problems and concerns, and too abstract to provide concrete support for analysis and design’ (p. 1). This is a danger for any approach inspired by an underlying ideal, and one to which this position paper is also subject. For instance, Kaptelinin (2018) discusses Karlstrøm’s (2006) paper as criticizing the ‘problem-solving attitude of HCI’. As this paper will present a similar approach, it should, therefore, immediately comment on how problem-solving itself can be undesirable.

We may, therefore, begin by restating the points of Heidegger and Kierkegaard: problem-solving can itself be a problem. In relating to the givens of existence, the solution is not necessarily to define them as problems and find strategies to eliminate them. An approach can, however, be to explore them and see them for what they are, and thus enter into a more authentic relationship with them. Problem-solving as an attitude may provide the illusion that a fix is possible by pushing through. Thus, existentialism may claim that it is not the givens of existence that are the problem, but rather how we relate to them, whether as problems or as something else. The standpoint, therefore, is that problem-solving may be the real problem, as it enframes the world as something that can be solved. This is correct relative to its own framework, but in the case of how we relate to the givens of existence, it need not necessarily be true. This is further coherent with the way Kaptelinin (2018) discusses existential psychotherapy: there is not one solution to it, and that may mean that we require technology that is far more open, adaptive and exploratory. It may be that in these situations technology need not provide a solution, but perhaps on the contrary be an important tool to reinstate the problem for clearer inspection, and by means of this lay the grounds for establishing a different relationship towards it.

The aim of such a technology would rather be to open up the world for new possible interpretations than to aim for one specific function. The question to be explored by interaction with such technologies is whether technology can help us break a certain narrative, or, put in Heideggerian terms, whether technology provides us with a more original revealing. In the next section, an example of a technology that can be used this way is presented.

Presence and Sensory Deprivation in VR
In VR, presence is often defined as the degree to which the subject feels present in the virtual world. What is interesting to note is that this naturally has to be viewed relative to the degree to which the subject feels present in the physical world, as we usually receive information from both our physical and our virtual environments. There can thus be two separate approaches to designing for presence in virtual reality environments: one is to provide the sensory stimuli of the virtual environment, and the other is to block sensory stimuli from the physical environment. Both approaches work towards the same goal of immersion – the encapsulation of the user in the virtual environment (VE).

Obviously, the principles of adding and removing sensory experience go hand in hand; by putting on a Head-Mounted Display you are blocking the physical impressions and replacing them with virtual impressions, all the while shielding against incoming light from the surroundings. Blocking light, however, is not the only way to deprive the senses of information from the physical environment. The inclusiveness of the immersion can also be achieved by sensory deprivation through floatation tanks.

“Alone With Your Thoughts”, Illustration by Cole Ott

Floating in Virtual Reality
Floatation chambers, or sensory deprivation tanks, are pools of water with copious amounts of Epsom salt. The tanks are sealed against any incoming light and sound, and the air and water temperatures are equal to that of your body. When you lie down, you will feel how the salt makes you float even though the pool is very shallow. As you lie there, you notice how the ripples you created when lying down slowly subside as you sink into weightlessness. After a while, because the air and water temperatures are the same as that of your body, you can no longer pinpoint where the water ends and the air around you begins. In fact, it gets hard to distinguish anything from anything else, including your body from the air and water. There is really nothing that is easy to grasp as isolated, save perhaps your breath. And as the minutes go by, with total physical relaxation and hardly any sensory impressions at all, things may start to change.

The most significant, explicit change one may notice in the tank is that after a while your bodily self-consciousness is not what it used to be. Your mental model of where your body is in relation to the world around you starts to become blurred. Normally reinforced by tactile stimuli of air and water (of varying temperatures), and visual and auditory stimuli from the environment, your body model is now lacking information on which to build itself. Your sense of spaciousness has also changed – that is, the feeling of your position as defined relative to, say, the walls, mountains, and the sky has disappeared. You now really experience nothing around you, but neither do you experience any edges to this lack of information about your surroundings. You may get the feeling of floating in empty space, but where are you in all of this? What, in this stream of conscious experience, is matter and what is mind?

Example Experiment
To exemplify the ideas discussed in this paper, I imagine the following experiment. A user employs a VR HMD that is connected to biometric sensors, e.g. EEG, GSR, heart rate, breathing, etc. A connected computer visualizes the feedback through abstract imagery in a 3D visualization. The direct effect is that an abstraction of the user’s state is projected externally, but the application does not perform a hard classification into moods in the form of emoticons. Rather, the user can meditate and explore the visualization as the floating continues, and can establish a way of exploring the technology by relating both to the medium and, through it, to themselves. It would further be interesting to use eye-tracking technology as a way of navigating the vast, abstract visualizations. If one traveled towards where one looked, one could even be interactive while lying still in the floatation tank. This could also possibly have curious effects on which parts (perhaps the eyes) we identify with our selves; perhaps the placement of our self could be altered by changing the agency for transportation. My interest in such a prototype or future experiment would be the extent to which it could open us up to the direct here-and-now experience, and attempt to afford experiences beyond the traditional subject-object hierarchy. It is existential in the sense that it seeks to delete the traditional narrative.
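To make the idea slightly more concrete, here is a minimal sketch of what such a mapping could look like; the sensor fields, parameter names and thresholds are purely hypothetical and not taken from any actual implementation. The point is only that raw biometric signals are mapped onto open-ended visual qualities rather than classified into discrete moods.

```typescript
// Hypothetical sketch (not from the paper): mapping biometric readings to
// open-ended visual parameters instead of classifying them into moods.
interface BiometricSample {
  heartRateBpm: number;      // e.g. from a pulse sensor
  breathingRateBpm: number;  // e.g. from a respiration belt
  gsrMicroSiemens: number;   // galvanic skin response
}

interface VisualParams {
  pulseSpeed: number;    // how fast the imagery "breathes"
  colourWarmth: number;  // 0 = cool, 1 = warm
  turbulence: number;    // how agitated the particle field is
}

const clamp01 = (x: number): number => Math.min(1, Math.max(0, x));

// Map raw signals onto visual qualities the user is free to interpret.
function toVisualParams(s: BiometricSample): VisualParams {
  return {
    pulseSpeed: s.breathingRateBpm / 60,               // one visual "breath" per real breath
    colourWarmth: clamp01((s.heartRateBpm - 50) / 70), // warmer as heart rate rises
    turbulence: clamp01(s.gsrMicroSiemens / 20),       // more turbulent with higher arousal
  };
}

// A calm sample yields slow, cool, smooth imagery.
console.log(toVisualParams({ heartRateBpm: 58, breathingRateBpm: 8, gsrMicroSiemens: 3 }));
```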


The Virtual Freud

Quite a few of us would save a coin or two by being able to be our own psychologist. In a very concrete sense, this is the topic of today’s entry — how Virtual Reality (VR) can allow us another perspective on ourselves, and how this may better our mental health. At Matrise, we have previously discussed how VR can benefit anxiety sufferers through virtual reality exposure therapy. We have also discussed how the medium can facilitate Mindfulness meditation. In this entry, however, we will discuss a VR application that lets you have a conversation with Dr. Sigmund Freud. Oh, but there’s a twist!

Sigmund Freud, oil on linen. Mathieu Laca (2015).

In 2015, Sofia Adelaide Osimo, Rodrigo Pizarro, Bernhard Spanlang and Mel Slater published a paper called “Conversations between self and self as Sigmund Freud — A virtual body ownership paradigm for self-counseling“. The paper discusses an application where you sit in a chair facing Dr. Sigmund Freud. Upon entering the virtual environment, you do not float in empty space as one often does in VR — rather, you notice you have a virtual body that responds to your movements. This may lead you to identify the virtual body as your own, a magical feature commonly referred to as Virtual Embodiment. We have written extensively on this subject in a previous entry — but put shortly, the effect, apart from being very interesting in itself, has many practical applications. Self-identification with a virtual body can be exploited to, for instance, reduce implicit racial bias and make offenders of domestic violence better at noticing fear in victims.

Self as Other

Seeing outside into the inner

When you sit in your new virtual body, facing Sigmund Freud, you are asked to tell him about a problem. Sometime after you have emptied your heart, the virtual environment fades to black, before you once again are placed in a body, but this time on the other side of the room. You are now Dr. Sigmund Freud, and your patient, who looks remarkably like you, starts talking. You hear a recording of what you said just minutes ago, but you get to view your statement in a ‘new dress’: a 3D model of yourself is saying it, while you are virtually embodied elsewhere.

As humans, we know ourselves inside-out (or at least we believe we do). This may lead us to be more critical towards ourselves than towards others, as we compare our worst to others’ best, our shame to their facade. We know all our terrible, dirty secrets, and talking to ourselves we do not have to adhere to any sort of social norms, or even any general courtesy for that matter. This may lead to our inner voice becoming quite … crude. If we could focus on our own problems in the form of the problems of others, it might be easier to be more loving towards ourselves, by utilizing the love we usually give to others. The technology can have remarkable results in affecting our selves.

In their paper abstract, Osimo et al. write:

“…this form of embodied perspective taking can lead to sufficient detachment from habitual ways of thinking about personal problems, so as to improve the outcome, and demonstrates the power of virtual body ownership to affect cognitive changes”

Internal as External

This detachment from the habitual may be very beneficial, perhaps especially in terms of Self and Identity. We have discussed this previously in our entry called “Inner as Outer: Projecting Mental States as Immersive Virtual Reality“. Apart from the philosophical buildup of the entry, the article discusses an application that, to a certain extent, allows you to view your inner states (measured through pulse and breath) as your encompassing external reality. In our entry on the use of VR in floatation tanks, we also discuss the extreme potential of this — the possibility, under sensory deprivation, of being stimulated only by impressions based on your inner phenomena, resulting in an experience where there is no separation between the inner and the outer, thus challenging the subject-object dualism that shapes our everyday experience.

Do you have any ideas about this? Feel free to comment below.

The Capture of Reality

When creating experiences for Immersive Virtual Reality, there are essentially two approaches. The first of these is manual construction through Computer-Generated Imagery (CGI), and is how most games and VR experiences are made. The second approach is far more automatic and attempts to ‘capture reality’ instead of actively generating it. It is this approach that we will discuss in this entry. In addition to presenting the technicalities of the methods of capture, we will also discuss their limitations, and provide an innovative example of how these can be solved in the future, drawn from a student project at the University of Bergen.

An early 360° camera — horizontally at least — probably the first with a synchronised shutter.

360° Video

In a previous article on Virtual Reality Journalism, we discussed how 360° 3D cameras can be used to present a user with an immersive experience. This approach has several unique benefits. First of all, it is far less time-consuming to capture and re-use already existing physical environments than to spend time creating them through 3D modelling. This is perhaps especially the case when the environment involves human actors, as it is easier to avoid the uncanny valley effect and simultaneously maintain high standards of realism when using image-capture equipment than when creating it with 3D animation.

How does it work?

360° cameras usually comprise two or more (ultra-)wide-angle lenses. In the case of cameras with just two lenses, such as the Gear 360 or Ricoh Theta V, each of these lenses has to be able to capture 180° horizontally and vertically. The recordings from these lenses, when raw straight from the camera, are separate — and need to be stitched together with software into (for instance) an equirectangular view to compose a spherical view of 360° (see Illustration 2). Illustration 1 shows how the equirectangular format works, in the format of a world map, perhaps our most relatable example of spherical / global shapes presented in the format of rectangles.

Illustration 1: A relatable example of the equirectangular format. The furthest point west is close to the furthest point east, and as such we deal with a ‘sphere’, or more rightly a globe, that is stretched out to a rectangle. The closer we get to the poles, such as Antarctica, the more the image is stretched, as the circles of latitude are smaller near the poles.
Illustration 2: In this equirectangular photo, captured with a Ricoh Theta V, we see the same effect as in Illustration 1. My hands, which enclose the bottom of the camera, are given the same treatment as Antarctica in the map. The stairs, which appear to be circular, are in fact straight, but their bending by the lenses is especially clear when viewing the image ‘equirectangularly’.

When an equirectangular image is viewed through an HMD or a smartphone, the software selects only about 110° of the 360° image, relying on the sensors in the HMD or phone to decide which part of the image to present.
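As a rough sketch of the idea (not any particular player’s implementation), the mapping from a view direction to a position in the equirectangular image can be expressed as follows; an HMD viewer performs a lookup like this for every pixel of the roughly 110° viewport, using the headset’s orientation sensors for yaw and pitch.

```typescript
// Sketch of the lookup behind equirectangular viewing: a view direction
// (yaw, pitch) maps to normalised (u, v) coordinates in the equirectangular
// image. A viewer does this for every pixel of the ~110° viewport, with
// yaw and pitch taken from the headset's orientation sensors.
function directionToEquirect(yawDeg: number, pitchDeg: number): { u: number; v: number } {
  const yaw = ((yawDeg % 360) + 360) % 360;             // wrap to 0..360
  const pitch = Math.max(-90, Math.min(90, pitchDeg));  // clamp to -90..90
  return {
    u: yaw / 360,          // horizontal position in the image, 0..1
    v: (90 - pitch) / 180, // vertical position, 0 at the top ("north pole")
  };
}

console.log(directionToEquirect(180, 0));  // { u: 0.5, v: 0.5 }: centre of the image
console.log(directionToEquirect(180, 90)); // { u: 0.5, v: 0 }: the stretched top row
```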

3D Images

Although regular 360° cameras (such as the Gear 360 or Ricoh Theta V) to a large extent cover the world as we see it in all its 360°, their images are still monoscopic. Essentially, this means that the same image is presented to each eye when viewed in an HMD, and this is not the way we ordinarily see reality. As our eyes are separated by roughly six centimetres, each receives a visual feed that varies slightly in its capture of reality. It is this which enables us to perceive the depth of the world, that is, when our eyes are not fooled by illusions exploiting this effect, such as VR itself. We discuss this in more detail in our entry on the History of VR, in which we discuss the invention of the Stereoscope, but a small introduction will also be given in this entry. Essentially, 3D 360° cameras capture depth in the same way human beings do, by separating the lenses similarly to the human eyes.

Such cameras are, however, more cumbersome and costly to produce, and to capture stereoscopic images one needs to double the minimum number of lenses — leading to a minimum of four lenses: one for each eye for each 180° of capture. Unlike the 4K 360° monoscopic cameras available rather cheaply on the commercial market (from $200 and up), stereoscopic cameras have not yet entered the market at very reasonable prices. There is hope, however, and I can personally recommend the Vuze+, a 360° 3D camera that delivers 4K resolution per eye and comes with well-designed accompanying stitching and editing software. The price is still a bit stiff for most non-professional use, at $1200, but it brings hope that such cameras can soon be more affordable. We have used the Vuze+ camera in a research project at the University of Bergen, with good results. It is comparable in quality to a Ricoh Theta V — except that it delivers stereoscopic images rather than monoscopic ones.
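As an illustration of the stereoscopic principle, the following sketch places one virtual “lens” for each eye, half an interpupillary distance to either side of the head; the numbers are illustrative rather than any camera’s specification.

```typescript
// Illustrative sketch of the stereoscopic principle: two viewpoints separated
// roughly like human eyes, so each eye receives a slightly different image.
interface Vec3 { x: number; y: number; z: number; }

const IPD_METRES = 0.063; // a typical interpupillary distance, about 6.3 cm

// Place one "lens" for each eye, half the IPD to either side of the head.
// A 3D 360° camera does the same with physical lens pairs.
function eyePositions(head: Vec3, rightDir: Vec3): { left: Vec3; right: Vec3 } {
  const h = IPD_METRES / 2;
  return {
    left:  { x: head.x - rightDir.x * h, y: head.y - rightDir.y * h, z: head.z - rightDir.z * h },
    right: { x: head.x + rightDir.x * h, y: head.y + rightDir.y * h, z: head.z + rightDir.z * h },
  };
}

console.log(eyePositions({ x: 0, y: 1.7, z: 0 }, { x: 1, y: 0, z: 0 }));
```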

Regarding Resolution

A resolution of 4K per eye sounds great — and yet many are disappointed when they view the recordings of a camera such as the Gear 360, Ricoh Theta, or the Vuze+. They may recall the images on their 4K TV as incredibly sharp, and yet their recorded videos appear somewhat blurry and pixelated. The answer to why this is the case is quite simple. The 360° images do indeed have a 4K resolution; however, we are unable to view all the pixels at a time, as they are stretched out on a sphere. To keep matters simple, let’s say that your Head-Mounted Display has a Field of View of 90° (although most have 110°). In this case, just 1/4 of the 4K image is being seen at any given time. Thus, we will have to divide the pixel count by four. This is somewhat simplified because of stretching, but it should be enough to get the point. To get an effective resolution of 4K, or something akin to the 3K that the HTC Vive Pro and Samsung Odyssey(+) can afford, one would need far higher camera resolutions.
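A back-of-the-envelope calculation makes the point concrete; the numbers below are illustrative, assuming a 3840-pixel-wide equirectangular video.

```typescript
// Back-of-the-envelope version of the point above: how many of an
// equirectangular video's horizontal pixels fall inside the headset's
// field of view at any moment. Numbers are illustrative.
function effectiveHorizontalPixels(videoWidthPx: number, fovDeg: number): number {
  return Math.round(videoWidthPx * (fovDeg / 360));
}

const fourKWidth = 3840; // a typical "4K" equirectangular width

console.log(effectiveHorizontalPixels(fourKWidth, 90));  // 960: a quarter of the width
console.log(effectiveHorizontalPixels(fourKWidth, 110)); // 1173: still well under a third
```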

Another Step in Fidelity: Volumetric Video

At first thought, it may be hard to imagine how we can achieve more detail in immersive 360° 3D recordings except by increasing the resolution. As we briefly commented, however, stereoscopy in 3D movies at the cinema, or in 360° 3D recordings, merely provides an illusion of depth — not actual depth. The same goes for our eyes: although they mostly perceive depth correctly, they are easily fooled. 360° 3D cameras are an example of this; they merely fool our eyes. Although there seems to be depth, we cannot really move within the image, as there is no actual depth to it. Here, volumetric video acts differently, and affords positional interaction. Volumetric video places the recorded imagery in a 3D (x, y, z) space, in addition to delivering stereoscopy so that we can perceive it. Volumetric video is unfortunately very hard to create while still retaining high quality, and plug-and-play solutions still seem far off. To get an idea of how volumetric video works, we recommend looking into the concepts of photogrammetry — and perhaps even creating a 3D model yourself, using images captured with your smartphone. This YouTube tutorial shows you how to do this in Agisoft Photoscan Pro, which has a free trial available.
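A toy example may clarify why volumetric content affords positional interaction while a 360° recording does not: volumetric points have (x, y, z) positions, so moving the viewer changes the direction in which a point appears (parallax), whereas an equirectangular frame is only a function of viewing direction. The sketch below is conceptual, not a real volumetric format.

```typescript
// Toy illustration of positional interaction: a volumetric point has an
// (x, y, z) position, so the direction in which it appears depends on where
// the viewer stands (parallax). An equirectangular frame, by contrast, returns
// the same pixels regardless of viewer position.
interface Point3 { x: number; y: number; z: number; }

function apparentDirection(point: Point3, viewer: Point3): Point3 {
  const d = { x: point.x - viewer.x, y: point.y - viewer.y, z: point.z - viewer.z };
  const len = Math.hypot(d.x, d.y, d.z) || 1;
  return { x: d.x / len, y: d.y / len, z: d.z / len };
}

const statue: Point3 = { x: 2, y: 0, z: 0 };

// Step half a metre to the side and the statue appears in a different direction.
console.log(apparentDirection(statue, { x: 0, y: 0, z: 0 }));
console.log(apparentDirection(statue, { x: 0, y: 0, z: 0.5 }));
```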

Limitations

Developed in an undergraduate course at the University of Bergen, the short 360° movie “Schizophrenia“ experimented with interactive 360° video.

Despite these great innovations in the capture of reality, CGI has some benefits that neither 360° 3D nor volumetric video can really achieve. The most important of these is interactivity. As 360° videos are linear (that is, they have a predetermined beginning and end), the user cannot really affect what happens in the video — except by choosing which part of the video to look at.

In our course in VR Journalism at the University of Bergen, where I taught students VR programming, 360° video and photogrammetry, we faced this exact limitation. A group that worked on conveying the experience of the reality-shattering disorder of schizophrenia wanted hallucinations to occur when the user looked at certain areas. The students solved this by placing transparent gifs, edited from the real footage, over the video in A-Frame, and adding gaze event listeners to activate the playing of the gif. The results were extraordinary, and could well provide a means of simpler interaction on top of 360° videos. The experience, whose voices are in Norwegian, can be viewed here (a WebVR-capable browser such as Chrome is necessary).
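A minimal sketch of the gaze-trigger approach could look like the following; this is not the students’ actual code, and the element ids and dwell times are hypothetical. It assumes an A-Frame scene with a fuse-based cursor, an invisible hotspot entity placed over the relevant part of the 360° video, and an overlay entity textured with the animated image.

```typescript
// Hypothetical sketch of gaze-triggered overlays in A-Frame (not the students'
// actual code). Assumes a scene with <a-cursor fuse="true">, an invisible
// hotspot entity over the relevant part of the 360° video, and an overlay
// entity textured with the animated image; the ids below are made up.
const hotspot = document.querySelector('#hallucination-hotspot');
const overlay = document.querySelector('#hallucination-overlay');

if (hotspot && overlay) {
  // A-Frame's fuse cursor emits a 'click' event after the gaze has dwelt on
  // the entity long enough.
  hotspot.addEventListener('click', () => {
    overlay.setAttribute('visible', 'true');  // reveal the hallucination layer
    // Hide it again after a few seconds so the plain footage returns.
    setTimeout(() => overlay.setAttribute('visible', 'false'), 5000);
  });
}
```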

Sensory Deprivation — Floating in Virtual Reality

If we look to our glossary, we see Presence within Virtual Reality (VR) defined as the degree to which the subject feels present in the virtual world. What is interesting to note is that this naturally has to be viewed relative to the degree to which the subject feels present in the physical world — as we usually receive information from both our physical and our virtual environments.

There can thus be two separate approaches to designing for presence in virtual reality environments: one is to provide sensory stimuli from the virtual environment, and the other is to block sensory stimuli from the physical environment. Both approaches work towards the same goal of immersion — the encapsulation of the user in the VE. Slater and Wilbur (1997) recognise this in their definition of Immersion, which is closely related to the notion of Presence. They define immersion in terms of four qualities the system can afford, the first of which is called inclusiveness. Inclusiveness they define as the extent to which physical reality is shut out.

Obviously, the principles of adding and removing sensory experience go hand in hand; by putting on a Head-Mounted Display you are blocking the physical impressions and replacing them with virtual impressions, all the while shielding against incoming light from the surroundings. Blocking light, however, is not the only way to deprive the senses of information from the physical environment. In this entry, we will discuss how we can maximize the inclusiveness of the immersion by achieving sensory deprivation in floatation tanks. Floating in Virtual Reality!

Floatation Chambers

Floatation chambers, or sensory deprivation tanks, are pools of water with copious amounts of Epsom salt (≈600 kg). The tanks are sealed against any incoming light and sound, and the air and water temperatures are equal to that of your body. When you lie down, you will feel how the salt makes you float even though the pool is very shallow. As you lie there, you notice how the ripples you created when lying down slowly subside as you sink into weightlessness. After a while, because the air and water temperatures are the same as that of your body, you can no longer pinpoint where the water ends and the air around you begins. In fact, it gets hard to distinguish anything from anything else, including your body from the air and water. There is really nothing that is easy to grasp as isolated, save perhaps your breath. And as the minutes go by, with total physical relaxation and hardly any sensory impressions at all, things start to change.

“Alone With Your Thoughts”, Illustration by Cole Ott

The most significant, explicit change one may notice in the tank is that after a while your bodily self-consciousness is not what it used to be. Your mental model of where your body is in relation to the world around you starts to become blurred. Normally reinforced by tactile stimuli of air and water (of varying temperatures), and visual and auditory stimuli from the environment, your body model is now lacking information on which to build itself. Your sense of spaciousness has also changed — that is, the feeling of your position as defined relative to, say, the walls, mountains and sky has disappeared. You now really experience nothing around you, but neither do you experience any edges to this lack of information in your surroundings. You may get the feeling of floating in empty space — but where are you in all of this? What, in this stream of conscious experience, is matter and what is mind?

Inner vs Outer

In our entry — ‘Inner as Outer: Projecting Mental States as External Reality‘ — we discussed the potential of using VR for meditation purposes in experimental ways. In the introduction to that entry, we discussed our feeling of Self as a duality of Inner and Outer, a duality of which our everyday experience usually consists. We discussed how technology may have the power to transform our consciousness away from this traditional subject-object hierarchy and into a non-dual one, where the Inner is seen as the Outer, and the Outer as Inner. In this entry we are building further on these ideas. Similarly to visualising inner states in VR through biometrics, using VR in floatation tanks might provide illusory experiences where the conscious experience is significantly altered.

One other entry relevant to our experiments with VR in floatation tanks should be mentioned before we go on: the entry on Virtual Embodiment. In that entry, we discuss the great potential of VR to hack our consciousness; why it is possible, and what it can be used for. The research is highly relevant for floatation in VR, as both floatation tanks and VR alter our self-model by changing the sensory impressions necessary to maintain it.

Research on Virtual Embodiment in Floatation Tanks

Matrise partnered up with Bergen Flyt, a local company offering floatation therapy in the heart of Bergen. We used a Samsung Gear VR with a Samsung S8 phone. We did not use an HTC Vive (Pro), as exposing its cable to water would be too risky. Also, no room tracking, or even much head orientation, was needed, and in terms of resolution the HMD has quite a high ppi. We chose to first try out some abstract visualisations through the application “Fractal Lounge”, which shows varying psychedelic visuals while floating through space.

My Experience

“After I had showered, I put on the Gear VR headset, started the application, and slowly entered the floatation pool. I held my hands towards the wall, as I did not see anything other than the visuals in the headset. When I was inside, I closed the glass door and slowly lowered myself into the water — back first. It took a few seconds before I dared to lower my head all the way down, but very soon I was totally relaxed. As expected, the electronics in the display were kept well above water, due to the intense amount of salt in the water …”

The kind of visualisations provided by Fractal Lounge, the application that was tried in the floatation tank.

“The visualisation pulsated, floated, drifted along — and often totally changed in colours and shapes. It took probably about ten minutes before my feeling of body totally vanished, to the degree that there was a larger gap than usual between wanting to move the body and actually being able to move it. I felt as if I was perceiving a great drama and scene, and I got engaged in the forms and ways of the visualisations, sometimes quite invested in them, as they felt close and reality-defining for me. After about twenty minutes, I felt as if I was drifting through space at high speed, because of the steady movement of stars away from me. At the same time, there was no sound, which made the quick travel feel peaceful and smooth. As with normal floating, about every ten minutes there is a sort of reality-check moment where you remember you are in the tank and contemplate how weird it is. This also happened in VR, and was … equally weird.”

Reflections and Future Work

My first experiment with floatation in VR lasted for about 45 minutes. Sometimes, unfortunately, the VR headset glided slightly off my face, and I had to reposition it with my wet, salty fingers. After this happened about three times, I had to leave the tank in order to save the equipment.

Thank God that we have floatation pools instead of this creepy stuff.

My first experience of floating in virtual reality was very promising. The largest surprise was the feeling of movement through space at high speed. The largest frustration was the lack of any sort of interaction with one’s surroundings at all, except the possibility of opening and closing one’s eyes. A great experiment would be to use eye-tracking technology as a way of navigating the vast, abstract psychedelic spaces. If one travelled towards where one looked, one could even be interactive while lying still in the floatation tank. This could also possibly have curious effects on which parts (perhaps the eyes) we identify with our selves. Perhaps the placement of our self could be altered by changing the agency for transportation.

Matrise will continue the cooperation with Bergen Flyt, and will both try out and develop different applications. Our plan is to measure the feeling of presence, self-identification and consciousness while in the tank.

 


Virtual Embodiment

The most praised ability of Virtual Reality is its capability to immerse the user in a Virtual Environment — to the degree that the subject feels present in it. The magic is to be fooled by the system so that one feels present where one does not actually physically reside. This effect can, however, turn even more magical. A deeper step into the effects of technological immersion is found in the concept of Virtual Embodiment. If a subject is embodied virtually, not only is the virtual environment accepted as such; the subject also identifies with a virtual body or avatar inside the virtual environment. This differs from realizing which character you control in a game — within Virtual Embodiment it is the same processes that make you identify with your real body that make you identify with a virtual one. This is a key point, as it is why research into virtual embodiment is important.

Peeling layers of the onion: VR can be a tool to discover who we are, through investigation of what and how we identify with our bodies. Illustration: “Mask of Day by Day” by Paulo Zerbato.

Hacking and Experimenting with Consciousness

What is fascinating about both of these possibilities of illusion is how, and that, they are possible at all. Knowledge of how to achieve such immersion is obviously relevant for all VR developers, but the knowledge that can be obtained by researching these phenomena goes far beyond knowing how to apply it in VR technology. By creating experiments in VR, we can generate, and investigate, phenomena of the mind under various experimental conditions. Exploring Virtual Embodiment, for instance, can give us a better understanding of our self-consciousness and the relationship between body and mind. Because of this wider span, research on Virtual Embodiment attracts neuroscience researchers, psychologists, information scientists and philosophers alike.

The Rubber Hand Illusion

The Rubber Hand Illusion (RHI) is an excellent example of the kind of ‘brain hacks’ that can be achieved by sensory manipulation. The illusion, as illustrated below, is a perfectly simple experiment that does not even require the use of VR technology to perform. The RHI was famously studied by Ehrsson, Spence & Passingham (2004) and has been an ingenious way to illustrate how we identify with our bodies. More importantly for this entry, the results of the experiment have inspired further research on Virtual Embodiment.

Illustration from Thomas Metzinger’s book “The Ego Tunnel: The Science of The Mind and The Myth of the Self”

In the RHI, the hand of the subject is replaced by a rubber hand, while the normal hand is blocked from sight by a separating wall. When the subject is sitting as such, a researcher will stroke each hand, both the rubber and the physical hand, simultaneously. Now, the question is what happens when experiencing the sensory impression of stroking, all the while seeing a corresponding stroke on the rubber hand?

Put very simply, the brain makes a ‘reasonable guess’ that this hand is indeed the physical hand attached to your body. You feel that the rubber hand is yours, with nerve-endings and all — and you couple your physical feelings to the vision of the hand. This means that in your subjective experience, the rubber hand is the hand that has the sensation. Ehrsson et al. write that their results suggested that “multisensory integration in the premotor cortex provides a mechanism for bodily self-attribution”. When our brains receive information from two differing sensory inputs (sight + touch), these are coupled: the brain couples the stroking sensation with the imagery of a nearby hand being stroked, and this is enough for the brain to attribute the hand to its self, to acknowledge it as its own.

This simple experiment shares a lot of principles with the concept of Virtual Embodiment, and has inspired research in the field that we will present in this entry.

Some people experience out-of-body experiences (OBEs) at the onset of sleep or upon waking up. Often they may feel that they are floating above their bodies. VR may help to study such states of consciousness by systematically inducing them.

Virtual Body Illusion

In a later experiment by Lenggenhager et al. (2007), not only the hands of the subjects but their whole bodies were replaced with virtual representations. Moreover, in the experiment they present, the bodies are seen from behind. In effect, they were simulating out-of-body experiences, with very interesting results.

The experiment was conducted as follows: the subjects wore a Head-Mounted Display which projected imagery from a camera located behind them. As such, the subjects could see a representation of their bodies “live”, but from behind. Of course, this deviates slightly from how we normally experience life. Although the subjects saw their bodies responding and performing actions in real time as under normal conditions, there is a logical dissonance due to the mismatch between the location of the subjects’ eyes in the virtual environment and what these eyes see. Effectively, the user is looking into a pair of “portal” binoculars (the HMD), which display the light from, if not another dimension, then at least a few feet away. And this will be part of the point.

What is interesting about this experiment is not simply that the users feel present where they do not physically reside, but that the distance is only a few feet off. The users feel present right outside of their bodies. The situation is similar, the body and the environment are there, but everything is a bit off. What is interesting to investigate, then, is how the body adapts to this. Will it accept that it now controls its body from a third-person perspective, similarly to how Stratton’s subjects got used to seeing the world upside down?

What they studied was basically whether this change of perspective had an impact on where the users felt embodied. To investigate this, the researchers stroked the subjects as in the Rubber Hand Illusion, except on their backs — so that the stroking was visible to them through the camera. The question is then where this physical feeling will be attributed — how will the phenomena of the subjective experience present themselves to the subjects?

Out-of-body experiences can be achieved virtually by using sensory impressions from other locations, for instance five meters behind you, as in the experiment by Ehrsson (2007). You can then effectively look at yourself from the outside.

First of all, to be clear on this: the sensory data of being stroked will initially be provided by the nerves in the physical shoulder of the user. The problem for the brain, however, is that the shoulder is out of sight — blocked by the Head-Mounted Display. There is, however, the visual impression of a shoulder on a person standing in front, being scratched in exactly the same way. Although the nerve endings definitely feel the stroking, where this feeling will be placed in our subjective experience is not the responsibility of the shoulder, but rather of the brain. And, as the placement of the physical feeling in the bodily self-consciousness is largely dependent on vision for coordinates, what will happen? How will the brain fix this sensory discord?

In this beautifully written article in The New Yorker, its author Rothman describes how one of the co-authors of the research paper, Thomas Metzinger, himself experienced the experimental conditions:

“Metzinger could feel the stroking, but the body to which it was happening seemed to be situated in front of him. He felt a strange sensation, as though he were drifting in space, or being stretched between the two bodies. He wanted to jump entirely into the body before him, but couldn’t. He seemed marooned outside of himself. It wasn’t quite an out-of-body experience, but it was proof that, using computer technology, the self-model could easily be manipulated. A new area of research had been created: virtual embodiment.”

“Are We Already Living in Virtual Reality?” — The New Yorker has a brilliant long read on Virtual Embodiment that features interviews with VR and consciousness researchers Prof. Mel Slater and Prof. Thomas Metzinger.

Phantom Pain

Another curious potential effect of Virtual Embodiment is the possibility of phantom sensory impressions. Handling virtual objects while being embodied, for instance, may convince your body to expect pain or touch — and so this is, somehow, actively generated. Because of this, VR may be a way to study how phantom pain is created, and further how it can be alleviated. For instance, several studies show how VR can embody a subject missing a leg in a body with two legs, similarly to traditional mirror-therapy treatment, which is effective in reducing phantom pain. Again, what may be most interesting here is the possibility of systematically creating the phenomenon and studying it afterwards. For instance, as Metzinger is quoted in The New Yorker’s article, it may be supposed that phantom pain is created by a body model not corresponding to physical reality. This will be the case for phantom pain in VR: it is not based on physical reality, you are only relating to a virtual reality instead. Similarly, those with real phantom pain may also be relating to a certain kind of “virtual reality”, but rather one in the form of their skewed narratives — maintained by their minds instead of a computer.

That the narrative, worldview and consciousness that our brains experience and generate are often not the best match with reality is nothing new. As for Matrise, these concepts remind us of the conclusion of our three-part series working towards a metaphysical standpoint on VR, in which we discussed VR as exemplifying our mind’s abstracting tendencies. These entries can be read at Matrise, and were called: 1) On Mediums of Abstraction and Transparency, 2) Heidegger’s Virtual Reality, and 3) The Mind as Medium.

Virtual Embodiment for Social Good

Now that we have discussed the concept of Virtual Embodiment, it may be natural to discuss what this knowledge can be used for. As discussed already, creating experiments in VR that hack our self-models may provide useful knowledge on the structure of our self-consciousness. Apart from this general knowledge, some findings may also have practical uses in applied VR for specific scenarios.

Racial Bias

A very exciting paper that describes work utilizing virtual embodiment is one by Banakou, Hanumanthu and Slater. In the project, they embodied White people in Black bodies, and found that this significantly reduced their implicit racial bias! The article can be found and read in its entirety here (abstract available for all).

Domestic Violence

Another interesting project, by Seinfeld et al., is one in which male offenders of domestic violence became embodied in the role of a female victim in a virtual scenario. At the start of the experiment, the male subject is familiarized with his new, female, virtual body and the new virtual environment. When the body ownership illusion, or virtual embodiment, has been achieved, a virtual male enters the room and becomes verbally abusive. All this time, the subject can see his own female body reflected in a mirror, with its movements corresponding to his own. After a while, the virtual male starts to physically throw things around and appears increasingly violent. Eventually it escalates, and he moves into what feels like the subject’s personal space, appearing threatening.

They write:

“Our results revealed that offenders have a significantly lower ability to recognize fear in female faces compared to controls, with a bias towards classifying fearful faces as happy. After being embodied in a female victim, offenders improved their ability to recognize fearful female faces and reduced their bias towards recognizing fearful faces as happy”

The article can be read in its entirety at ResearchGate.

Staying Updated in the field of Virtual Embodiment

Research on Virtual Embodiment is happening continuously. To stay updated on this area of VR research, I enjoy following Mel Slater, Mavi Sanchez-Vives and Thomas Metzinger on Twitter. Last but not least, I would stay updated on Virtual Bodyworks on Twitter, of which both Sanchez-Vives and Slater are co-founders.


N.B: This entry lies at the centre of Matrise’s interests, and we are planning to write several entries taking this topic further in philosophical directions. Have any ideas or want to contribute? Please contact us.


 

 


Apple, Mac and Virtual Reality

N.B: This blog entry is in Matrise’s category “Lights”, which holds more technical, often smaller posts that concern current and recent events. These entries stand out from other entries at Matrise, which are often more conceptual, ideal and philosophical. Lights entries need not be very related to VR, though they will always be related to computer science. You can read about Matrise here.


Apple has never created computers capable of much graphical power. Although Macs are often preferred by those working with media applications for video and photo editing, these kinds of operations need a good CPU more than a good GPU. This means that the Mac has never been a good candidate for gamers, who require heavy graphical power to run their games. Unfortunately, this bitter ripple effect of the Mac’s weak GPUs also extends to VR support. As the Mac has not really been a candidate for good gaming, Apple has been left out of the loop by HTC Vive, Oculus, etc., simply because none of their machines would meet the minimum requirements for running VR.

So although the choice not to try to stuff a GTX 1080 Ti into a MacBook has secured its ability to look pretty and slim, it has been disappointing for developers and VR enthusiasts with a fondness for Mac OS X.

External GPUs for Mac

Last year, Apple revealed that their new operating system, macOS High Sierra, would take steps to support VR on the Mac. As part of this, Steam VR for Mac was released — and support for external Graphics Processing Units (eGPUs) was added as well. Macs have unfortunately always had weak GPUs relative to their PC equivalents, which has limited their use for gaming and VR purposes.

Thunderbolt

The latest MacBook Pro series, for instance, has four Thunderbolt 3 ports. Thunderbolt 3 supports transfer speeds of up to 40 Gbps, which, while lower than an internal PCIe connection, is high enough to drive an external graphics card well. This has opened the possibility of using the slim, pretty laptop for lectures, meetings or writing at home — while being able to turn the same laptop into a graphical beast by plugging in the eGPU. You bring the light parts, and leave the heavy ones.

The Sonnet eGFX Breakaway Box, for connecting a graphics card externally via a Thunderbolt 3 port. In Matrise’s eGPU, we currently host a Sapphire AMD Radeon RX 580. This does a good job of supporting the HTC Vive with a MacBook Pro 15.

In the fall of 2018, upon the introduction of their new eGPU support, Apple partnered with Sonnet to sell eGPU cards with a Sonnet cooling chassis from the Apple Store. As the support for eGPUs was still in beta, Apple only sold the eGPUs to registered Apple developers. Matrise bought one, obviously, as this opened up VR development and testing on the Mac.

In the beginning (the beta stages), the support for this was decent, but slightly annoying. Every time you plugged in the eGPU you had to log in and out of your account — and sometimes there was trouble getting the screens connected. For the last few months, however, the support has felt more solid, with an icon in the menu bar that can be used to eject the eGPU. You no longer have to log out every time to connect it, which simplifies the workflow for those who use it to power, say, one 4K screen and another WQHD display at their workstation.

The Office. Apart from VR development, the eGPU is useful for giving graphical power to external monitors, while also providing power to the laptop. For this setup of two higher-than-HD screens, only one Thunderbolt cable is used.

Apple and VR

Although Mac users now have the possibilities that come with increased graphical power, this does not mean that VR and Apple are a great match yet. Apple has, however, lately opened its eyes to the fact that it needs to support developers of this new medium. Last month they introduced their new macOS “Mojave”, whose “Dark Mode” we discussed in our previous “Lights” entry. What is perhaps more important, however, is that macOS Mojave will have plug-and-play support for the new HTC Vive Pro (which Mac users can now actually use thanks to the eGPU support). Matrise has ordered an HTC Vive Pro Kit, and will post a performance test using an eGPU in Mojave when it arrives.

The HTC Vive Pro is to receive plug-and-play support in the new macOS “Mojave”.

Although Apple’s Macs now have the technical solutions that make it possible to create and view VR in the same way that normal Windows PCs do, this does not mean that the Mac stands equal to the task. The legacy of the long years in which Macs could not really play any VR games still stands, and there are therefore very few games that support the Mac. Hopefully this will change in the future, now that Apple at least plans a road ahead that is friendlier rather than hostile towards these technologies.

Modular Computing
What is interesting in the way these eGPUs work is how this kind of modular computing may be the future for laptops. Stationary computer parts have the benefit that they can be as big as they need to be, which avoids the cost of the labour of fitting these components into thin laptops. Scenarios could be imagined where it is normal to have a strong GPU, and/or even CPU, at home and at work, along with some monitors, to augment your computing once you are there — while always keeping the base parts (your laptop) in your bag to go. This workflow may remind us of the Nintendo Switch, which can change from console to portable by simply removing the necessary parts and thus “switching” modes.

What may be even more convenient than modular computing, we can admit, may be cloud computing. When web transfer speeds finally become good enough in the future, we could upload all our computing into a queue in the sky, to be performed by some quantum computer centres in a desert somewhere… Probably.


What do you think of Apple and VR? Could you imagine the modular computing scenario working in your everyday life? Please comment below.

Inner as Outer: Projecting Mental States as External Reality.

Introduction to Mysticism

Within Mysticism, the merging of Self and World — Inner and Outer — is seen as the utmost aim. Mysticism can be found within most of the world religions, such as Buddhism, Christianity, Hinduism and Islam — and its aim is often formulated as union with God. Depending on the religion, however, the degree to which Mysticism is the common way of practicing the religion varies. Although many religions have such contemplative practices, they are not always adopted by the religion’s followers at large.

When discussing «Union with God», it should be noted that the term «God» varies in its meaning between these religions. The contemplative practices often have significantly varying metaphysics, for instance Monotheistic (Christianity), Polytheistic (Hinduism), and relatively Atheistic or Agnostic (Buddhism). Be this as it may, their descriptions of the experience of this merging of Self and God are often strikingly similar. These states of enlightenment are often described as ecstatic, in which the conscious experience cannot be placed within our normal frames of language or understanding.

What also unites the different traditions is that such states of consciousness are usually worked towards through contemplative practices such as yoga, meditation or other disciplines of focus or conscious attention. Other techniques for achieving these ecstasies have been ascetic ones, such as fasting, waking, isolation — or other ways of stirring the Self to war.

The experience of seeing the Inner as Outer, and the Outer as Inner, is often described as the feeling that living itself is an experience of seeing and perceiving Oneself and/or God. Within this worldview, there is no Self relating to anything external.

Non-duality: synchronization of Inner & Outer

The concepts of merging Inner and Outer, or Self and God, can each be viewed either in very material or in spiritual terms. Although materiality and spirituality do not have to differ metaphysically, separating them gives us some communicative benefits — and Mysticism may be explained and spoken of from both these perspectives. Discussing the Inner as Outer purely «scientifically», if you will, makes sense in that all our perceptions of the Outer world are indeed created Inner, and as such, Reality will always be a synergy of Inner and Outer. We do not see, and have never seen, anything which we ourselves do not actively generate. As neuroscientist and consciousness researcher Anil Seth put it, “our brains are actively hallucinating our conscious reality”.

States where a subject experiences the Inner and Outer as ‘one’ are often referred to as «non-dual». When speaking of Inner and Outer, we tend to implicitly reinforce the Self and the World as a duality (when pitching a solution, we often have to pitch the problem first). By using the word «non-dual» instead of ‘one’, we can pinpoint the nuance that it is not a duality in separation, yet neither is it completely the same. Although it is non-dual, it is not all uniform or flat — least of all static!

Although we classify and divide our reality, fundamentally what we perceive is a stream of experience, which in every sense is simply “reality” before it is divided — and, again, actively created by us. This is not to say that there is no external reality or world — but it definitely is to say that all which is external is perceived first and foremost, and solely, internally. Experientially, externality has never been perceived except as a subcomponent of internality.

A vase, or two faces? Each defines the other, and neither exists without the other.

Experiencing and Sensing the Non-Dual

This causal explanation, however, leaves out the experiential aspects of non-duality. Although it may make sense on paper, it matters little to us, as we absolutely perceive the world as dual — as subjects relating to a World. Within Philosophy, this traditional way of adhering to and speaking of the world is referred to as the subject-object dichotomy. Although the degree to which we adhere to this way of thinking varies between different cultures and continents, it is nevertheless an essential part of the human experience which we share.

The material explanation differs from the spiritual in this sense: the spiritual concern is to experience the Inner as Outer, not to understand it cognitively. Towards that end, meditation practices such as Mindfulness and Yoga have existed to increase wellbeing by increasing the degree to which one feels in union with God — or, for those who do not fancy the term, the degree to which one has peace with oneself and the world.

Contemplative practices such as yoga and meditation have over the last fifty years become more popular in Western societies. Although they have been subject to a certain degree of metaphysical refinement in recent years, these methods are nevertheless largely old and traditional. The most common of these contemplative practices we see today is adapted from the Vipassana practice, commonly known as Mindfulness. These methods are now commonly used in psychological treatment of anxiety and depression, and research has in recent years started to uncover the benefits of learning to sit quietly with your mind and, well, deal with shit, or see it for what it is.

In the next section, we will discuss an approach utilizing Virtual Reality to aid in Mindfulness meditation — which can help to perceive the Inner as Outer.

A common belief is that the aim of contemplative practices is to empty the mind. In a sense, it can be said to be correct, in that meditation practices often seek to eradicate, dilute or cancel the self-referential narratives.

The effects of Mindfulness meditation

The essence of Mindfulness and similar contemplative practices lies in their manipulation of identity. We stated “the problem” of Mysticism as the gap between Self and Other — and for this separation to exist, we must necessarily have a relatively thoroughly defined sense of self. For most of us, this tends to be limited to the cognitive processes that constitute our mental narrative (the personalized voice in our heads, our formulated will, and how it appears to direct our actions and plan our lives). It is to a far lesser extent our bodies, although these also contribute to our self-consciousness.

Mindfulness is about being attentively present to one’s state of mind in each moment. When doing such focus exercises directed at the mind, and observing these mental processes closely, the idea of them as solid things starts to unravel. When we instead see them as thoughts, from a distance, they appear untangled to us, and we perceive our own existence as distinct from those thoughts.

Virtual Reality Biofeedback as Meditation aid

One of the great benefits of VR is its ability to project and represent data in the format of the reality encompassing us. Within the context of this entry, we could therefore say that VR can simulate what we perceive as the Outer. The question may then be asked: how can we project our Inner into this medium of the Outer?

Although I believe we will see more work on VR biofeedback within this domain in the future, in this entry we will focus on one research paper in particular to exemplify our case. At last year’s CHI conference, the world-leading conference on Human-Computer Interaction, Joan Sol Roo and his colleagues presented their work on Inner Garden: a mixed reality sandbox for mindfulness. The artifact is a physical sandbox, which the user can shape into a given terrain. The sandbox is visually augmented by a projector with colors and shapes, and physical changes to the sandbox will also alter the output of the projector, which delivers terrain information such as sea levels and green growth.

The sandbox is not just physical, however; by placing a physical avatar in the sandbox, you can enter the land you created in immersive VR. A 3D model of the land you created physically can be seen virtually, from the viewpoint of your placed avatar.

The sandbox, in which the heights of the sand have been turned into an island by the projector.

Attached, to measure your inner states, are both breathing and heart rate sensors, which are coupled to provide visual and auditory feedback. In this way, you can synchronize your breath to control the environment, such as the rhythm and breaking of the waves. The Inner Garden represents your inner state, and, by practising breathing techniques, the flora of your world will get greener and more animals will appear.
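The paper describes this coupling qualitatively; as a purely illustrative sketch (not taken from Inner Garden’s actual implementation), a biofeedback mapping of this kind could look something like the following. The function names, resting and stressed breathing rates, and the wave/flora parameters are my own assumptions:

```python
# A minimal, hypothetical sketch of the kind of biofeedback mapping Inner Garden
# describes: slow, regular breathing makes the virtual world calmer and greener.
# Sensor values and thresholds are illustrative assumptions, not from the paper.

from dataclasses import dataclass


@dataclass
class WorldState:
    wave_period_s: float   # seconds between waves breaking on the shore
    flora_density: float   # 0.0 (barren) .. 1.0 (lush)


def map_breath_to_world(breaths_per_minute: float,
                        resting_rate: float = 6.0,
                        stressed_rate: float = 20.0) -> WorldState:
    """Map a measured breathing rate to visual parameters of the scene."""
    # Normalise: 1.0 = fully calm (at or below resting rate), 0.0 = stressed.
    span = stressed_rate - resting_rate
    calmness = 1.0 - (breaths_per_minute - resting_rate) / span
    calmness = max(0.0, min(1.0, calmness))

    # Calmer breathing -> slower, longer waves and denser greenery.
    return WorldState(
        wave_period_s=4.0 + 6.0 * calmness,  # 4 s when stressed, 10 s when calm
        flora_density=calmness,
    )


if __name__ == "__main__":
    for bpm in (18, 12, 6):
        print(bpm, map_breath_to_world(bpm))
```

The point of the sketch is simply that the “Inner” signal (breath) becomes a continuous parameter of the “Outer” world, rather than a number on a dashboard.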

In this way, Inner Garden works as a great example of representing Inner phenomena as external Reality. It is conceptually very interesting, and hopefully one day we will also see empirical studies on similar artifacts.

You can read more about Inner Garden, which received an honorable mention at CHI’17, here.


What do you think? Do you have any ideas for VR applications using biofeedback?  Please comment below.


The History of Virtual Reality

In recent years, Virtual Reality (VR) technology has finally reached the masses. 2016 was called “The Year of VR” as several actors released their Head-Mounted Displays (HMDs) on the consumer market. While HTC, Oculus and PlayStation delivered high-quality HMDs that require external computers to run, the year also opened up for high-quality mobile VR. Both Google with their Daydream View and Samsung/Oculus with their GEAR VR have provided an easier step for consumers to enter the world of VR. These mobile VR solutions offer better inertial measurement units than the simpler Cardboard devices, and also feature simple controllers for interaction. We now see the market spreading out both in quality and accessibility: in 2018 we have seen both the coming of the HTC Vive Pro, a more expensive high-end HMD with increased resolution, and the Oculus Go, a reasonably-priced ($200) stand-alone 3DOF (3 Degrees of Freedom) HMD for those just starting out.

It is natural to wonder how all of this started. Why, for instance, did we not see much VR before 2016? When it now seems relatively easy for commercial actors to push out HMDs for as little as $200, why did it not happen sooner? Of course, we have had Oculus’ development kits since 2013 — but even this is very recent. When Google released their Cardboard (a simple HMD made out of cardboard and some lenses), it seemed incredible that VR could be attainable for the smartphone for only 50 cents. This, however, only points us toward how fascinatingly simple the underlying principles of VR technology actually are.

In this entry, we will trace the VR tech we see today back to its roots. We will go back about two hundred years, and work our way forward in leaps to the very recent innovative technologies.

Stereoscope

A drawing of the Lothian Stereoscope, released in 1895; one of many different models.

In 1838, Sir Charles Wheatstone developed what would be the first stereoscope. Even before the camera was invented, people were seeing (drawn) images with a 3D effect through stereoscopes. Stereoscopy, that is, the perceptual illusion of depth, is achieved by displaying a slightly different view of an image to each eye. Wheatstone achieved this by separating the two images with a piece of wood and directing the light from each image to the corresponding eye. While looking through the stereoscope, our brains perceive the two images as one, with the added 3D effect due to the differing views. This effect simply exploits how our eyes and brain work, combining the sensory data from each eye into one image. Most of us can, for instance, recall sometimes «seeing double», when our brains have not yet merged the two eyes’ differing impressions.
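To make the principle concrete, here is a minimal sketch, my own illustration rather than a description of any particular device, of how projecting the same 3D point for two eyes separated by the interpupillary distance yields a horizontal disparity, and how that disparity shrinks with distance. The constants and function names are assumptions chosen for readability:

```python
# A minimal sketch of the stereoscopic principle: project the same 3D point
# from two viewpoints separated by the interpupillary distance (IPD). The
# horizontal disparity between the two images is what the brain fuses into
# depth. Numbers are illustrative assumptions.

IPD = 0.064          # metres between the eyes (a typical adult average)
FOCAL_LENGTH = 0.05  # metres, simple pinhole-camera stand-in for a lens


def project(point, eye_x):
    """Project a 3D point (x, y, z) onto the image plane of an eye at (eye_x, 0, 0)."""
    x, y, z = point
    # Perspective projection relative to the eye position.
    u = FOCAL_LENGTH * (x - eye_x) / z
    v = FOCAL_LENGTH * y / z
    return u, v


def stereo_pair(point):
    left = project(point, -IPD / 2)
    right = project(point, +IPD / 2)
    disparity = left[0] - right[0]  # larger for near points, smaller for far ones
    return left, right, disparity


if __name__ == "__main__":
    near, far = (0.0, 0.0, 0.5), (0.0, 0.0, 10.0)
    print("near:", stereo_pair(near))
    print("far: ", stereo_pair(far))
```

Whether the two views are drawn by hand, photographed, or rendered to the two halves of a smartphone screen, the depth illusion rests on this same disparity.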

Since Wheatstone, different stereoscopes have been produced, all the way up to the Google Cardboard and other HMDs, which instead of drawn images, or later photographs, utilize a screen to deliver the imagery to the eyes. In fact, in the early 1900s, stereoscopes functioned as home entertainment devices, and «stereo cards» such as the image seen below could be purchased from photography companies.

Stereo card of St. Peter’s Church in the Vatican. Such cards, picturing tourist attractions all over the world, could be purchased and viewed at home in a stereoscope.

Stereoscopes and modern-day Virtual Reality HMDs share the essential feature of stereoscopic depth illusion (3D). Apart from that, however, a lot has obviously happened since 1838 which we now regard as essential for the feeling of presence and realism, and which makes the technology capable of simulating realities. The most important of these developments have been moving images, 3D environments, interaction, and 360 degrees of orientation. With the stereoscope, images were static in every sense.

Sensorama

In the mid-1950s, however, some people saw the opportunity to spice up their stereoscopes a bit. A bold attempt at enriching the experience was the Sensorama. In addition to providing a stereoscope with motion pictures in 3D and color, all quite revolutionary, the device had fans for simulating wind, odor transmitters for the smell of the environment, stereo sound, and even a moving chair!

The Sensorama, or «Experience Theatre». Illustration from Morton Heilig’s 1962 US patent.

Pygmalion’s Spectacles

The idea of the Sensorama, or VR in general, can, like many other innovative, future-defining ideas, be found in science fiction literature. Before its conception, in the 1930s, the science fiction writer Stanley G. Weinbaum introduced the idea of «Pygmalion’s Spectacles». By wearing these, the user could experience a fictional, or virtual, world with holographs, smell, taste and touch, and make the virtual come alive. Pygmalion, after whom «Pygmalion’s Spectacles» were named, was a Greek sculptor who fell in love with his sculpture, and so begged Venus that it would come alive. The myth sheds an interesting light on VR as an ultimate dream of humanity: to create realities for ourselves to inhabit, or to create images in the format of reality.

Pygmalion, after whom «Pygmalion’s Spectacles» were named, was a Greek sculptor who fell in love with his sculpture. He begged Venus that it would come alive. Painting by Jean-Baptiste Regnault.

Information Technology

To take a leap towards another paradigm shift in VR tech, we must enter the land of 1s and 0s. The stereoscope slowly moved from drawn images to photographs, and further to moving images with the Sensorama. None of these, however, supported spherical environments that could be perceived in all their 360˚. To achieve this, certain sensors, and computation based on their input, have been necessary. The most important and interesting of these sensors has been the gyroscope.

The Foucault Gyroscope, created by physicist Jean Bernard Léon Foucault.

The gyroscope was given its name by physicist Jean Bernard Léon Foucault in 1852, who used the device as a means to prove the rotation of the Earth. The gyroscope is a device consisting of a spinning top mounted in a pair of gimbals. Its origin cannot be traced to a single invention or inventor, as spinning tops originated in many ancient civilizations — however, unlike the «complete» gyroscope, these were not necessarily used as instruments. Although Foucault’s gyro was not the first to be used as a measuring instrument, its affordances exemplify well the usefulness of gyroscopes in VR HMDs: the important feature it affords is the measurement of rotation, the key to which lies in the spinning top’s freedom to rotate.

Gyroscopes are fun artifacts to play with, as they seem to defy gravity. While spinning, they can remain stable in most positions. Because the spinning gyro maintains its orientation, the rotation of the platform it is mounted in can be measured relative to it, and as such we can also measure the rotation of an HMD. It should be noted, however, that the gyroscopes of today are no longer pretty mechanical objects of brass. Although they no longer satisfy our aesthetic appetite, modern gyroscopes measure only millimetres in height, width and length, which makes it possible to place them inside smartphones and HMDs.
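As a purely illustrative sketch of how this rotation measurement becomes head-tracking, the following integrates gyroscope readings (angular velocity) into an orientation angle for a single axis. Real HMD tracking works on all three axes, usually as a quaternion, and fuses in accelerometer and magnetometer data to correct the drift that pure integration accumulates; the sample rate and values below are assumptions:

```python
# A minimal sketch of 3DOF-style rotational tracking: a gyroscope reports
# angular velocity, and integrating it over time yields orientation.
# Single-axis (yaw) only, for illustration.

def integrate_yaw(samples, dt):
    """samples: angular velocities in degrees/second; dt: seconds between samples."""
    yaw = 0.0
    for omega in samples:
        yaw += omega * dt   # numerical integration of angular velocity
        yaw %= 360.0        # keep the angle within [0, 360)
    return yaw


if __name__ == "__main__":
    # Head turning right at 90 degrees/second for one second, sampled at 100 Hz.
    gyro_samples = [90.0] * 100
    print(integrate_yaw(gyro_samples, dt=0.01))  # ~90.0 degrees
```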

The Sword of Damocles

The Sword of Damocles, an old Greek cultural symbol of Mortality — ever close to those in power. We see the sword hanging from a single horsehair over the head of Damocles.

Fifty years ago, in 1968, Ivan Sutherland and his student Bob Sproull created the first computer-driven stereoscopic (3D) Head-Mounted Graphical Display with 360˚ head-tracking. The HMD was not exactly lightweight, and was named after the «Sword of Damocles» because of the heavy stand hovering over its user’s head. As can be seen in the illustration below, the head-tracking was mechanical, and did not in fact use a gyroscope. Later, however, gyroscope-based tracking proved the more fruitful approach, avoiding a massive apparatus rotating over the user’s head.

The field of view and graphical fidelity of the Sword of Damocles were obviously quite low, yet it is the first widely known HMD, and has since its dawn inspired decades of VR research.

The first Virtual Reality Head-Mounted Display, named after the Sword of Damocles, because of its great weight hanging over the user’s head.

Towards the modern HMD

Since the invention by Sutherland and Sproull, the creation and use of HMDs were seen more and more within research. As computational power became faster and cheaper, HMDs decreased in size and increased in field of view, graphical fidelity and refresh rates. Yet, even in the 1990s, the technology was still expensive and poor in terms of graphical realism. It often caused cybersickness due to low refresh rates and high motion-to-photon latency. For this reason, as with any really powerful computer of that time, VR was reserved for research universities that could invest in the technology, or businesses with the resources to experiment with it. There were some attempts at commercializing VR for gaming, such as the Sega VR for the SEGA Genesis and the Nintendo Virtual Boy — however, the Sega VR never made it past the prototype stage, and the Virtual Boy was discontinued shortly after release. To this day, neither of these companies has experimented much further with the technology, although Nintendo in 2010 released the Nintendo 3DS, which utilizes a stereoscopic display that does not require any glasses.

 

Image of a 3D model of the HTC Vive Pro.

Conclusion

Since the Sword of Damocles, VR technology has undergone small incremental changes leading to where we are today, mainly as a result of general computer and graphics research, and the natural progression of Moore’s Law; today our processors are smaller and more powerful, and our screens of higher resolution.

In addition to this, however, there are certain very recent technologies that have impacted VR as we know it today. In Matrise’s glossary, we briefly present and define some of these technologies; among those that can be read about are Foveated Rendering and Low Pixel Persistence Modes.


Did we miss anything? Any thoughts are welcome in the comments section.


Virtual Reality and Exposure Therapy

The most essential feature of VR is its ability to simulate what is not real. This is its core concept, and what causes its radical exclusivity and novelty. The benefit of ‘avoiding reality’ in this sense is most often that virtuality is more cost-effective than reality. For instance, corporations worldwide train their employees in VR as it saves money to avoid renting a physical location and hiring physical trainers. ‘Money’ in this case is of course just a measure of efficiency: it takes fewer resources to achieve certain objectives virtually than physically. Cost is not the only benefit, however; the virtual may also be safer. We see this especially within surgery, where a failed operation on a virtual patient is much preferred to one on a real patient.

“Scream”, by the expressionist painter Edvard Munch. Want to experience Munch in VR?  Read our entry on Art in Virtual Reality.

Exposure Therapy
Another scenario where virtuality may be preferred is the psychological treatment of anxiety disorders. Anxiety is a terrible disorder in the way it eats away at the lives of sufferers, and it is hard to treat with non-addictive pharmaceutical medicine. Psychological treatment, however, is in general very successful for certain anxiety disorders. Agoraphobia, arachnophobia, glossophobia, etc., can be treated by what we call “exposure therapy”.

During exposure therapy, the patient usually meets with a psychologist and is asked to express their fears about the situation of exposure. Here they describe what they think will happen, and how they think they will react. Their fears are pinpointed, and their catastrophic thinking is outlined. In these cases, it is not uncommon that patients believe they will literally stop breathing, or die: the operating narrative is something they buy heavily into, and the key of exposure therapy is to challenge their acceptance of this narrative. To a certain extent, this is a central problem of anxiety disorders: patients very seldom challenge these fears, for obvious reasons, and so their map of how the world works is never challenged and updated by reality. Exposure therapy systematizes exactly this.

When the patient has been exposed to their fear scenario, the psychologist confronts the patient with the initial fears that were written down prior to the exposure. The patient is then encouraged to reflect on the gap between their fears and what actually happened, something we refer to as inhibitory learning. This kind of treatment falls under what is known as Cognitive Behavioral Therapy (CBT): actively challenging the patient’s mental model of the world through reflection on facts.

Long story short — exposure therapy works. The largest problem with exposure therapy is, as usual, the cost. Having highly educated psychologists dedicated to the task is expensive enough in itself, but arranging the exposure to a fear scenario is often an even greater challenge, practically and economically. It is not really convenient to summon spiders into the psychologist’s office, for instance. Arranging complicated fear scenarios and executing them is inconvenient and costly, which is a hindrance to an otherwise effective treatment.

Virtual Reality Exposure Therapy can be used to treat, for instance, arachnophobia (fear of spiders). If you are not afraid of spiders, you might need to increase their size. I can recommend trying Farpoint for PSVR, which features giant space spiders similar to those in this illustration by Alphonse de Neuville.

Virtual Reality Exposure Therapy
This is where the concept of VR enters our story, as we start talking of Virtual Reality Exposure Therapy (VRET). By using virtual environments instead of actual physical locations, effective exposure therapy can be offered to more people at lower cost. At the University of Bergen, through the research project INTROMAT, we develop and do research on VR exposure therapy for adolescents with fear of public speaking. The INTROMAT project aims to introduce personalized treatment of mental health problems using adaptive technology.

The question that often arises when we discuss the concept of VRET is whether we can fear what we know is not real. Although we know what it is like to be nervous before talks, it is perhaps hard to imagine being afraid of speaking in front of virtual subjects in which ‘nobody’s home’. On this point, however, the research is very clear. As Lindner et al. (2017) write, “decades of research and more than 20 randomized controlled trials show that [VRET] is effective in reducing fear and anxiety”. The reason why VRET is interesting now is thus not necessarily that VR is finally good enough to deliver realistic virtual scenarios; VRET has been shown to be effective with VR technologies far inferior to the setups commercially available today. The reason for its relevance as a research subject now is that the technologies are finally cheap enough to be used successfully in large-scale treatment. It is therefore time to revisit the previous research, and look at how this treatment can be improved further.

Participatory Design of VR Scenarios for Exposure Therapy
We are presenting our paper on VR exposure therapy at the ACM CHI Conference on Human Factors in Computing Systems — the premier international conference of Human-Computer Interaction — in May 2019 (Flobak et al.). The paper is available at ResearchGate, and has, in my unbiased opinion, two very interesting contributions: 1) it documents prototyping of VR with 360° cameras, which enables less technically skilled persons to produce VR (for exposure therapy, and for other areas), and 2) by enabling a larger group to produce VR, it simultaneously enables adolescents to communicate their social scenarios through the medium of VR. As we have previously discussed in The Capture of Reality, it is easier to avoid the uncanny valley effect while maintaining high standards of realism when using image capture equipment than when creating environments with 3D animation. Thus, this approach to creating VR exposure therapy has the benefit of being extremely cost-effective, whilst still being developed from good sources for the social scenario it should depict.

You can read the paper at ResearchGate. Below is the conclusion of the paper:

Conclusion
“This paper has presented a participatory approach to prototyping virtual reality scenarios for exposure therapy for fear of public speaking. In the study, we demonstrate how adolescents can be involved in the design of VR scenarios enabled by 360° videos. We also show how the participants draw on their lived experiences when creating the scenarios. The paper also illustrates how 360° video is a viable tool for making the design of immersive experiences more accessible, as the method involves far less technically demanding skills than you need for constructing CGI-based environments, and is less time-consuming. Further, the expert evaluation highlighted the authenticity and realism of the scenarios, and the scenarios were seen as having a potential for use in a therapeutic context. In addition, we have discussed how this approach can be used to make tailored VR experiences for exposure therapy.”


The Mind as Medium

N.B.: This post is the third and final post in a series that comments upon a metaphysical stance towards VR technology. The entries are based upon an essay written for a doctoral course on the philosophy and ethics of the social sciences. The two previous posts that precede this one are linked here: 1) “On Mediums of Abstraction and Transparency“, and 2) “Heidegger’s Virtual Reality“.


3 / 3

Now that we have arrived at the final post of the series, it is time to revisit our initial problem of Virtual Reality and Authenticity. Initially, we introduced our problem as the abstracting tendencies of Information Technology, and the unique position of VR technology in this regard, as it abstracts while displaying a high degree of transparency and coherency with the real world – while simultaneously hiding the real world as much as possible. We wanted to ask whether this was indeed a real problem, and whether the technology could in effect distance us from the reality of the world, and thus distance us from truth, or from an authentic living.

An old illustration of a Stereoscope, the illusive technology responsible for 3D effects in our modern VR Head-Mounted Displays. The first stereoscope was created in 1838 by Sir Charles Wheatstone.

Discussion

After reviewing Heidegger’s essay “The Question Concerning Technology”, we noticed several questions that we could use in our existential approach to technology critique. Following Heidegger, we could say that we can have a free relationship towards VR when we know what it is. We can, for instance, ask whether we can see any “Enframing” tendencies in VR technologies.

In Being and Time, Heidegger’s concern for authenticity is a concern for individuality: a concern for Dasein’s possible impossibility of leading its own life: the death of authenticity. In the they-self, no individual thoroughly relates to its Being, and so the truths and the “goings-on” that are defined “culturally” in the they-self are in a sense accepted blindly and left unexplored: they are abstractions, as they are not defined relative to each Dasein; they are not experiential, not resolutely made up. Effectively, we are not in control when we have given ourselves up and let the they-self decide the possibilities we can project upon.

“The Fairies Flew Away”, by Charles Henry Bennett.

Similarly, in Heidegger’s “The Question Concerning Technology”, we introduced the term Enframing. Enframing is dangerous because we create things that enframe us: we instantiate our enframing orientation in technology. If we do not relate to technology as exactly that which it is in its essence, Enframing, we do not relate to it as it is, and so it may hinder us from perceiving the world as it is. Whether by relating to the agenda set by the they-self, or to the framework set by ourselves indirectly through our enframing technology, we do not relate to the world and our active projection upon it: we do not resolutely enact our nature of actualizing the possibilities that can lead us to our authentic self.

So how is this relevant for the technology of VR? Should we take it to mean that we are not in control over our possibilities if they are presented to us through VR rather than in real life? Can VR, in this respect, be seen as an instantiation of the they-self, as it similarly provides abstractions, terms and conventions? If we remember to follow Heidegger deeper than the surface image of the problem, we may see that it is not VR that we should be afraid of. VR, like other modern technology, carries the mark of its author, and similarly we can see the creation of VR as the ultimate dream of Man. We spoke earlier of the character of challenging-forth as inspired by the view of modern physics as an exact science: we wanted to view the world as chopped up into parts and materials that we could understand, enframe, and use as means to ends. If the dream accompanying this Newtonian metaphysic was ever lost with the rise of quantum physics, VR can certainly become the free space where man’s illusive control over the world can be rekindled: finally we have a world, not of atoms, but of bits, that we can know totally through actual access to its source code.

We can have a free relationship towards VR when we understand what the essence of VR is. To the question of what the essence of VR is, we will not answer “enframing”, but rather abstraction, and more specifically, abstraction in the mode of transparency. The tendency toward abstraction was, as with Heidegger’s technology, perhaps not inherently something technological, but something human: only technology made it obvious and explicit enough for us to see clearly. VR may therefore be an interesting way for us to look at the real problem figuratively; the technology stands in between (as a medium) you and your senses, in the same way that our mental terms and classifications obscure the otherwise non-reducible reality.

To create VR is a human activity, and in VR we find much of ourselves: similarly to technology in general, we see that VR is a means to an end, and that it is an instantiation of this purpose. Sometimes this purpose is “re-presenting” an abstraction of reality, and so it is not the genuine, authentic reality itself that we see. We can therefore say that it is a human activity to create representations as means to ends, and further, that it is human to abstract, to deal in images, to connote terms and concepts, similarly to “putting things into boxes” in the enframing attitude of mind. With VR, these boxes are presented to us as reality, or at least in the format of reality, and the result is a realm of abstraction, a realm of representation, that blocks naturally occurring presentation. Similarly to how Heidegger’s technology illustrated our enframing tendencies, VR may show us our desire to create our own bubbles of reality to inhabit, our own terming and associations and concepts of the self. In this respect, the essence of VR may not even be anything new. In this sense, we have created “mental virtual realities” for a long time, and the technological, the “material”, expression of this does not provide anything new.

Conclusion

If we can understand the essence of VR as abstraction in the mode of transparency, and, similarly to Heidegger’s technological essence, believe that this essence is inherited from our own tendencies of mind, we will view it as such that it is human to create transparent abstractions. Through our terms, conventions, and definitions, we abstract, and through relating to these abstractions, we perceive them as real. Our thoughts and defined concepts, and the conventions we adhere to, work as the transparent user interfaces which we use to navigate the world. Our initial fear was that IT, exemplified in its extreme case of VR, would act as a wall between us and the world, inhibiting a true, authentic relationship towards it. This is, however, not fundamentally something that we find in a particular technology – instead, we find that when we look at this technology, we are looking at ourselves. The mediums and interfaces that classify and simplify are inspired by our minds, which classify and simplify. Technology is not the separation between us and the world, at least not any less than the enframing and abstracting orientation of our minds towards the world is: the one mirrors the other. We have thus reached a new question to replace the first, one step further along the hermeneutical spiral: whether our own abstracting tendencies of mind keep us from authenticity.
