Philosophy & Science

On our ‘About’ page, we describe Matrise as a blog that focuses on VR and Consciousness, and the related Science & Philosophy. In this entry, we will discuss why we find both Science & Philosophy essential and how the disciplines complement each other. The entry is an apologia for Philosophy as a discourse, and in particular critiques Scientism and other extremes of logical positivism, the former of which can be understood as an ignorance of the role of philosophy in general. The discussion is highly relevant, as this is a worldview increasingly adopted by young people and promoted by popular science figures. The danger is a spread of anti-intellectualism which, ironically, marches under the flag of science and even tries to adopt its authority.

It should be noted that although the entry critiques some ignorant science communicators, its aim is not to argue that philosophy in any way triumphs over science. The aim is rather to replace “versus” with “and”, and to illustrate their mutual dependency. As my philosopher friend and colleague Deborah G. Johnson once said of her technology criticism: “I prefer to think of my role as similar to that of an art critic” — an art critic loves art, and similarly, my critique in this entry comes out of love for the scientific discourse. It is not directed at the conceptual core of science, but rather at the careless attitudes that may be held towards it.

In my occupation as a Ph.D. fellow, I ought to, and do, value both science & philosophy, and the main argument of this entry will be the inseparability of the two disciplines and the beauty of their interplay.

Philosophy is Dead, Long Live Philosophy

Illustrative excerpt from “The Philosophy Force Five vs the Scientismists” by Existential Comics. The figure on the left appears to be Richard Dawkins. The original can be found here.

It may benefit us to personify the attitude towards science and philosophy that we are criticizing with an example. In 2010, the late, brilliant Professor Stephen Hawking declared that ‘Philosophy is Dead’, and that the torch guiding the quest for knowledge had been passed from the philosophers to the scientists. The strong irony is, of course, that Hawking argues for the death of philosophy in terms of philosophical arguments, thereby illustrating the futility of his statement in the very act of making it. In his book “The Grand Design”, Hawking writes that metaphysical questions such as the meaning of life and the nature of reality were traditionally questions for philosophers — but that now, alas, philosophy is dead. Luckily, however, Hawking himself comes to the rescue: later in the book he writes that the “purpose of this book is to give the answers that are suggested by recent discoveries and theoretical advances” — thus pronouncing ever more clearly that his book is one of philosophy rather than science, albeit one that reasons on asserted scientific facts. Revolutionary, indeed. Why did Philosophy not think of just looking at the answers suggested by the facts? The whole field of Philosophy is dead, but Hawking has taken it up as a side project and written a philosophical book, so we need not worry.

A critical point that illustrates Hawking’s misunderstanding is how he announces that his book discusses “the answers that are suggested by science”. His attitude is that physics always implies its own metaphysics; that descriptive facts about the world also tell of their own meaning, that the solution is always innate in the problem. The situation is that Hawking, of course, has a certain philosophy — he is simply not critical towards his acceptance of it. His assertion is easy to critique, however, by pointing out that science does not suggest anything — Hawking, on the other hand, suggests several things, and these may be great things and great ideas! It is easy to agree with his philosophy, broad as it is: through observation we agree on facts, and with logic we reason on the meaning of these facts. Hawking is philosophizing on scientific discoveries and theories, and that would be a great thing had he only been aware that this was his approach. There is not (necessarily) anything wrong with his philosophical stance, except precisely in how it claims not to be a philosophical stance. The saying “Philosophy is Dead” instantly revives Philosophy, as the utterance itself is a philosophical statement. The sentence achieves the same futility as a person grabbing a megaphone to announce that megaphones don’t really work.

Illustrative excerpt from “The Philosophy Force Five vs the Scientismists” by Existential Comics. The original can be found here.

The Science Gang

Unfortunately, Hawking is not alone among popular science communicators in his ignorance. Both Neil deGrasse Tyson and Bill Nye (the Science Guy) have also publicly voiced their concerns about Philosophy. Neil deGrasse Tyson has called Philosophy ‘useless‘, advising bright students to stay away from it, and stated that Philosophy is not “a productive contributor to our understanding of the natural world”. He has also said that he is concerned that “philosophers believe they are actually asking deep questions about nature”, and described the venture of philosophy as being “distracted by questions”, and therefore unable to contribute anything to our understanding.

Our best example of the problem at the core of these poor critiques, however, is given by Bill Nye in a YouTube video where a Philosophy major asks for his take on Philosophy. Nye addresses the topic by introducing several great philosophical problems, such as the nature of consciousness and reality, only to dismiss them with the assertion that he is quite sure that reality is real. Deep dive. The best illustrative quote is perhaps his rejection, and total misunderstanding, of Descartes’ famous argument ‘Cogito ergo sum — I think, therefore I am.’ Nye comments: “Well, what if you don’t think, do you not exist anymore? You probably still exist even if you don’t think about existence”. I hope I don’t have to explain why this is ridiculous.

At the end of the video, Nye goes further into depravity when he presents a thought experiment to validate his lack of philosophical skepticism: he simply asks the questioner to drop a hammer on his foot. This kind of argumentation is classic and has long been termed ‘argumentum ad lapidem‘: a logical fallacy that consists of dismissing a statement as absurd without giving proof of its absurdity. What is even more unfortunate is that this was a brilliant opportunity for Nye to illustrate how science actually benefits philosophy, as there is much evidence from scientific experiments showing that we may not always be able to trust our senses, or even our own reasoning (read, for instance, about the fascinating split-brain experiments, or here on Matrise about how we can fool our senses to achieve Virtual Embodiment, or our entry on The World of Illusions). For those who can emotionally afford a few cringy minutes, Bill Nye’s YouTube video can be watched here.

A Philosophical Argument

Illustration borrowed from Existential Comics’ “Philosophers And Physicists” available here.

The problem with a defiant attitude towards Philosophy is that it promotes an ignorance of thorough, critical thinking. Our sciences are infused with philosophy: we need philosophy to reason about our discoveries and their meaning, and further to discuss what we want to research. The best way to understand this is that our scientific methods are an accomplishment of philosophy, with epistemology, methodology, and rationality underlying them: the validity of the scientific discourse is argued for in philosophical terms. Science is not given, it is a construct: our scientific methods have been developed, and are still being discussed and developed to this day. Science and Philosophy do not compete with each other; philosophers compete with philosophers: the logical positivists compete with the idealists, and the rationalists compete with the empiricists. Science is conceptually within Philosophy, and one of its greatest accomplishments. They do not exist separately.

The Uninvestigated Myth

What Philosophy can teach us, and what perhaps especially the field of Existentialism stresses, is to critically reinvestigate our beliefs. It stresses the possibility of each individual to realize their own potential, and also, implicitly, the possibility of failing to do so. Heidegger, for instance, a philosopher whom we have discussed in several earlier entries, is concerned with authentic existence. Allowing an uninvestigated myth such as Scientism to guide one’s life could be a good illustration of Heidegger’s “Das Man”, which could be described as his personified term for the norms and culture we blindly accept. “Das Man” is hard to translate, but Heidegger defines it as a possible state of Dasein’s Being. A common way to explain or translate “Das Man” is as the “They” or the “One”, as in “One should always get up early”. Heidegger explains that “‘Das Man’ prescribes one’s state-of-mind, and determines what and how one ‘sees’”. This can be read as saying that culture, or the uninvestigated norms that provide a narrative to our existence, determines how we fundamentally interpret reality. In the words of crazy Terence McKenna: «Culture is not your friend!» Rather than accept the cultural and societal narratives, we should critically investigate them, and not let anyone dictate a narrative that we merely subscribe to. What narratives in our society do we subscribe to, in relation to who we are and what we want? What did we adopt from our parents and our society in relation to how we understand the basic principles of the world?

Illustration borrowed from Existential Comics’ “Philosophy News Network: Philosophy Solved” available here.

Conclusion

It is a very naïve view, indeed, to believe that the problems of mankind may be solved if all the young, bright minds go into the natural sciences. We need not only to find out how nature works, but also how we ought to work with it, and further with each other. Science alone does not tell us what we should do with our lives. No scientific fact can by itself provide us meaning, or say what we should value and pursue. Dismissing philosophy is a dismissal of thorough criticism; it is being attached to, and dependent on, one’s settled, foundational narrative of how the world works.

We conclude this entry with the text accompanying one of the comics on Scientism featured above, by Corey Mohler. As this entry has not been very explicit about what Scientism is, his very clear presentation may be informative.


“‘Scientism’ is the position that Science can solve all problems, or that all problems are empirical. Philosophically, it is mostly associated with the strongest statements made by the logical positivism movement, which mostly died out in the mid 20th century. Culturally, however, it is stronger than ever, and is closely tied to movements like the so-called “New Atheists”. These newer, more naive forms of Scientism, also have a strong tendency to call philosophy “a big waste of time”, “pointless arguing”, “nothing but semantics”, etc. Rhetorically, they tend to say that non-empirical ideas have no way to guarantee they are true, so are pointless to talk about. This is a rather ridiculous point to make, since their entire movement is based around spreading a certain set of non-empirical, philosophical norms, which they apparently don’t feel it necessary to open up to criticism. What they mostly seem to mean is, assuming everyone agrees with us on the important philosophic questions, such as atheism, utilitarianism, capitalism, eliminative materialism, etc., then we don’t need anything but science. Well, this is maybe true in a strange way, insofar that if everyone agreed on every philosophical position, i.e. if philosophy was solved, then we probably wouldn’t need philosophy. Philosophy, however, has not been solved. Furthermore, if it is going to be solved, it certainly won’t be solved by a bunch of people who don’t even read or engage in philosophy. The real goal is often just to draw a border around what we should or shouldn’t question, because they don’t want any of the fundamental aspects of society to change. And, well, people who don’t want society to change often also find themselves not wanting people to even think about changing society.” — Corey Mohler behind ExistentialComics.com

The Capture of Reality

When creating experiences for Immersive Virtual Reality, there are essentially two approaches. The first is manual construction through Computer-Generated Imagery (CGI), which is how most games and VR experiences are made. The second approach is far more automatic and attempts to ‘capture reality’ instead of actively generating it. It is this approach we will discuss in this entry. In addition to presenting the technicalities of the capture methods, we will also discuss their limitations, and provide an innovative example, drawn from a student project at the University of Bergen, of how these can be addressed in the future.

An early 360° camera — horizontally at least — probably the first with a synchronised shutter.

360° Video

In a previous article on Virtual Reality Journalism, we discussed how 360° 3D cameras can be used to present a user with an immersive experience. This approach has several unique benefits. First of all, it is far less time-consuming to capture and re-use existing physical environments than to spend time creating them through 3D modelling. This is perhaps especially the case when the environment involves human actors, as it is easier to avoid the uncanny valley effect while maintaining high standards of realism with image-capture equipment than with 3D animation.

How does it work?

360° cameras usually comprise two or more (ultra-)wide-angle lenses. In the case of cameras with just two lenses, such as the Gear 360 or Ricoh Theta V, each lens has to capture 180° both horizontally and vertically. The recordings from these lenses come out of the camera as separate images, and need to be stitched together in software into, for instance, an equirectangular view composing the full 360° sphere (see Illustration 2). Illustration 1 shows how the equirectangular format works, using a world map — perhaps our most relatable example of a spherical or global shape presented as a rectangle.

Illustration 1: A relatable example of the equirectangular format. The furthest point west is close to the furthest point east, and so we are dealing with a ‘sphere’, or more rightly a globe, that is stretched out into a rectangle. The closer we get to the poles, such as Antarctica, the more the image is stretched, as the circumference of the earth is smaller there.
Illustration 2: In this equirectangular photo, captured with a Ricoh Theta V, we see the same effect as in Illustration 1. My hands, which enclose the bottom of the camera, are given the same treatment as Antarctica in the map. The stairs, which appear to be curved, are in fact straight; the bending introduced by the lenses is especially clear when the image is viewed ‘equirectangularly’.

When an equirectangular image is viewed through an HMD or a smartphone, the software displays only about 110° of the 360° image at a time, relying on the orientation sensors in the HMD or phone to decide which part of the image to present.
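
To make the mapping concrete, here is a minimal TypeScript sketch, not taken from any particular viewer, of how a viewing direction (yaw and pitch) corresponds to a pixel in an equirectangular frame; a 360° player effectively performs this lookup for every pixel of the roughly 110° window it renders:

```typescript
// Map a viewing direction to a pixel in an equirectangular image.
// yaw: rotation around the vertical axis in radians (-PI..PI, 0 = image centre)
// pitch: elevation in radians (-PI/2 = straight down, +PI/2 = straight up)
function directionToEquirectPixel(
  yaw: number,
  pitch: number,
  width: number,   // e.g. 3840 for a "4K" equirectangular frame
  height: number   // typically width / 2
): { x: number; y: number } {
  // Longitude maps linearly to the horizontal axis, latitude to the vertical axis.
  const u = (yaw + Math.PI) / (2 * Math.PI);   // 0..1 across the full 360°
  const v = (Math.PI / 2 - pitch) / Math.PI;   // 0..1 from top (up) to bottom (down)
  return {
    x: Math.min(width - 1, Math.floor(u * width)),
    y: Math.min(height - 1, Math.floor(v * height)),
  };
}

// Example: the centre of the image when looking straight ahead
console.log(directionToEquirectPixel(0, 0, 3840, 1920)); // ≈ { x: 1920, y: 960 }
```

Note that every row of the image spans the full 360° of longitude regardless of latitude, which is exactly why regions near the poles appear stretched, as with Antarctica in Illustration 1.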

3D Images

Although regular 360˚ cameras (Gear 360, Ricoh Theta V) to a large extent cover the world as we see it in all its 360°, their images are still monoscopic. Essentially, this means that the same image is presented to each eye when viewed in an HMD, and this is not the way we ordinarily see reality. As our eyes are separated by roughly six centimetres, each eye receives a slightly different capture of reality. It is this difference which enables us to perceive the depth of the world — that is, when our eyes are not fooled by illusions exploiting this effect, such as VR itself. We discuss this in more detail in our entry on the History of VR, in which we cover the invention of the Stereoscope, but a small introduction will also be given here. Essentially, 3D 360˚ cameras capture depth the same way human beings do, by separating the lenses similarly to the human eyes. Such cameras are, however, more cumbersome and costly to produce: to capture stereoscopic images one needs to double the minimum number of lenses, leading to at least four lenses — two per eye, one for each 180˚ of capture. Unlike 4K 360˚ monoscopic cameras, which are available rather cheaply on the consumer market (from $200 and up), stereoscopic cameras have not yet reached very reasonable prices. There is hope, however, and I can personally recommend the Vuze+, a 360˚ 3D camera that delivers 4K resolution per eye and comes with well-designed accompanying stitching and editing software. At $1200 the price is still a bit stiff for most non-professional use, but it bodes well for more affordable hardware in the future. We have used the Vuze+ camera in a research project at the University of Bergen with good results. The quality is comparable to that of a Ricoh Theta V — except that it delivers stereoscopic images rather than monoscopic ones.
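
The depth cue itself can be described with simple geometry. The sketch below is the generic, idealised stereo-vision relation, not a description of any of the cameras mentioned: for two parallel lenses separated by a baseline roughly equal to the interpupillary distance, the distance to a point is inversely proportional to the disparity between its positions in the left and right images.

```typescript
// Idealised pinhole stereo: depth from horizontal disparity.
// baselineMeters: distance between the two lenses (human IPD is roughly 0.063 m)
// focalLengthPixels: lens focal length expressed in pixels
// disparityPixels: horizontal shift of the same point between the left and right image
function depthFromDisparity(
  baselineMeters: number,
  focalLengthPixels: number,
  disparityPixels: number
): number {
  if (disparityPixels <= 0) return Infinity; // zero disparity = point "at infinity"
  return (baselineMeters * focalLengthPixels) / disparityPixels;
}

// Example: a 10-pixel disparity with a 0.063 m baseline and a 1000 px focal length
console.log(depthFromDisparity(0.063, 1000, 10)); // ≈ 6.3 metres away
```

This is also why stereoscopy mostly helps at close and medium range: far-away points produce disparities too small to resolve, and the two views become effectively identical.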

Regarding Resolution

A resolution of 4K per eye sounds great — and yet many are disappointed when they view the recordings from a camera such as the Gear 360, Ricoh Theta, or Vuze+. They may recall the images on their 4K TV as incredibly sharp, and yet their recorded videos appear somewhat blurry and pixelated. The answer to why this is the case is quite simple. The 360˚ images do indeed have a 4K resolution; however, we are unable to view all the pixels at once, as they are stretched out over a sphere. To keep matters simple, let’s say that your Head-Mounted Display has a Field of View of 90˚ (although most have 110˚). In this case, just 1/4 of the 4K image is seen at any given time, so we have to divide the pixel count by four. This is somewhat simplified because of stretching, but it should be enough to get the point across. To get an effective resolution of 4K, or something akin to the 3K that the HTC Vive Pro and Samsung Odyssey(+) can afford, one would need far higher camera resolution.
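
As a back-of-the-envelope check, here is a small sketch, my own simplification which ignores lens distortion and the pole stretching discussed above, estimating how many of an equirectangular video’s pixels actually land inside the headset’s field of view at any one moment:

```typescript
// Rough estimate of the pixels visible through an HMD from an equirectangular video.
// Ignores pole stretching and lens distortion; meant only to show the order of magnitude.
function visiblePixels(
  videoWidth: number,   // e.g. 3840 ("4K") across the full 360° horizontally
  videoHeight: number,  // e.g. 1920 across the full 180° vertically
  hFovDegrees: number,  // horizontal field of view of the HMD
  vFovDegrees: number   // vertical field of view of the HMD
): { width: number; height: number; total: number } {
  const width = Math.round(videoWidth * (hFovDegrees / 360));
  const height = Math.round(videoHeight * (vFovDegrees / 180));
  return { width, height, total: width * height };
}

// A 4K 360° video seen through a 90° x 90° window:
console.log(visiblePixels(3840, 1920, 90, 90));
// => { width: 960, height: 960, total: 921600 } — about the pixel count of a 720p frame, not 4K
```

Roughly 920,000 visible pixels is closer to 720p than to 4K, which matches the blurriness people report.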

Another Step in Fidelity: Volumetric Video

At first thought, it may be hard to imagine how we can achieve more detail in immersive 360˚ 3D recordings other than by increasing the resolution. As we briefly noted, however, stereoscopy in 3D movies at the cinema, or in 360˚ 3D recordings, merely provides an illusion of depth — not actual depth. The same goes for our eyes: although they mostly perceive depth correctly, they are easily fooled. 360˚ 3D cameras are an example of this; they merely fool our eyes. Although it seems that there is depth, we cannot really move within the image, as there is no actual depth to it. Here, volumetric video acts differently, and affords positional interaction. Volumetric video places the recorded imagery in a 3D (x, y, z) space, in addition to delivering stereoscopy so that we can perceive it. Volumetric video is unfortunately very hard to create while retaining high quality, and plug-and-play solutions still seem far off. To get an idea of how volumetric video works, we recommend looking into the concept of photogrammetry — and perhaps even creating a 3D model yourself, using images captured with your smartphone. This YouTube tutorial shows you how to do this in Agisoft Photoscan Pro, which has a free trial available.

Limitations

Developed in an undergraduate course at the University of Bergen, the short 360° movie “Schizophrenia” experimented with interactive 360° video.

Despite these great innovations in the capture of reality, CGI has some benefits that neither 360˚ 3D nor volumetric video can really achieve. The most important of these is interactivity. As 360° videos are linear (that is, they have a predetermined beginning and end), the user cannot really affect what happens in the video — except by choosing which part of it to look at.

In our course in VR Journalism at the University of Bergen, where I taught students VR programming, 360° video and photogrammetry, we faced this exact limitation. A group working on conveying an experience of the reality-shattering disorder of schizophrenia wanted hallucinations to occur when the user looked at certain areas. The students solved this by placing transparent GIFs, edited from the real footage, over the video in A-Frame, and attaching gaze event listeners that trigger playback of the GIFs (a minimal sketch of the approach follows below). The results were extraordinary, and the technique could well offer a simple means of interaction on top of 360° videos. The experience, whose voices are in Norwegian, can be viewed here (a WebVR browser such as Chrome is necessary).
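
A rough sketch of how such gaze-triggered overlays can be wired up in A-Frame, assuming a scene with a fuse-based gaze cursor; the entity ids and the split into a separate trigger region are my own illustration, not the students’ actual code:

```typescript
// Hypothetical entity ids for illustration. #trigger-area is a fully transparent
// plane positioned over the part of the 360° video that should react to gaze;
// #hallucination is the overlay (an animated image or video layer), hidden at first.
const trigger = document.querySelector('#trigger-area')!;
const hallucination = document.querySelector('#hallucination')!;

// With a fuse-based <a-cursor> attached to the camera, A-Frame emits 'mouseenter'
// on an entity when the gaze ray starts intersecting it, and 'mouseleave' when it stops.
trigger.addEventListener('mouseenter', () => {
  hallucination.setAttribute('visible', 'true');   // reveal the hallucination overlay
});

trigger.addEventListener('mouseleave', () => {
  hallucination.setAttribute('visible', 'false');  // hide it again when the gaze moves away
});
```

Fuse-based cursors fire only after the gaze has rested on the target for a set time, which avoids accidental activations while requiring no controller, a good fit for 360° video viewed on simple headsets.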

Odyssey+: an alternative route towards hi-res VR

Samsung Odyssey+: going beyond the nasty grid of the SDE, and into something nicer — apparently.

N.B: This blog entry is in Matrise’s category “Lights”, which holds more technical, often smaller posts, that concern actual and recent events. These entries stand out from other entries at Matrise, which are often more conceptual, ideal and philosophical.  You can read about Matrise here.


This week, Samsung announced their new Windows Mixed Reality (WMR) headset, the Samsung Odyssey+. Priced very reasonably at $500, like its predecessor the Samsung Odyssey, the Head-Mounted Display (HMD) is a very attractive option for those who value high resolution in HMDs (and who doesn’t — greater fidelity of the virtual world is obviously desirable!). The market has also shown its hunger for high-resolution displays, as the Kickstarters for the Pimax 8K and 5K HMDs have shown. While on the topic of hi-res displays, resolution enthusiasts should also check out StarVR, a high-res 210-degree Field of View HMD with integrated eye tracking to provide foveated rendering, which can be especially fruitful with that intense FOV. Digression aside — the Odyssey+ is already for sale in the US, and in this entry we will discuss why it can be an alternative route to experiencing higher resolution.

New Features

The Odyssey+ features the same high resolution of 1440 × 1600 per eye as its predecessor. For reference, this is the same resolution as found in the HTC Vive Pro, which costs far more (priced from $1,098 to $1,399 with two controllers and base stations). Unlike the Vive Pro, however, the Odyssey+ features inside-out tracking, similar to what is used in other WMR HMDs and the upcoming Oculus Quest. None of this is news, however, as all of it could equally be said of the original Odyssey. The new feature, which makes this a particularly interesting HMD, is a technology they have called ‘Anti-SDE’ — that is, a technology that seeks to eliminate the ‘Screen Door Effect‘ experienced in most HMDs today.

An illustration by Samsung that attempts to illustrate the difference between the Odyssey and the Odyssey+.

Screen Door Effect

The screen door effect occurs when a user can perceive the physical space between the pixels themselves. This is of course not ideal for realism, as it becomes apparent that what you are viewing is a screen. The new Odyssey+ features a technology that diffuses the light from the pixels into the space between them, to eliminate the SDE. Their press release stated:

“Samsung Anti-SDE AMOLED Display solves SDE by applying a grid that diffuses light coming from each pixel and replicating the picture to areas around each pixel. This makes the spaces between pixels near impossible to see. In result, your eyes perceive the diffused light as part of the visual content, with a perceived PPI of 1,233PPI, double that of the already high 616PPI of the previous generation Samsung HMD Odyssey+ [sic].”

RoadToVR report that they suspect this is the technology that PlayStation VR has used in its own HMD. The PlayStation VR, with a resolution of only 1080p across both eyes combined, has surprisingly little SDE — which has made me prefer its display to the regular Vive or the Oculus Rift, although the tracking and computing power are vastly inferior. I’m therefore eager to see how this works on an HMD with a lot more pixels.

Conclusion

Technologies such as low-persistence display modes, asynchronous timewarp and foveated rendering are all ingenious techniques that enable a perceived fidelity higher than what our computers alone can deliver, some of which are indispensable especially for mobile VR. Anti-SDE technology may be yet another such technique, making it less necessary to have 16K displays or whatever for VR to be perceived as very close to natural human sight. That being said, although Samsung claims that their new HMD has a perceived PPI (pixels per inch) of 1,233, it will naturally not offer the same sharpness and clarity as an actual 1,233 PPI display would. The potential doubling of “perceived PPI” only replicates, or diffuses, the already-existing pixels. Still, the tech is very welcome, and HTC also has something to learn from the way Samsung prices their products. Customers may now find a whole lot more value for their money with Samsung, and this comes from someone who already owns a Vive Pro. For those considering the purchase, it should be noted that the tracking is not as good as in the HTC Vive (Pro), but depending on your needs it may be more than good enough.

Sensory Deprivation — Floating in Virtual Reality

If we look to our glossary, we see Presence within Virtual Reality (VR) defined as the degree to which the subject feels present in the virtual world. What is interesting to note is that this naturally has to be viewed relative to the degree to which the subject feels present in the physical world — as we usually receive information from both our physical and our virtual environments.

There are thus two separate approaches to designing for presence in virtual reality environments: one is to provide sensory stimuli from the virtual environment, and the other is to block sensory stimuli from the physical environment. Both approaches work towards the same goal of immersion — the encapsulation of the user in the virtual environment. Slater and Wilbur (1997) recognise this in their definition of Immersion, which is closely related to the notion of Presence. They define immersion in terms of four qualities the system can afford, the first of which is inclusiveness: the extent to which physical reality is shut out.

Obviously, the principles of adding and removing sensory experience go hand in hand; by donning a Head-Mounted Display you are blocking physical impressions and replacing them with virtual ones, all the while shielding against incoming light from the surroundings. Blocking light, however, is not the only way to deprive the senses of information from the physical environment. In this entry, we will discuss how we can maximize the inclusiveness of the immersion by achieving sensory deprivation in floatation tanks. Floating in Virtual Reality!

Floatation Chambers

Floatation chambers, or sensory deprivation tanks, are pools of water with copious amounts of Epsom salt (≈600 kg). The tanks are sealed against any incoming light and sound, and the air and water temperatures are equal to that of your body. When you lie down, you will feel how the salt makes you float even though the pool is very shallow. As you lie there, you notice how the ripples you created when lying down slowly subside as you sink into weightlessness. After a while, because the air and water temperatures are the same as your body’s, you can no longer pinpoint where the water ends and the air around you begins. In fact, it gets hard to distinguish anything from anything else, including your body from the air and water. There is really nothing that is easy to grasp as isolated, save perhaps your breath. And as the minutes go by, with total physical relaxation and hardly any sensory impressions at all, things start to change.

“Alone With Your Thoughts”, Illustration by Cole Ott

The most significant, explicit change one may notice in the tank is that after a while your bodily self-consciousness is not what it used to be. Your mental model of where your body is in relation to the world around you starts to become blurred. Normally reinforced by tactile stimuli from air and water (of varying temperatures), and by visual and auditory stimuli from the environment, your body model now lacks the information on which to build itself. Your sense of spatiality has also changed; that is, the feeling of your position as defined relative to, say, the walls, mountains and sky has disappeared. You now experience nothing around you, yet no edges to this lack of information either. You may get the feeling of floating in empty space — but where are you in all of this? What, in this stream of conscious experience, is matter and what is mind?

Inner vs Outer

In our entry — ‘Inner as Outer: Projecting Mental States as External Reality‘ — we discussed the potential of using VR for meditation purposes in experimental ways. In the introduction to that entry, we discussed our feeling of Self as a duality of Inner and Outer, of which our everyday experience usually consists. We discussed how technology may have the power to transform our consciousness away from this traditional subject-object hierarchy and into a non-dual one, where the Inner is seen as the Outer, and the Outer as the Inner. In this entry we are building further on these ideas. Similarly to visualising inner states in VR through biometrics, using VR in floatation tanks might provide illusory experiences where conscious experience is significantly altered.

One other entry relevant to our experiments with VR in floatation tanks should be mentioned before we go on: the entry on Virtual Embodiment. In it, we discuss the great potential of VR to hack our consciousness; why this is possible, and what it can be used for. That research is highly relevant for floatation in VR, as both floatation tanks and VR alter our self-model by altering the sensory impressions necessary to maintain it.

Research on Virtual Embodiment in Floatation Tanks

Matrise partnered up with Bergen Flyt, a local company offering floatation therapy in the heart of Bergen. We used a Samsung Gear VR with a Samsung S8 phone. We did not use an HTC Vive (Pro), as exposing its cable to water would be too risky. Also, no room tracking, or even much head rotation, was needed, and in terms of resolution the HMD has quite a high PPI. We chose to first try some abstract visualisations through the application “Fractal Lounge”, which shows varying psychedelic visuals while floating through space.

My Experience

“After I had showered, I put on the GEAR VR headset, started the application, and slowly entered the floatation pool. I held my hands towards the wall, as I did not see anything else than the visuals in the headset. When I was inside, I closed the glass door, and slowly lowered myself into the water — back first. It took a few seconds before I dared to lower my head all the way down, but very soon I was totally relaxed. As expected, the electronics in the display was kept well above water, due to the intense amount of salt in the water …”

The kind of visualisations provided by Fractal Lounge, the application that was tried in the floatation tank.

“The visualisation pulsated, floated, drifted along — and often totally changed in colours and shapes. It probably took about ten minutes before my feeling of body totally vanished, to the degree that there was a larger gap than usual between wanting to move the body and actually being able to move it. I felt as if I was perceiving a great drama and scene, and I got engaged in the forms and ways of the visualisations, sometimes quite invested in them, as they felt close and reality-defining to me. About twenty minutes in, I felt as if I was drifting through space at high speed, because of the steady movement of stars away from me. At the same time, there was no sound, which made the quick travel feel peaceful and smooth. As with normal floating, about every ten minutes there is a sort of reality-check moment where you remember you are in the tank and contemplate how weird it is. This also happened in VR, and was … equally weird.”

Reflections and Future Work

My first experiment with floatation in VR lasted about 45 minutes. Sometimes, unfortunately, the VR headset slid slightly off my face, and I had to reposition it with my wet, salty fingers. After this happened about three times, I had to leave the tank in order to save the equipment.

Thank God that we have floatation pools instead of this creepy stuff.

My first experience of floating in virtual reality was very promising. The largest surprise was the feeling of movement through space at high speed. The largest frustration was the lack of any sort of interaction with one’s surroundings, except the possibility of opening and closing one’s eyes. A great experiment would be to use eye-tracking technology as a way of navigating the vast, abstract psychedelic spaces. If one travelled towards where one looked, one could be interactive while lying still in the floatation tank. This could also have curious effects on which parts of us (perhaps the eyes) we identify with our selves. Perhaps the placement of our self could be altered by changing the agent of transportation.

Matrise will continue the cooperation with Bergen Flyt, and will both try and develop different applications. Our plan is to measure the feeling of presence, self-identification and consciousness while in the tank.

 

Literature list

Augmenting our Reality

Although Matrise usually covers the more encapsulating technologies on the Reality-Virtuality Continuum, we are also very interested in innovative uses of all Extended Reality (XR) technologies. In this entry we will illustrate the utility of Augmented Reality (AR) technologies with an exciting project we are presenting at this year’s IBC conference in Amsterdam.

Short History of AR

AR has seen hype similar to VR’s, with products such as the Microsoft HoloLens, Magic Leap and Google Glass. The technology has, similarly to VR, the power to change our orientation towards reality — however, AR lets you see your surroundings in addition to the augmented virtual phenomena. We previously discussed its conceptual origins in our entry on the Camera Lucida, but apart from this case, the technology is somewhat younger than VR technology.

The first Virtual Reality Head-Mounted Display, named ‘The Sword of Damocles’ because of its great weight hanging over the user’s head. The name refers to an old Greek cultural symbol of mortality — the anecdote is that the sword hung from a single horse hair over the head of Damocles after he rose to wealth.

In our History of VR we mentioned ‘The Sword of Damocles’ as the first thorough Head-Mounted Display. It should be noted, however, that this essentially also was an AR display, not just VR. The glasses, as can be seen in the illustration, were somewhat see-through, and could therefore be used to augment the physical surroundings of their wearer. Today, however, we are far more privileged, and can experience sophisticated AR that lets us view virtual phenomena anchored within the environment itself, with extremely precise room tracking and six degrees of freedom in the interaction. This enables far more sophisticated usage of the technology.

AR has yet to have a commercial launch similar to that of VR. Although mobile AR with smartphones has gained some popularity, we will have to wait a while longer for non-invasive, affordable Head-Mounted Displays to hit the market (the Magic Leap currently costs $2,295 and the Microsoft HoloLens is at about $3,000 for developers). That being said, it is fun to play and develop with these technologies and be part of creating new solutions with immersive media. Although it is not yet fit for mass adoption, the Microsoft HoloLens is a great product that illustrates the potential of these technologies.

 

The Microsoft HoloLens: an AR Head-Mounted Display released for developers. It is believed we will see the next iteration of the HoloLens in 2019.

HoloSuite: A Mobile AR Video Editing Suite

On Saturday and Sunday at IBC 2018, Joakim from Matrise is joining four master’s students in media and interaction design in presenting an AR application for the broadcasting industry. The application was developed as a student project for the Bergen-based company Vimond, and is presented at Media City Bergen’s stand at IBC’18 — representing the fruitful collaboration between the University of Bergen and the companies in the NCE Media cluster Media City Bergen.

The master’s students involved, Audun Klyve Gulbrandsen, Johanne Ågotnes and Fredrik Jenssen, also have lots of other interesting projects that can be read about at their website, UiB MixMaster.

The presentation takes place at 12:30 on Saturday and Sunday, in hall 8, at booth D10 (MCB Village).

Abstract of Presentation

«Professional video editing suites of today are resource-demanding. A video editor needs great machine power in addition to multiple screens to tackle the varying formats of today’s media landscape. Effectively, this results in reduced freedom of mobility for video editors; they are dependent on their stationary office space to work. In addition to reducing flexibility, this lack of freedom may slow down the turnaround process for news agencies.

In our presentation, we describe and demonstrate an Augmented Reality (AR) cloud-based video editing suite, where up to five virtual screens are presented to the user through a Microsoft Hololens Head-Mounted Display. By employing cloud computing, the prototype can access machine power remotely through the cloud, which has benefits in terms of mobility. Effectively, the AR application is a prototype of an office for video editors that can be carried in a backpack, and utilized wherever there is network connectivity»

 

Figure illustrating the timeline and the preview screens (resolution is higher, and FOV lower in the HoloLens display itself).
Same content from a different angle, illustrating the fixed environmental position (resolution is higher, and FOV lower in the HoloLens display itself).

Do you have any great AR ideas? Matrise will soon publish a new entry on an AR product development process we are involved in — so stay tuned.

 

Virtual Embodiment

The most praised ability of Virtual Reality is its capability to immerse the user in a Virtual Environment — to the degree that the subject feels present in it. The magic is to be fooled by the system so that one feels present where one does not physically reside. This effect can, however, turn even more magical. A deeper step into the effects of technological immersion is found in the concept of Virtual Embodiment. If a subject is embodied virtually, not only is the virtual environment accepted as such; the subject also identifies with a virtual body or avatar inside the virtual environment. This differs from realizing which character you control in a game — within Virtual Embodiment, the same processes that make you identify with your real body make you identify with a virtual one. This is a key point, as it is why research into virtual embodiment is important.

Peeling layers of the onion: VR can be a tool to discover who we are, through investigation of what and how we identify with our bodies. Illustration: “Mask of Day by Day” by Paulo Zerbato.

Hacking and Experimenting with Consciousness

What is fascinating about both of these possibilities of illusion is how, and that, they are possible at all. Knowledge of how to achieve such immersion is obviously relevant for all VR developers, but the knowledge that can be obtained by researching these phenomena goes far beyond knowing how to apply it in VR technology. By creating experiments in VR, we can generate, and investigate, phenomena of the mind under various experimental conditions. Exploring Virtual Embodiment, for instance, can provide us with a better understanding of our self-consciousness and the relationship between body and mind. Because of this wider span, research on Virtual Embodiment attracts neuroscience researchers, psychologists, information scientists and philosophers alike.

The Rubber Hand Illusion

The Rubber Hand Illusion (RHI) is an excellent example of the kind of ‘brain hacks’ that can be achieved by sensory manipulation. The illusion, as illustrated below, is a perfectly simple experiment that does not even require the use of VR technology to perform. The RHI was famously studied by Ehrsson, Spence & Passingham (2004) and has been an ingenious way to illustrate how we identify with our bodies. More importantly for this entry, the results of the experiment have inspired further research on Virtual Embodiment.

Illustration from Thomas Metzinger’s book “The Ego Tunnel: The Science of The Mind and The Myth of the Self”

In the RHI, the hand of the subject is replaced by a rubber hand, while the normal hand is blocked from sight by a separating wall. When the subject is sitting as such, a researcher will stroke each hand, both the rubber and the physical hand, simultaneously. Now, the question is what happens when experiencing the sensory impression of stroking, all the while seeing a corresponding stroke on the rubber hand?

Put very simply, the brain makes a ‘reasonable guess’ that this hand is indeed the physical hand attached to your body. You feel that the rubber hand is yours, nerve endings and all — and you couple your physical sensations to the sight of the hand. This means that in your subjective experience, the rubber hand is the hand that has the sensation. Ehrsson et al. write that their results suggested that “multisensory integration in the premotor cortex provides a mechanism for bodily self-attribution”. When our brains receive information from two differing sensory inputs (sight and touch), these are coupled: the brain couples the stroking sensation with the imagery of a nearby hand being stroked, and this is enough for the brain to attribute the hand to its self, to acknowledge it as its own.

This simple experiment shares many principles with the concept of Virtual Embodiment, and has inspired research in the field that we will present in this entry.

Some people have out-of-body experiences (OBEs) at the onset of sleep or upon waking. Often they may feel that they are floating above their bodies. VR may help to study such states of consciousness by systematically inducing them.

Virtual Body Illusion

In a later experiment by Lenggenhager et al. (2007), not only the hands of the subjects but their whole bodies were replaced with virtual representations. Moreover, in the experiment they present, the bodies are seen from behind. In effect, they were simulating out-of-body experiences, with very interesting results.

The experiment was conducted as such: the subjects wore a Head-Mounted Display which projected imagery from a camera located behind the subjects. As such, the subjects could see a representation of their bodies “live”, but from behind. Of course, this is deviating slightly from how we normally experience life. Although the subjects saw their body responding and performing actions in real time as under normal conditions — there is a logical dissonance due to the mismatch between the location of the subjects’ eyes in the virtual environment, and what these eyes see. Effectively, the user is seeing inside a pair of “portal” binoculars (HMD), which display the light from, if not another dimension, then at least a few feet away. And this will be a part of the point.

What is interesting about this experiment is not simply that the users feel present where they do not physically reside, but that the distance is only a few feet off. The users feel present right outside of their bodies. The situation is familiar — the body and the environment are there — but everything is a bit off. What is interesting to investigate, then, is how the body adapts to this. Will it accept that it now controls its body from a third-person perspective, similarly to how Stratton’s subjects got used to seeing the world upside down?

What they studied was basically whether this change of perspective had an impact on where the users felt embodied. To investigate this, the researchers stroked the subjects as in the Rubber Hand Illusion, except on their backs — so that the stroking was visible to them through the camera feed. The question is then where this physical feeling will be attributed — how will the phenomena of subjective experience present themselves to the subjects?

Out-of-body experiences can be achieved virtually by using sensory impressions from other locations, for instance five meters behind you, as in the experiment by Ehrsson (2007). You can then effectively look at yourself from the outside.

First of all, to be clear on this — the sensory data of being stroked will initially be provided by the nerves in the physical shoulder of the user. The problem of the brain, however, is that the shoulder is out of sight — blocked by the Head-Mounted Display. There is, however, the visual impression of a shoulder on a person standing in front — being scratched in exactly the same way. Although the nerve-endings definitely feel the stroking, the problem is that where this feeling will be placed in our subjective experience is not the responsibility of the shoulder, but rather the brain. And, as the placement of the physical feeling in the bodily self-consciousness is largely dependent on vision for coordinates, what will happen? How will the brain fix this sensory discord?

In a beautifully written article in The New Yorker, its author, Rothman, describes the experience of Thomas Metzinger, one of the co-authors of the research paper, as he underwent the experimental conditions:

“Metzinger could feel the stroking, but the body to which it was happening seemed to be situated in front of him. He felt a strange sensation, as though he were drifting in space, or being stretched between the two bodies. He wanted to jump entirely into the body before him, but couldn’t. He seemed marooned outside of himself. It wasn’t quite an out-of-body experience, but it was proof that, using computer technology, the self-model could easily be manipulated. A new area of research had been created: virtual embodiment.”

“Are We Already Living in Virtual Reality?” — The New Yorker has a brilliant long read on Virtual Embodiment that features interviews with VR and consciousness researchers Prof. Mel Slater and Prof. Thomas Metzinger.

Phantom Pain

Another curious potential effect of Virtual Embodiment is the possibility of phantom sensory impressions. Handling virtual objects while being embodied, for instance, may convince your body to expect pain or touch — and so this is, somehow, actively generated. Because of this, VR may be a way to study how phantom pain is created, and further how it can be alleviated. For instance, several studies show how VR can embody a subject missing a leg in a body with two legs, similarly to traditional mirror therapy treatment, which is effective in reducing phantom pain. Again, what may be most interesting here is the possibility of systematically creating the phenomenon and studying it afterwards. For instance, as Metzinger is quoted on in The New Yorker’s article, it may be supposed that phantom pain is created by a body model that does not correspond to physical reality. This is the case for phantom pain in VR: it is not based on the physical reality; you are relating to a virtual reality instead. Similarly, those with real phantom pain may also be relating to a certain kind of “virtual reality”, but rather one in the format of their skewed narratives — maintained by their minds instead of by a computer.

That the narrative, worldview and consciousness our brains experience and generate are often not the best match with reality is nothing new. For Matrise, these concepts recall the conclusion of our three-part series towards a metaphysical standpoint on VR, in which we discussed VR as exemplifying the abstracting tendencies of our minds. These entries can be read at Matrise, and were called: 1) On Mediums of Abstraction and Transparency, 2) Heidegger’s Virtual Reality, and 3) The Mind as Medium.

Virtual Embodiment for Social Good

Now that we have discussed the concept of Virtual Embodiment, it is natural to discuss what this knowledge can be used for. As discussed already, experiments in VR that hack our self-models may provide useful knowledge of the structure of our self-consciousness. Apart from this general knowledge, some of it may also find practical use in applied VR for specific scenarios.

Racial Bias

A very exciting paper that describes work utilizing virtual embodiment is one by Banakou, Hanumanthu and Slater. In the project, they embodied White people in Black bodies, and found that this significantly reduced their implicit racial bias! The article can be found and read in its entirety here (the abstract is available to all).

Domestic Violence

Another interesting project, by Seinfeld et al., is one in which male offenders of domestic violence were embodied in the role of a female victim in a virtual scenario. First, the male subject is familiarized with his new female virtual body and the virtual environment. When the body ownership illusion, or virtual embodiment, has been achieved, a virtual male enters the room and becomes verbally abusive. All this time, the subject can see his own female body reflected in a mirror, with all its movements corresponding to his own. After a while, the virtual male starts to throw things around and appears increasingly violent. Eventually the situation escalates, and he moves into what feels like the subject’s personal space, appearing threatening.

They write:

“Our results revealed that offenders have a significantly lower ability to recognize fear in female faces compared to controls, with a bias towards classifying fearful faces as happy. After being embodied in a female victim, offenders improved their ability to recognize fearful female faces and reduced their bias towards recognizing fearful faces as happy.”

The article can be read in its entirety at ResearchGate.

Staying Updated in the field of Virtual Embodiment

Research on Virtual Embodiment is happening continuously. To stay updated on this area of VR research, I enjoy following Mel Slater, Mavi Sanchez-Vives and Thomas Metzinger on Twitter. Last but not least, I would stay updated on Virtual Bodyworks on Twitter, of which both Sanchez-Vives and Slater are co-founders.


N.B: This entry lies at the centre of Matrise’s interests, and we are planning on writing several entries on this topic further in philosophical directions. Have any ideas or want to contribute? Please contact us.

Literature list

 

 


Apple, Mac and Virtual Reality

N.B: This blog entry is in Matrise’s category “Lights”, which holds more technical, often smaller posts that concern actual and recent events. These entries stand out from other entries at Matrise, which are often more conceptual, ideal and philosophical. Lights entries need not be very related to VR, though they will always be related to computer science. You can read about Matrise here.


Apple has never created computers with much graphical power. Although Macs are often preferred by those working with media applications for video and photo editing, these kinds of operations need a good CPU more than a good GPU. This means that the Mac has never been a good candidate for gamers, who require heavy graphical power to run their games. Unfortunately, this bitter ripple effect of the Mac’s crappy GPUs also extends to VR support. As the Mac has never really been a candidate for serious gaming, Apple has been left out of the loop by HTC Vive, Oculus and the rest, simply because none of their machines would meet the minimum requirements for running VR.

So although the choice not to stuff a GTX 1080 Ti into a MacBook has secured its ability to look pretty and slim, it has been disappointing for developers and VR enthusiasts with a fondness for macOS.

External GPUs for Mac

Last year, Apple revealed that their new operating system, macOS High Sierra, would take steps to support VR on the Mac. As part of this, SteamVR for Mac was released, and support for external Graphics Processing Units (eGPUs) was added as well — finally offering a way around the weak GPUs that have limited the Mac’s use for gaming and VR purposes.

Thunderbolt

The latest MacBook Pro series, for instance, has four Thunderbolt 3 ports. Thunderbolt 3 supports transfer speeds of up to 40 Gbps, which is below a full internal PCIe connection but still enough bandwidth to drive an external GPU. This has opened the possibility of using the slim, pretty laptop for lectures, meetings or writing at home, while still being able to turn the same laptop into a graphical beast by plugging in the eGPU. You bring the light parts, and leave the heavy ones.

The Sonnet eGFX Breakaway Box, for connecting a graphics card externally via a Thunderbolt 3 port. In Matrise’s eGPU we currently host a Sapphire AMD Radeon RX 580, which does a good job of driving the HTC Vive from a MacBook Pro 15.

In the fall of 2018, on the introduction of their new eGPU support, Apple partnered with Sonnet to sell eGPU cards with a Sonnet cooling chassis from the Apple Store. As the support for eGPUs was still in beta, Apple only sold the eGPUs to registered Apple developers. Matrise bought one, obviously, as this opened up VR development and testing on the Mac.

In the beginning (the beta stages), the support for this was decent, but slightly annoying. Every time you plugged in the eGPU you had to log in and out of your account, and sometimes there was trouble getting the screens connected. For the last few months, however, the support has felt more solid, with an icon in the menu bar that can be used to eject the eGPU. You no longer have to log out every time you connect it, which simplifies the workflow for those who use this to power, say, one 4K screen and another WQHD display at their workstation.

The Office. Apart from VR development, the eGPU is useful for giving graphical power to external monitors while also providing power to the laptop. For this setup of two above-HD screens, only one Thunderbolt cable is used.

Apple and VR

Although Mac users now have the possibilities that come with increased graphical power, this does not mean that VR and Apple are a great match yet. They have, however, lately opened their eyes to the fact that they need to support developers of this new medium. Last month they introduced their new macOS “Mojave”, whose “Dark Mode” we discussed in our previous “Lights” entry. What is perhaps more important, however, is that Mojave will have plug-and-play support for the new HTC Vive Pro (which Mac users can now actually use, thanks to the eGPU support). Matrise has ordered an HTC Vive Pro kit, and will post a performance test using an eGPU in Mojave when it arrives.

The HTC Vive Pro is to receive plug-and-play support in the new macOS "Mojave".

Although the Mac now has the technical means to create and view VR in the same way that ordinary Windows PCs have, this does not mean that it stands equal before the task. The legacy of long years in which Macs could not really run any VR games still stands, and there are therefore very few titles that support Mac users. Hopefully this will change in the future, now that Apple at least plans for the road ahead to be friendly rather than hostile towards these technologies.

Modular Computing
What is interesting in the way these eGPUs work is how this kind of modular computing may be the future for laptops. Stationary computer parts have the benefit that they can be as big as they need to be, which avoids the cost and labour of fitting powerful components into thin laptops. One could imagine scenarios where it is normal to have a strong GPU, and perhaps even CPU, at home and at work, along with some monitors, to augment your computing once you are there — while always keeping the base part (your laptop) in your bag to go. This workflow is reminiscent of the Nintendo Switch, which "switches" from console to portable simply by removing it from its dock.

What may be even more convenient than modular computing, we can admit, is cloud computing. When network transfer speeds finally become good enough, we could upload all our computing to a queue in the sky, to be performed by some quantum computing centre in a desert somewhere… Probably.


What do you think of Apple and VR? Could you imagine the modular computing scenario working in your everyday life? Please comment below.

Inner as Outer: Projecting Mental States as External Reality.

Introduction to Mysticism

Within Mysticism, the merging of Self and World — Inner and Outer — is seen as the utmost aim. Mysticism can be found within most of the world religions, such as Buddhism, Christianity, Hinduism and Islam — and its aim is often formulated as union with God. Depending on the religion, however, the degree to which Mysticism is the common way of practicing the religion varies. Although many religions have such contemplative practices, they are not always adopted by the religion’s followers at large.

When discussing «Union with God», it should be noted that the term «God» varies in its meaning between these religions. The contemplative practices often have significantly varying metaphysics, for instance Monotheistic (Christianity), Polytheistic (Hinduism), and relatively Atheistic or Agnostic (Buddhism). Be this as it may, their descriptions of the experience of this merging of Self and God are often strikingly similar. These states of enlightenment are often described as ecstatic, in which the conscious experience cannot be placed within our normal frames of language or understanding.

What also unites the different traditions is that such states of consciousness are usually worked towards through contemplative practices such as yoga, meditation or other disciplines of focus and conscious attention. Other techniques for achieving these ecstasies have been ascetic ones, such as fasting, waking, isolation — or other ways of stirring the Self to war.

The experience of seeing the Inner as Outer, and the Outer as Inner, is often described as the feeling that living itself is an experience of seeing and perceiving Oneself and/or God. Within this worldview, there is no Self relating to anything external.

Non-duality: synchronization of Inner & Outer

The concepts of merging Inner and Outer, or Self and God, can be viewed in either very material or spiritual terms. Although materiality and spirituality do not have to differ metaphysically, separating them gives us some communicative benefits, and Mysticism may be explained and spoken of from both perspectives. Discussing the Inner as Outer purely «scientifically», if you will, makes sense in that all our perceptions of the Outer world are indeed created Inner, and as such, Reality will always be a synergy of Inner and Outer. We know that we do not see, and never have seen, anything which we ourselves do not actively generate. As neuroscientist and consciousness researcher Anil Seth put it, "our brains are actively hallucinating our conscious reality".

States where a subject experiences the Inner and Outer as 'one' are often referred to as «non-dual». When speaking of Inner and Outer, we tend to implicitly reinforce the Self and the World as a duality (when pitching a solution we often have to pitch the problem first). By using the word «non-dual» instead of 'one', we may pinpoint the nuance that it is not a duality of separation, but neither is it completely "same-same". Although it is non-dual, it is not all same or flat — least of all static!

Although we classify and divide our reality, fundamentally what we perceive is a stream of experience, which in every sense is simply "reality" before it is divided, and, again, actively created by us. This is not to say that there is no external reality or world — but it is definitely to say that all which is external is perceived first and foremost, solely, internally. Experientially, externality has never been perceived, except as a subcomponent of internality.

A vase, or two faces? Each defines the other, and neither exists without the other.

Experiencing and Sensing the Non-Dual

This causal explanation, however, leaves out the experiential aspects of non-duality. Although it may make sense on paper, it matters little to us, as we absolutely perceive the world as dual — as subjects relating to a World. Within Philosophy, this traditional way of adhering to and speaking of the world is referred to as the subject-object dichotomy. Although the degree to which we adhere to this way of thinking varies between cultures and continents, it is nevertheless an essential part of the human experience which we share.

Where the material explanation differs from the spiritual in this sense is that the spiritual concern is to experience the Inner as Outer, not to understand it cognitively. Towards that end, meditation practices such as Mindfulness and Yoga have existed to increase wellbeing by increasing the degree to which one feels in union with God, or, for those who do not fancy the term, the degree to which one is at peace with oneself and the world.

Contemplative practices such as yoga and meditation have, over the last fifty years, become more popular in western societies. Although they have been subject to a certain degree of metaphysical refinement in recent years, these methods are nevertheless largely old and traditional. The most common contemplative practice we see today is adapted from the Vipassana tradition and commonly known as Mindfulness. These methods are now frequently used in the psychological treatment of anxiety and depression, and research has in recent years started to uncover the benefits of learning to sit quietly with your mind and, well, deal with shit, or see it for what it is.

In the next section, we will discuss an approach utilizing Virtual Reality to aid in Mindfulness meditation — which can help to perceive the Inner as Outer.

A common belief is that the aim of contemplative practices is to empty the mind. In a sense, this can be said to be correct, in that meditation practices often seek to eradicate, dilute or cancel self-referential narratives.

The effects of Mindfulness meditation

The essence of Mindfulness and similar contemplative practices lies in their manipulation of identity. We stated "the problem" of Mysticism as the gap between Self and Other — and for this separation to exist, we must necessarily have a relatively well-defined sense of self. For most of us, this tends to be limited to the cognitive processes that constitute our mental narrative (the personalized voice in our heads, our formulated will, and how it appears to direct our actions and plan our lives). To a far lesser extent it is our bodies, although these also contribute to our self-consciousness.

Mindfulness is about being attentively present, in each moment, to one's state of mind. When doing such focus exercises directed at the mind, and observing these mental processes closely, the idea of them as solid things starts to unravel. When we instead see them as thoughts, from a distance, they appear untangled to us, and we perceive our own existence as distinct from those thoughts.

Virtual Reality Biofeedback as Meditation aid

One of the great benefits of VR is its ability to project and represent data in the format of the reality encompassing us. Within the context of this entry, we could therefore say that VR can simulate what we perceive as the Outer. The question may then be asked: how can we project our Inner into this medium of Outer?

Although I believe we will see more work on VR biofeedback within this domain in the future, in this entry we will focus on one research paper in particular to exemplify our case. At last year's CHI conference, the world-leading conference on Human-Computer Interaction, Joan Sol Roo and his colleagues presented their work on Inner Garden: a mixed-reality sandbox for mindfulness. The artifact is a physical sandbox, which the user can shape into a terrain. The sandbox is visually augmented by a projector with colors and shapes — and physical changes to the sandbox alter the output of the projector, which delivers terrain information such as sea levels and green growth.

The sandbox is not just physical, however; by placing a physical avatar in the sandbox, you can enter the land you created in immersive VR. A 3D model of the land you shaped physically can then be seen virtually, from the viewpoint of your placed avatar.

The sandbox, where the heights of the sand have been turned into an island by the projector.

Attached, to measure your inner states, are both breathing and heart rate sensors, which are coupled to provide visual and auditory feedback. In this way, you can use your breath to control the environment: the rhythm and breaking of the waves. The Inner Garden represents your inner state, and by practising breathing techniques, the flora of your world grows greener and more animals appear.
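To make this kind of mapping concrete, here is a minimal, hypothetical sketch of how biofeedback could drive a virtual environment. It is not taken from the Inner Garden paper; the sensor functions, thresholds and weights are all assumptions for illustration:

```python
import time

def read_breathing_rate():
    """Placeholder for a real breathing sensor; returns breaths per minute."""
    return 8.0   # a hypothetical, calm reading

def read_heart_rate():
    """Placeholder for a real heart rate sensor; returns beats per minute."""
    return 62.0

def calmness_score(breaths_per_min, beats_per_min):
    """Map slower breathing and a lower heart rate to a 0..1 'calmness' value."""
    breath_calm = max(0.0, min(1.0, (16.0 - breaths_per_min) / 10.0))
    heart_calm = max(0.0, min(1.0, (90.0 - beats_per_min) / 40.0))
    return 0.7 * breath_calm + 0.3 * heart_calm

def update_world(calm):
    """Translate calmness into environment parameters (greenery, wave rhythm)."""
    greenery = calm                    # fraction of the terrain covered in plants
    wave_period = 2.0 + 6.0 * calm     # calmer breath gives slower, longer waves
    print(f"greenery={greenery:.2f}, wave_period={wave_period:.1f}s")

while True:
    calm = calmness_score(read_breathing_rate(), read_heart_rate())
    update_world(calm)
    time.sleep(1.0)   # update the virtual garden once per second
```

The point is simply that the inner signal is rendered back to the meditator as outer landscape, closing the loop between the two.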

In this way, Inner Garden works as a great example of representing Inner phenomena as External Reality. Very conceptually interesting, and hopefully one day we will also see empirical studies on similar artifacts.

You can read more about Inner Garden, which received an honorable mention at CHI’17, here.


What do you think? Do you have any ideas for VR applications using biofeedback?  Please comment below.


Camera Lucida

We have previously discussed several interesting optical technologies of relevance to VR. For instance, we discussed the fascinating 17th-century Camera Obscura, and in our entry on the History of VR we discussed the 19th-century Stereoscope, a technology still used in modern-day VR head-mounted displays.

In this entry we will discuss yet another, similarly old, optical technology, which leans more towards Augmented Reality than Virtual Reality: the Camera Lucida.

Invented by physicist William Hyde Wollaston in 1807, the Lucida was a device praised by artists and illustrators for its aid in their art. Similarly to the earlier Obscura, the optical artifact could project and redirect images from the external world, making it easier to recreate them in ink. While the Obscura required a dark room to project its images on a surface, the Lucida had the benefit of redirecting the light directly to its user's eyes, and was thus more appropriate to use in a lit office, or even while travelling.

Apart from the underlying technical difference, the practice of use was relatively similar; the user would perceive the redirected light representing the object of projection on the surface that should be drawn, and by following the lines with a pen, the image could be reproduced in ink. To draw objects far off, the light could also be captured by a telescope, or for very small details, even a microscope, as seen in the illustration below.

Camera Lucida and Modern day AR technology
The Camera Lucida shares many conceptual and experiential similarities with Augmented Reality (the concept of augmenting our real world with virtual phenomena). When a user looks through the Camera Lucida, a 'virtual' representation of whatever the Lucida is directed at is added to and combined with the user's normal vision. In AR goggles such as the Microsoft HoloLens, the concept remains the same, only the HoloLens' holographic images originate from software rather than redirected light from the external world.

 

The Microsoft HoloLens, an AR Head-Mounted Display by Microsoft.

Obviously, this is not the only difference between the two — compared to modern AR tech, the functionality and applicability of the Lucida pale. The HoloLens is capable of stereoscopic imagery, and features placement and projection of almost any conceivable virtual object into the environment. Yet the beautiful Camera Lucida does share the essential underpinnings of augmenting the environment with re-presentations.

A curious example of the similarity between the two is how the Lucida is these days being recreated with (mobile) AR. Using, for instance, an iPad and its camera, the canvas and your drawing hand are displayed on the screen with a see-through image of that which you want to draw. Even better, a similar application, called SketchAR HoloLensEDU, has also been developed for the Microsoft HoloLens, and is currently being employed in teaching young artists.


Do you know of any good, useful applications within the AR domain? Please comment below!

 


Camera Obscura and the World of Illusions

A few years ago I visited the beautiful Scottish city of Edinburgh. Apart from the old pubs, the whisky and the mighty castles, the city also has attractions for those interested in the art of illusion. In a castle on one of the heights of the city, we find an example of an ingenious yet simple optical technology: the Camera Obscura. We have previously published an entry on the History of VR, where we discussed the invention of the Stereoscope as the first technology underpinning the VR of today. With a broader definition of VR, we could say that the Camera Obscura is an even earlier VR technology than the Stereoscope; already in the mid-1600s, by using the Camera Obscura, one could live-stream a photographic segment of reality at much higher refresh rates than what we manage with information technology today.

 

Four people using a Camera Obscura, all the while remaining unseen behind closed doors.

The drawing above illustrates the workings of the Camera Obscura: in a dark room, the light from the world outside is directed by a mirror through a lens, which focuses the light onto a levelled surface. Often made of white stone, the surface functions as a canvas for the photographic reflection. As this is light straight from its source, the responsiveness is immediate, and as the lens is continuously open, the pictures are moving. It is a very interesting experience to stand in the Camera Obscura of Edinburgh and, wholly undetected, watch the actions of the masses of people walking the streets outside.

We should note, however, that even the mirrors and lenses are not necessary to create this effect. The camera obscura is in essence an extremely simple concept, and the simplest version of it is called a pinhole camera: a dark room with a small hole for the light to enter through. The light that enters represents whatever reflected it — which of course is the environment outside. All light carries information in this sense, and pinhole cameras exploit it by letting the light enter through a small hole in a wall into a dark room, so the visual information can stand alone and be perceived against the dark background. In more complicated camera obscuras, lenses are used to gather and focus the light, and mirrors to redirect it.

Illustration of a Pinhole Camera, displaying an image upside down on a wall in a dark room.

As some may know, when light hits our eyes, the image that forms on the retina is actually upside down. Our brains, however, flip this back again — resulting in the world as you see it today. Traditional pinhole cameras and simple camera obscuras also suffer this effect, so the image is often seen upside down, as in the illustration above. In the Edinburgh Camera Obscura, they use lenses to maintain the normal orientation. Effectively, the image is inverted twice — once by the aperture, and further back by the lenses. For those who want to try to achieve this at home, we recommend this experiment, which highlights the workings of the lenses.
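The geometry behind the inverted image is just similar triangles: a point above the optical axis projects through the pinhole to a point below it, and the image is scaled by the ratio between the distance from hole to wall and the distance from hole to object. A minimal sketch, assuming an idealized pinhole (the numbers are made up for illustration):

```python
def pinhole_projection(object_height_m, object_distance_m, wall_distance_m):
    """Project a point through an ideal pinhole.

    Similar triangles give: image_height / wall_distance = object_height / object_distance.
    The negative sign encodes that the image lands on the opposite side of the
    axis, i.e. the picture on the wall is upside down.
    """
    return -object_height_m * (wall_distance_m / object_distance_m)

# A hypothetical example: a 1.8 m tall person standing 10 m from the hole,
# projected onto a wall 2 m behind the hole.
image = pinhole_projection(1.8, 10.0, 2.0)
print(f"Image height: {image:.2f} m")   # -0.36 m: 36 cm tall, and inverted
```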

The Camera Obscura used for the art of drawing.

Another interesting use of the camera obscura, and a source of its popularity, was as an aid to drawing. By projecting directly onto the canvas one is drawing on, the lines of the environment can be traced more easily. What becomes increasingly clear here is the role of the camera obscura in the creation of the modern photographic camera. The technology is essentially the same, only instead of a continuous stream of light onto a canvas, we have a limited, controlled exposure onto a surface that responds to the light. It is in relation to this exposure that photographic settings make sense: aperture (how much light we let in), ISO (the sensitivity of the film or image sensor) and shutter speed (how long the light is let in). We are still playing with light and lenses.
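Aperture and shutter speed trade off against each other in a simple way: stopping down the aperture by one stop while doubling the exposure time leaves the image equally bright (ISO then only amplifies whatever light was captured). A rough sketch of that bookkeeping, using the standard exposure value formula and illustrative numbers:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Exposure value EV = log2(N^2 / t); equal EV means an equally bright image,
    and each +1 EV step means half as much light reaching the film or sensor."""
    return math.log2(f_number ** 2 / shutter_seconds)

base = exposure_value(f_number=8.0, shutter_seconds=1 / 125)

# Stopping down one stop (multiplying the f-number by sqrt(2)) while doubling
# the shutter time leaves the exposure unchanged:
compensated = exposure_value(f_number=8.0 * math.sqrt(2), shutter_seconds=2 / 125)

print(round(base, 3), round(compensated, 3))   # the two values match
```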

The World of Illusions

If you visit Edinburgh to look at their old Camera Obscura from the 1850s, you will find in the same castle what they call «The World of Illusions»: five floors containing over 150 different optical illusions. Kaleidoscope rooms, 3D stereoscopic mirrors, mazes of mirrors and much more. We will discuss and explain a few of these in more detail, the first being "The Ames Room".

The Ames Room

The Ames Room, showing three men of similar size.
An overview of the Ames Room, dissolving the illusion. The illusion illustrates our limited ability to perceive actual depth (3D).

For the illusion of the Ames Room to work, you have to see it from a certain perspective, which in the illustration above is referred to as the viewing peephole. The Ames Room in Edinburgh, unlike our illustration, also uses chessboard-like floor tiles to further strengthen the illusion; from the viewing hole, the tiles appear to be of equal size. The illusion is a funny one, and an obvious photo opportunity.
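The trick works because, from a single peephole, the eye has only the angle a person subtends to judge their size: a person standing twice as far away subtends roughly half the angle, and the distorted room is built so that this difference in distance is hidden. A small, hypothetical sketch of that relationship:

```python
import math

def subtended_angle_deg(height_m, distance_m):
    """Angle (in degrees) that an object of a given height subtends at the eye."""
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

# Two equally tall people, one in the near corner and one in the far corner
# of a hypothetical Ames Room:
near = subtended_angle_deg(1.8, 3.0)   # ~33.4 degrees
far = subtended_angle_deg(1.8, 6.0)    # ~17.1 degrees

# Because walls and tiles are distorted to look rectangular from the peephole,
# the viewer reads both people as equally distant, so the person in the far
# corner simply looks half as tall.
print(f"near: {near:.1f} deg, far: {far:.1f} deg")
```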

The Vortex Tunnel

Photograph from the World of Illusions in Edinburgh.

Another illusion, which is more bodily, is their Vortex Tunnel. You are in a room where a bridge connects the two ends. The task is to walk over the bridge (a fully stable, stationary bridge). Now, this shouldn't necessarily be a problem, if it weren't for the fact that the cylindrical vortex walls are spinning around you. It doesn't matter how hard you try, you simply cannot walk a straight line: it is as if gravity draws you toward the rails of the bridge. If you close your eyes, however, everything is fine.


Do you know of any other fun illusions or old optical technologies? Please comment below!
