Apple, Mac and Virtual Reality

N.B.: This blog entry is in Matrise's category "Lights", which holds more technical, often smaller posts that concern actual and recent events. These entries stand out from other entries at Matrise, which are often more conceptual, idealistic and philosophical. Lights entries need not be closely related to VR, though they will always be related to computer science. You can read about Matrise here.


Apple has never created computers capable of much graphical power. Although Macs are often preferred by those working with media applications for video and photo editing, these kinds of operations need a good CPU rather than a good GPU. This means that the Mac has never been a good candidate for gamers, who require heavy graphical power to run their games. Unfortunately, this bitter ripple effect of the Mac's weak GPUs also extends to VR support. As the Mac has never really been a candidate for serious gaming, Apple has been left out of the loop by HTC Vive, Oculus, etc., simply because none of its machines met the minimum requirements for running VR.

So although the choice not to stuff a GTX 1080 Ti into a MacBook has secured its ability to look pretty and slim, it has been disappointing for developers and VR enthusiasts with a fondness for the Mac operating system.

External GPUs for Mac

Last year, Apple revealed that their new operating system, macOS High Sierra, would take steps to support VR on the Mac. As part of this, SteamVR for Mac was released, and support for external Graphics Processing Units (eGPUs) was added as well.

Thunderbolt

The latest MacBook Pro series, for instance, has four Thunderbolt 3 ports. Thunderbolt 3 supports transfer speeds of up to 40 Gbps. That is actually less bandwidth than the internal PCIe x16 connection a desktop GPU enjoys, but it is enough to drive VR workloads through an external enclosure. This has opened the possibility of using the slim, pretty laptop for lectures, meetings or writing at home, while being able to turn the same laptop into a graphical beast by plugging in the eGPU. You bring the light parts, and leave the heavy ones.
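For a rough sense of scale, the two link rates can be compared with a back-of-the-envelope calculation. This is a sketch using nominal link rates only; real-world throughput is lower on both buses:

```python
# Back-of-the-envelope bandwidth comparison (nominal link rates, not
# real-world throughput).
def gbps_to_gbytes(gbps: float) -> float:
    """Convert gigabits per second to gigabytes per second."""
    return gbps / 8

# Thunderbolt 3: 40 Gbps total link rate
thunderbolt3 = gbps_to_gbytes(40)

# PCIe 3.0 x16: 8 GT/s per lane, 16 lanes, 128b/130b encoding overhead
pcie3_x16 = gbps_to_gbytes(8 * 16 * 128 / 130)

print(f"Thunderbolt 3: {thunderbolt3:.1f} GB/s")   # 5.0 GB/s
print(f"PCIe 3.0 x16:  {pcie3_x16:.1f} GB/s")      # 15.8 GB/s
```

So an eGPU gets roughly a third of the bandwidth an internal desktop slot would provide, which in practice costs some performance but is far from a showstopper.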

The Sonnet eGFX Breakaway Box, for connecting a graphics card externally via a Thunderbolt 3 port. In Matrise's eGPU, we currently host an AMD Radeon RX 580 "Sapphire". This does a good job of supporting the HTC Vive on a MacBook Pro 15.

In 2017, on the introduction of their new eGPU support, Apple partnered with Sonnet to sell eGPU cards with a Sonnet cooling chassis through the Apple Store. As the support for eGPUs was still in beta, Apple only sold the eGPUs to registered Apple developers. Matrise bought one, obviously, as this opened up VR development and testing on the Mac.

In the beginning (the beta stages), the support for this was decent, but slightly annoying. Every time you plugged in the eGPU you had to log out and in to your account, and sometimes there was trouble getting the screens connected. Over the last months, however, the support has felt more solid, with an icon in the menu bar that can be used to eject the eGPU. You no longer have to log out every time to connect it, which simplifies the workflow for those who use it to power, say, one 4K screen and another WQHD display at their workstation.

The Office. Apart from VR development, the eGPU is useful for giving graphical power to external monitors while simultaneously charging the laptop. For this setup of two higher-than-HD screens, only one Thunderbolt cable is used.

Apple and VR

Although Mac users now have the possibilities that come with increased graphical power, this does not mean that VR and Apple are a great match yet. Apple has, however, lately opened its eyes to the fact that it needs to support developers of this new medium. Last month they introduced the new macOS "Mojave", whose "Dark Mode" we discussed in our previous "Lights" entry. What is perhaps more important, however, is that Mojave will have plug-and-play support for the new HTC Vive Pro (which Mac users can now actually use, thanks to the eGPU support). Matrise has ordered an HTC Vive Pro kit, and will post a performance test using an eGPU in Mojave when it arrives.

The HTC Vive Pro is to receive plug-and-play support in the new macOS "Mojave".

Although Apple's Macs now have the technical foundations for creating and viewing VR in the same way ordinary Windows PCs do, this does not mean the platforms stand equal before the task. The legacy of the long years in which Macs could not really run VR games still stands, and there are therefore very few games that support Mac users. Hopefully this will change in the future, now that Apple at least plans a road ahead that is friendlier, rather than hostile, towards these technologies.

Modular Computing
What is interesting in the way we see these eGPUs work is how this kind of modular computing may be the future for laptops. Stationary computer parts have the benefit that they can be as big as they need to be, which avoids the cost and labour of fitting powerful components into thin laptops. One can imagine scenarios where it is normal to have a strong GPU, or even CPU, at home and at work, along with some monitors, to augment your computing once you are there, while always keeping the base part (your laptop) in your bag to go. This workflow may remind us of the Nintendo Switch, which changes from console to portable simply by detaching from its dock, thus "switching" to portable.

What may be even more convenient than modular computing, we must admit, is cloud computing. When web transfer speeds finally become good enough, we could upload all our computing to a queue in the sky, to be performed by some quantum computing centre in a desert somewhere… Probably.


What do you think of Apple and VR? Could you imagine the modular computing scenario working in your everyday life? Please comment below.

The History of Virtual Reality

In recent years, Virtual Reality (VR) technology has finally reached the masses. 2016 was called "The Year of VR" as several actors released their Head-Mounted Displays (HMDs) on the consumer market. While HTC, Oculus and PlayStation delivered high-quality HMDs that require external computers to run, the year also opened up for high-quality mobile VR. Both Google with their Daydream View and Samsung/Oculus with their Gear VR have provided an easier step for consumers to enter the world of VR. These mobile VR solutions offer better inertial measurement units (IMUs) than the simpler Cardboard devices, and also feature simple controllers for interaction. We now see the market spreading out both in quality and accessibility: in 2018 we have seen both the coming of the HTC Vive Pro, a more expensive high-end HMD with increased resolution, and the Oculus Go, a reasonably priced ($200) stand-alone 3DOF (3 Degrees of Freedom) HMD for newcomers.

It is natural to wonder how all of this started. Why, for instance, did we not see much VR before 2016? When it now seems relatively easy for commercial actors to push out HMDs at prices down to $200, why did it not happen sooner? Of course, we have had Oculus' development kits since 2013, but even this is very recent. When Google released their Cardboard (a simple HMD made out of cardboard and some lenses), it seemed incredible that VR could be attainable for the smartphone for only 50 cents. This, however, only points us toward how fascinatingly simple the underlying principles of VR technology actually are.

In this entry, we will trace the VR tech we see today back to its roots. We will go back about two hundred years, and then jump forward through the key innovations to the very recent technologies.

Stereoscope

A drawing of the Lothian Stereoscope, released in 1895; one of many different models.

In 1838, Sir Charles Wheatstone developed what would be the first stereoscope. Even before the camera was invented, people were viewing (drawn) images with a 3D effect through stereoscopes. Stereoscopy, that is, the perceptual illusion of depth, is achieved by displaying a slightly different segment of an image to each eye. Wheatstone achieved this by separating the two images with a piece of wood, and placing a lens that directed the light between each eye and the corresponding image. While looking through the stereoscope, our brains perceive the two images as one image, with an added 3D effect due to the differing segments. The effect simply exploits how our eyes and brain work, combining the sensory data from each eye into one impression. Most of us can recall sometimes «seeing double», when our brains have not yet fused the two differing visual impressions.
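The same principle drives every modern HMD: render the scene twice, from two viewpoints offset horizontally by the distance between the eyes. A minimal sketch of that idea, assuming a common average interpupillary distance of 64 mm (this is an illustration, not any particular headset's API):

```python
# Sketch of the stereoscopic principle: one viewpoint per eye, offset
# horizontally by the interpupillary distance (IPD). 0.064 m is a
# commonly cited average adult IPD.
from typing import Tuple

Vec3 = Tuple[float, float, float]

def eye_positions(head: Vec3, right_dir: Vec3, ipd: float = 0.064) -> Tuple[Vec3, Vec3]:
    """Return (left_eye, right_eye) camera positions for a head position
    and a unit vector pointing to the wearer's right."""
    half = ipd / 2
    left = tuple(h - half * r for h, r in zip(head, right_dir))
    right = tuple(h + half * r for h, r in zip(head, right_dir))
    return left, right

# Head at 1.7 m height, facing along -z, so "right" is +x:
left, right = eye_positions(head=(0.0, 1.7, 0.0), right_dir=(1.0, 0.0, 0.0))
print(left, right)  # (-0.032, 1.7, 0.0) (0.032, 1.7, 0.0)
```

Rendering the scene from these two positions, and showing each image to only one eye, is all the "magic" a stereoscope, a Cardboard, or a Vive needs for depth.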

Since Wheatstone, different stereoscopes have been produced all the way up to the Google Cardboard and other HMDs, which instead of drawn images, or later photographs, utilize a screen to deliver the imagery to the eyes. In fact, in the early 1900s, stereoscopes functioned as home entertainment devices, and «stereo cards» such as the one seen below could be purchased from photography companies.

Stereo card of St. Peter's Church in the Vatican. Such cards, picturing tourist attractions all over the world, could be purchased and viewed at home in a stereoscope.

Stereoscopes and modern-day Virtual Reality HMDs share the essential feature of stereoscopic depth illusion (3D). Apart from that, however, a lot has obviously happened since 1838 that we now regard as essential for the feeling of presence and realism, and that makes the technology capable of simulating realities. The most important additions have been moving images, 3D environments, interaction, and 360 degrees of orientation. With the stereoscope, images were static in every sense.

Sensorama

In the mid-1950s, however, some people saw the opportunity to spice up their stereoscopes a bit. A bold attempt at enriching the experience was the Sensorama. In addition to providing a stereoscope with motion pictures in 3D and color, all quite revolutionary, the device had fans for simulating wind, odor emitters for the smell of the environment, stereo sound, and even a moving chair!

The Sensorama, or «Experience Theater». Illustration from Morton Heilig's 1962 US patent.

Pygmalion’s Spectacles

The idea of the Sensorama, or VR in general, can, like many other innovative future-defining ideas, be found in science fiction literature. Before its conception, in the 1930s, the science fiction writer Stanley G. Weinbaum introduced the idea of «Pygmalion's Spectacles». By wearing these, the user could experience a fictional, or virtual, world with holographs, smell, taste and touch, and make the virtual come alive. Pygmalion, after whom «Pygmalion's Spectacles» were named, was a Greek sculptor who fell in love with his sculpture, and so begged Venus that it would come alive. The myth sheds an interesting light on VR as an ultimate dream of humanity: to create realities for ourselves to inhabit, or to create images in the format of reality.

Pygmalion, after whom «Pygmalion's Spectacles» were named, was a Greek sculptor who fell in love with his sculpture. He begged Venus that it would come alive. Painting by Jean-Baptiste Regnault.

Information Technology

To take a leap towards another paradigm shift in VR tech, we must enter the land of 1s and 0s. The stereoscope slowly moved from drawn images to photographs, and further to moving images with the Sensorama. None of these, however, supported spherical environments that could be perceived in all their 360°. To achieve this, certain sensors, and computation based on their input, have been necessary. The most important and interesting of these sensors has been the gyroscope.

The Foucault gyroscope, created by physicist Jean Bernard Léon Foucault.

The gyroscope was given its name by the physicist Jean Bernard Léon Foucault in 1852, who used the device as a means to prove the rotation of the Earth. The gyroscope is a device consisting of a spinning top mounted in a pair of gimbals. Its origin cannot be traced to a single invention or inventor, as spinning tops originated in many ancient civilizations; unlike the «complete» gyroscope, however, these were not used as instruments. Although Foucault's gyroscope was not the first to be used as a measuring instrument, its affordances exemplify well the usefulness of gyroscopes in VR HMDs: the important feature it affords is the measurement of rotation, the key to which lies in the spinning top's ability to rotate freely.

Gyroscopes are fun artifacts to play with, as they seem to defy gravity. While spinning, they remain stable in most positions. If a gyroscope is mounted in a device such as an HMD, the device's rotation can be measured relative to the stable spinning element, and as such we can also measure the rotation of the wearer's head. It should be noted, however, that today's gyroscopes are no longer pretty mechanical objects of brass. Although they no longer satisfy our aesthetic appetite, they have heights, widths and lengths of only millimeters, which makes it possible to place them inside smartphones and HMDs.
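In an HMD, the gyroscope reports angular velocity, and the software integrates those readings over time to track orientation. A minimal sketch of the idea, integrating yaw only (real headsets fuse gyro data with accelerometer and magnetometer readings to correct drift, and the sample stream here is made up):

```python
# Sketch: turning gyroscope angular-velocity samples into an orientation
# by simple Euler integration. Yaw-only, drift-uncorrected; illustrative.
import math

def integrate_yaw(samples, dt):
    """Integrate angular-velocity samples (rad/s) around the vertical axis,
    sampled every dt seconds, into a yaw angle in radians."""
    yaw = 0.0
    for omega in samples:
        yaw += omega * dt          # Euler integration step
        yaw %= 2 * math.pi         # wrap to one full turn
    return yaw

# 100 samples at 1000 Hz of a constant pi/2 rad/s head turn (0.1 s total):
samples = [math.pi / 2] * 100
print(math.degrees(integrate_yaw(samples, dt=0.001)))  # 9.0 degrees
```

Because each step adds a small measurement error, pure integration drifts over time, which is exactly why modern IMUs combine the gyroscope with other sensors.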

The Sword of Damocles

The Sword of Damocles, an old Greek cultural symbol of mortality, ever close to those in power. We see the sword hanging from a single horsehair over the head of Damocles.

Fifty years ago, in 1968, Ivan Sutherland and his student Bob Sproull created the first computer-driven stereoscopic (3D) head-mounted graphical display with 360° head-tracking. The HMD was not exactly lightweight, and was named after the «Sword of Damocles» because of the heavy stand hovering over its user's head. As can be seen in the illustration below, the head-tracking was mechanical, and did not in fact use a gyroscope. Sensor-based tracking later proved the more fruitful approach, as it avoids a massive apparatus rotating over the user's head.

The field of view and graphical fidelity of the Sword of Damocles were obviously quite low, yet it is the first widely known HMD, and it has since its dawn inspired and launched decades of VR research.

The first Virtual Reality Head-Mounted Display, named after the Sword of Damocles, because of its great weight hanging over the user’s head.

Towards the modern HMD

Since Sutherland and Sproull's invention, HMDs were created and used more and more within research. As computational power became faster and cheaper, HMDs decreased in size and increased in field of view, graphical fidelity and refresh rate. Yet even in the 1990s, the technology was still expensive and poor in terms of graphical realism. It often caused cybersickness due to low refresh rates and high motion-to-photon latency. For this reason, as with any really powerful computer of that time, VR was reserved for research universities that could invest in the technology, or businesses with resources to experiment with it. There were some attempts at commercializing VR for gaming purposes, such as the Sega VR (announced for the Sega Genesis, but never released) and the Nintendo Virtual Boy (discontinued within a year of release). Neither company has since returned to the technology, although Nintendo in 2010 released the Nintendo 3DS, which utilizes a stereoscopic display that does not require any glasses.


Image of a 3D model of the HTC Vive Pro.

Conclusion

Since the Sword of Damocles, VR technology has undergone small incremental changes leading to where we are today, mainly as a result of general computer and graphics research, and the natural progression of Moore's Law; today our processors are smaller and more powerful, and our screens are of higher resolution.

In addition to this, however, certain very recent technologies have also impacted VR as we know it today. In Matrise's glossary, we briefly present and define some of these technologies. Among those you can read about are Foveated Rendering and Low Pixel Persistence Modes.


Did we miss anything? Any thoughts are welcome in the comments section.


Virtual Reality Journalism

Journalism is largely defined by the medium it uses to convey its message. Over the last hundred years, it has moved from medium to medium: from text to radio, and further from photographs to video. With each new medium, the fidelity of the message has steadily increased. This is perhaps especially clear now, with the use of Virtual Reality (VR) technologies for journalistic purposes. By using 360° stereoscopic (3D) cameras, we are getting very close to capturing subjective realities at given points in spacetime.

The Ultimate Empathy Machine

In his TED Talk, Chris Milk describes the potential of VR for creating "the ultimate empathy machine", which has since been the subject of extensive debate. The message is that we may be more empathetic towards others if we can "literally", or at least virtually, view a situation through their eyes. Many journalistic projects have focused on refugees or war zones, such as the stories "Fight for Fallujah" and "The Displaced". In these stories, the camera works as the eyes of the observer, as the screen in the VR goggles is mounted directly in front of the eyes. The user is presented with the story through a first-person perspective, and may feel present in the story, as if actually transported to the environment. This presence, and the resulting perceived realism, is what is believed to increase empathy in viewers. For this reason, Google News Lab's ethnographic study on Immersive Journalism describes VR news as better fitting the term "storyliving" than "storytelling", indicating that the user feels part of the story being conveyed.

'See for yourself': is the NYT pitching their VR production by indicating that you can see the situation for yourself, instead of adhering to the version mediated by a journalist who has been there?

Critics of VR as the "ultimate empathy machine", or as capable of delivering "storyliving", say that you cannot possibly know what it is like to be in a refugee camp while lying on the couch on a Friday evening with your VR goggles and a glass of wine. And, of course, you cannot. It may, however, prove a more empathetic instrument than regular video, as it may seem more real and thus affect us differently. It is no wonder that it may seem more 'real' when it is presented in the 'format of reality'. Other critics agree that it may have the power to make viewers more empathetic, yet think it may be unethical to use it as such. Should journalists have this power: to distribute realities to news consumers, and to affect people this strongly through technologies we are not yet used to? These critics may fear the "brainwashing" potential of the technology. It is easier to distance oneself from a 21-inch screen than from an encompassing, immersive virtual reality one inhabits. Perhaps the most crucial question is whether it is brainwashing at all, if what you are shown is in fact real.

Fake News

This brings us to another point of VR journalism: how hard it is to manipulate the content. The journalist cannot hide behind the camera. There is no artificial lighting, no narrowly framed shot. The camera shoots in 360° horizontally and vertically; it is a totally observant witness at that given space and time. In a sense, news in 360° is far less directed than traditional flat video: the viewer chooses which part of the video he or she wants to see. Naturally, the journalist still plays an active role, choosing where to shoot and doing the final edit; even so, it is a great shift from traditional video footage. As we all know, during the last few years the concept of 'fake news' and mistrust in the media have arisen. The transparency that shooting in 360° 3D offers may help combat this. Perhaps journalism also has to adapt to these changes, and rather deliver (immersive) content on which its readers themselves can decide and conclude. Euronews, one of the largest European news agencies, argues that this is why they have produced several 360° videos each week for two years now. In the VR session of the Digital World Conference in 2016, Editor-in-Chief of Digital Platforms Duncan Hooper stated that they "want[ed] to let [the users] make their own decisions, not tell them what they should be watching, not to tell them what they should be thinking".
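The "viewer chooses the shot" idea has a simple technical core: a 360° frame is commonly stored as an equirectangular image containing the whole sphere, and the viewer's head orientation selects which pixels are displayed. A sketch of that mapping from a viewing direction to a pixel (the frame size is just an example):

```python
# Sketch: mapping a viewing direction (yaw, pitch in degrees) to a pixel
# in a W x H equirectangular 360° frame. The whole sphere is in the frame;
# the viewer's orientation, not the editor, picks the visible region.
def direction_to_pixel(yaw_deg, pitch_deg, width, height):
    """Yaw in [-180, 180), pitch in [-90, 90] -> (x, y) pixel coordinates."""
    x = int((yaw_deg + 180.0) / 360.0 * width) % width
    y = int((90.0 - pitch_deg) / 180.0 * height)
    return x, min(y, height - 1)

# Looking straight ahead lands in the centre of a 3840x1920 frame:
print(direction_to_pixel(0, 0, 3840, 1920))   # (1920, 960)
```

A 360° player samples a whole viewport of such pixels around this centre point every frame, which is why turning your head is, in effect, re-editing the shot.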

It is, however, rather naïve to believe that immersive content alone can deliver objective truths, no matter how closely the images correspond to reality. When the videos themselves lack a clear message or narrative, it is natural to imagine how they may instead be used as building blocks for constructing a narrative elsewhere. Besides, consider the concrete example of news coverage of the Israel and Palestine conflict. Will the journalist choose to show immersive footage of a knife attack in Jerusalem, or of deadly shots by the border patrol in Gaza? Both would be correct to show, but the example illustrates that, to a certain extent, the problem of news objectivity and fake news is not one of facts versus non-facts, but of which facts are focused on. Immersive Journalism is no silver bullet in this regard; that is not to say, however, that it may not find a natural place in news coverage.

VR Journalism at the University of Bergen

During the spring of 2018, I taught 20 undergraduate students VR programming, 360° video shooting and editing, and photogrammetry. The aim was that the students should be able to create their own prototypes delivering Immersive Journalism. As the rules and practices of this new concept are not yet well established, we did not teach the students exactly how to solve their tasks, but rather how to experiment with the novelties of the medium and try to innovate and create new genres. This is often called 'innovation pedagogy'. The end result was four brilliant prototypes, which were presented at the Norwegian Centre of Excellence (NCE) Media's media lab in Media City Bergen. We discuss two of these here. Interested in the other two? They are mentioned in an entry where we go more in depth, philosophically, on the concept of Experience Machines.

Drug addict
The first of the VR experiences is called "Narkomani", which roughly translates from Norwegian as "Drug addict". The aim of the production is to see the world from the point of view of a drug addict, perhaps living on the streets of Bergen. What is it like to be frowned upon while walking the streets, uneasy about getting the next shot of dope? As my colleague Nyre stated in the introduction to the projects, this VR project features "not a first person shooter from Los Angeles, but first person social realism from Bergen".

Schizophrenia
The second of the VR experiences attempts to create an understanding of what it is like to live with schizophrenia. In the experience, the user perceives visual hallucinations, and hears audio from up to five different personalities. The concept is brilliantly illustrated by the poster, and the experience tries to portray a subjective reality falling apart.

Conclusion

Journalism through the medium of VR has great potential. Immersive Journalism is still in its infancy, but the projects done so far show promise. Much will depend on VR goggles entering people's homes, as with any other technology. For insight into where VR technology stands in 2018, take a look at our entry discussing the History of Virtual Reality.

Literature list


Are you interested in more reading on this subject?

This executive summary on “Virtual Reality Journalism” by Owen & Pitt at the Tow Center for Digital Journalism is one of the first reports on VR Journalism.

Further, the report by Doyle, Gelman & Gill at the Knight Foundation is a good background read.

Finally, the Reuters report by Zillah Watson is more recent and sheds light on the current situation of the medium for journalistic purposes. The report illuminates a change we have seen recently: the use of consumer/"prosumer" cameras for easier production by newsrooms. As with traditional cameras, this will likely prove a prerequisite for the adoption of the medium across journalism as a whole. When it becomes easier to produce content in 360° video, more newsrooms will do it.

As the reports by the Knight Foundation, the Tow Center, and further Sirkunnen et al. indicate, Immersive Journalism has not been prevalent in less-affluent media houses. We may know of VR stories such as 6×9 by The Guardian, but have not necessarily heard of any from our local newspaper. This may change in the near future thanks to better consumer products.