The Weight of a Shadow
AR projection systems can project objects into 3D space, but they cannot project shadows: a projector can only add light to a scene, never subtract it. For the rendering artist, this creates real problems. Moving through the graphics pipeline, a talented engineer will place vectorized objects into a 3D virtual space, but that projection is lit based on several factors, one of the largest being directional lighting and hard cast shadows. When an object occludes a directional light in real life, photons bounce off the object in predictable ways. In a closed space, some will bounce off walls and spend their energy illuminating the area behind the obstacle, but many simply won't. Accurate light projection, reflection, and refraction have been key processes in rendering for the last 10 years. Much of the "wow" factor in modern AAA games comes from custom-programmed "shaders", which tell the graphics card how an object should interact with light in a scene.
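The core of what a simple shader computes can be sketched in a few lines. This is a hypothetical illustration, not real GPU shader code: a plain-Python version of the classic Lambertian (diffuse) lighting term, where a surface's brightness depends on the angle between its normal and the direction toward a directional light.

```python
import math

def normalize(v):
    # Scale a 3D vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_diffuse(surface_normal, light_direction, light_intensity=1.0):
    # Lambertian shading: brightness is proportional to the cosine of the
    # angle between the surface normal and the direction toward the light.
    n = normalize(surface_normal)
    l = normalize(light_direction)
    cos_angle = sum(a * b for a, b in zip(n, l))
    # Surfaces facing away from the light get no direct illumination --
    # this hard cutoff is what makes unlit regions read as shadowed.
    return light_intensity * max(cos_angle, 0.0)

# A surface facing straight up, lit from directly above: full brightness.
print(lambert_diffuse((0, 1, 0), (0, 1, 0)))   # 1.0
# The same surface lit from the side: no direct light at all.
print(lambert_diffuse((0, 1, 0), (1, 0, 0)))   # 0.0
```

A real shader runs a calculation like this once per pixel, per light, per frame, which is why lighting choices have such a direct cost on the graphics card.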
Without lighting, objects lose their 'character'. Part of the value of rustic living is the absence of shimmer; part of the value of modernist techno-living is precisely the opposite. An object from either style would need some form of shader when rendered digitally in order to capture the sense of age, or newness, that its design deserves.
[Images: the same scene rendered with no lighting, rustic lighting, and modern lighting]
VR technology experts, such as John Carmack, have encouraged developers to step away from dynamic lighting and shadows in scenes, for understandable reasons. When a shadow is drawn in a virtual scene, the renderer must work out where light is blocked, typically by re-rendering the scene geometry from the light's point of view, which requires many more draw calls to the graphics processor. When interacting in VR, latency is everything. If you can't turn your head and have the display/projector produce the image that you are "supposed" to see, the entire experience breaks.

It's easy enough to explain: imagine your world, everything you see and hear right now, being a recording, like a massive 360-degree YouTube video. While it's streaming, it feels great. You can look at different objects, you can walk around, you can interact. This provides the user with two paramount sensations: Choice, and Control. Now imagine that the video pauses so that it can buffer. It is frustrating enough to wait for a video to load on a 2D screen. You lose choice, insofar as you can no longer choose to watch the video. The medium is limiting what you can do. Often this means switching to read an article in another tab, scrolling down to read comments, looking at the sidebar of related videos, or just looking away from your screen and seeing all the dishes you should have cleaned instead of watching another cat video. In VR this experience is far more frustrating, and far more uncomfortable. If your entire world freezes, not only do you lose choice, but you also lose control.
An entire physical sense is taken away from you. Because of the unique circumstances of VR head-mounted displays, you can't just look somewhere else. EVERYTHING is loading. EVERYTHING is stopped. Not only that, but turning your head does nothing to the image in front of your eyes. Your physical actions do not produce perceptual changes. This is bad, and it is a prime reason developers are urged to prevent it ever happening: asynchronous loading, low polygon counts, modest lighting expectations, and minimal shadows. VR is still in its infancy, and display technology, for all its advances, is trying to help by rendering at more frequent intervals, with smaller batches of data, so that loading is near seamless -- 120 Hz, or around 8 ms per frame. But this means the graphics processor needs to serve a new frame AT LEAST every 8 ms. Fitting the calculations necessary for realistic shadows into that budget is a herculean task.
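The frame-budget arithmetic above can be checked directly. A small sketch (the 120 Hz figure is the display rate mentioned in the text; the other rates are common VR refresh rates, included here for comparison):

```python
def frame_budget_ms(refresh_rate_hz):
    # At a given refresh rate, each frame must be fully rendered within
    # 1/rate seconds, or the display shows a stale frame and the user's
    # head motion stops matching what they see.
    return 1000.0 / refresh_rate_hz

for hz in (60, 90, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.2f} ms per frame")
# 60 Hz -> 16.67 ms per frame
# 90 Hz -> 11.11 ms per frame
# 120 Hz -> 8.33 ms per frame
```

Every shadow pass, draw call, and lighting calculation has to fit inside that budget, every single frame, which is exactly why the advice is to keep lighting cheap.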


