Wednesday, 11 December, 2019 UTC


Summary

Epistemic status: I know nothing about VR; it is outside my area of specialty. Most probably, what I'm writing here is obvious, basic knowledge for anyone with any interest in VR, but not me. Under-qualified and over-opinionated.
TLDR: Software, both computer and brain. We need very high fps and very low lag; otherwise, the brain gets confused by the difference between what we see and what we feel.
Microsoft Outings demo at SigSpatial 2019.

Microsoft had a booth at ACM SIGSPATIAL GIS 2019. In it, there was a demo of Outings, basically a VR version of Windows Maps. I was both impressed and not impressed.
Impressed: VR is a lot of fun. It is a unique experience. I learned how to move the “table” up and down, and nearly fell over the first time I did it. There was no sound and the polygon count was low, but it was still that immersive. It has a lot of potential.
Not impressed: This is 2019. My first VR experience was around 20 years ago, in a small city, in a developing country. It feels like there has been very little progress in 20 years. Theoretically, since we already have Windows Maps, how hard could it be to port it to VR? Why? Where is the bottleneck? I have always had this question in the back of my head, but never actually put effort into researching it. Nevertheless, I did come up with a number of plausible reasons:
  • Input hardware: Maybe we don’t have good, cheap localization sensors (gyroscope, accelerometer, etc.).
  • Output hardware: Maybe making two screens that are lightweight, cheap, and at a good resolution is extremely hard.
  • Software (graphics engine): Maybe we don’t have a good engine for this. For some reason, maybe it is harder than simply putting two cameras on top of an existing graphics engine. Maybe all VR developers have to build their apps in assembly or C. After all, this would explain the low-poly aesthetic that we commonly see in VR (at least in my head it does).
  • Market: Maybe the actual market is just very small. People (including businesses, which are people, technically, legally) are simply not willing to pay money for VR. Or the learning curve is just too high. This seems unlikely, though.
  • Creativity: Maybe designing for VR is very hard. To the best of my knowledge, when the video camera was first developed, for many years people simply put it on a static platform and shot a stage; they were still trapped in a “record a stage” mindset. It took years before someone managed to make movies using what we now consider the most basic film “language”: zoom, pan, tilt, dolly, etc. And it is not just on the creator side. It took a while for the audience to learn the language too.
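To make the “two cameras” idea from the software guess concrete, here is a minimal sketch in plain Python. All the names here are made up for illustration; no real engine API works exactly like this. Stereo rendering can be framed as two cameras whose positions are offset from the head position by half the interpupillary distance (IPD), one to each side:

```python
# Minimal sketch: stereo rendering as two cameras offset by half the
# interpupillary distance (IPD). Names and values are illustrative,
# not taken from any real engine API.

IPD = 0.063  # a commonly cited average IPD in metres (assumed)

def eye_positions(head_pos, right_vec, ipd=IPD):
    """Offset the head position along the head's 'right' direction
    to get the left- and right-eye camera positions."""
    half = ipd / 2.0
    left = [h - half * r for h, r in zip(head_pos, right_vec)]
    right = [h + half * r for h, r in zip(head_pos, right_vec)]
    return left, right

# Head at 1.7 m height, facing forward, 'right' along the x-axis.
left_eye, right_eye = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
print(left_eye)
print(right_eye)
```

The scene is then rendered once from each eye position. The geometry is simple; as the article goes on to argue, the hard part is doing both renders fast enough.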
https://knowyourmeme.com/memes/human-eye-can-only-see-at-60-fps
The answer was totally unexpected. (This is from the person at the Microsoft booth; I didn’t even bother to double-check.) It all boils down to fps. Most desktop applications need 30 fps at most. However, if we simply put two cameras on the 3D scene and run it at 30 fps, a majority of people will get nauseous, because there is a small lag between what we see and what we feel (in the inner ear).
The reason our brain is so sensitive at detecting this difference is that, in our evolutionary past, the only cause of a mismatch between vision and equilibrioception was something tampering with our nervous system, and usually that something was poison. That’s why the most obvious response is to throw up, and hence the nausea. I think this post is relevant here.
Thus, we need at least 90, or even 120, fps, with a lag of less than the reciprocal of the fps, to avoid an epidemic of projectile vomiting. And even at 120 fps, some people still report nausea. So this is the bottleneck: it is hard to render detailed, high-poly, textured scenes at 120 fps.
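The arithmetic behind that rule of thumb is worth spelling out. At N fps the renderer has 1/N seconds per frame, and the lag (per the reciprocal rule above) has to fit in that same budget. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope frame budgets: at N fps the renderer has 1/N
# seconds to produce each frame, and per the reciprocal-of-fps rule
# the motion-to-photon lag should stay under the same budget.

def frame_budget_ms(fps):
    """Milliseconds available per frame at the given frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 90, 120):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):.2f} ms per frame (and of lag)")
```

So going from a desktop-comfortable 30 fps to a VR-comfortable 120 fps cuts the time budget from about 33 ms down to about 8 ms, while also requiring the scene to be rendered twice, once per eye.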
People do change. I’m not talking about ex-partners. People can learn to tolerate the nausea: just use VR as much as you can, and when you feel nauseous, take a few minutes’ break, then try again. With practice, most people can eliminate the nausea. It always surprises me how much our conscious brain can modulate the rest. I was also told that this is a very similar mechanism to how astronauts experience space sickness until they adapt to it. I think it would also be similar to how fighter pilots are trained. Reaching further, I wonder how related this is to Kelly McGonigal’s TED talk, “How to make stress your friend.”
The idea being: your body is trying to be on your side. Given a stimulus that is naturally interpreted as negative, your body can go down two different paths, depending on how our conscious brain modulates it. If we stand up tall and face it willingly, then our body will think, “Oh, it is actually not that bad. I’m going to ring the alarm less loudly next time.” But if our conscious brain hates it, cannot see the silver lining, and didn’t prepare us mentally for it, our unconscious brain will go, “This is not just bad, this is actually really bad. I’m going to ring the alarm louder next time.”
(As for my other guesses: we already have the hardware, both input and output, although the best of it is still pricey; the market is huge, not just in gaming but also in industry and in data visualization for everyone; and as for the software, developers are basically building on top of Unity.)

What’s so hard about VR? Where is the bottleneck? was originally published in AR/VR Journey: Augmented & Virtual Reality Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.