If you’re trying to create high-quality virtual experiences for your company, you’ll be faced with the same hard truth at every level:
Hyper-Realism is HYPER-costly.
But that doesn’t mean it’s not important! Realism matters where results matter. And while you may not be the technical expert on your team, you do need a high-level understanding of the essential elements of these experiences so you know where to invest in what matters most.
I’ll walk you through the primary elements so you know what kinds of decisions you’ll be making along the way.
Content
Everything that a user sees in an XR experience is called content, and it’s made up of 2D objects or 3D models.
The 2D objects and 3D models can either be static or dynamic. If the content looks cartoonish, everything will seem cartoonish. If the content looks real, you’re on your way to true immersion.
2D OBJECTS generally require very little storage space and runtime processing power. Some common types of 2D objects are:
- Image files (JPG, PNG, TIFF, etc.)
- Text
- User Interface (UI) screens
- Video files (mp4, mov, etc.)
3D MODELS are what’s taking the internet to the next level. 2D has done a lot for the world, but 3D models are the next leap forward. To engage with 2D content, we naturally suspend reality; 3D content is actually easier for us to comprehend and interact with, which is why XR experiences are so compelling.
Let’s talk about what makes up a 3D model (not so you’re an expert, but so you know how to invest in what matters):
- The “MESH” is what forms the structure of the 3D model. Meshes are made up of triangles (polygons). In the image below, it becomes pretty clear why a higher “polycount” generally helps: there is a huge difference between these hands!
Higher "POLYCOUNT" = More Realistic
(and MORE time, money, and resources)
Lower "POLYCOUNT" = Less Realistic
(and LESS time, money, and resources)
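To make the cost side concrete, here is a rough back-of-the-envelope sketch of how raw mesh memory grows with polycount. The byte sizes and triangle counts are illustrative assumptions, not measurements of any real asset:

```python
def mesh_memory_bytes(triangles, bytes_per_vertex=32, bytes_per_index=4):
    """Rough raw-memory estimate for a triangle mesh.

    Assumes a typical closed mesh where vertices ~= triangles / 2, with
    32 bytes per vertex (position + normal + UV) and 4-byte triangle indices.
    """
    vertices = triangles // 2
    return vertices * bytes_per_vertex + triangles * 3 * bytes_per_index

low = mesh_memory_bytes(1_500)      # a stylized, "cartoonish" hand
high = mesh_memory_bytes(150_000)   # a film-quality hand
print(f"low-poly:  {low / 1024:.0f} KiB")        # ~41 KiB
print(f"high-poly: {high / 1024 ** 2:.1f} MiB")  # ~4.0 MiB
```

The memory itself is cheap; the real cost multiplier is the artist time needed to model, unwrap, and rig 100x the geometry.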
- “TEXTURE” is like when an artist creates an outline and then comes back later to add the color. Look back at the mesh above: the mesh gives the structure, but the texture gives the color.
Higher RESOLUTION = Better-Looking
(and more time, money, and resources)
Lower RESOLUTION = Worse-Looking
(and less time, money, and resources)
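Texture resolution compounds the cost in the same way, because memory grows with the square of the resolution. A quick sketch, under illustrative assumptions (uncompressed RGBA, optional mipmap chain):

```python
def texture_memory_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Estimate GPU memory for an uncompressed RGBA texture.

    A full mipmap chain adds roughly one third on top of the base level.
    """
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

print(f"1K texture: {texture_memory_bytes(1024, 1024) / 1024 ** 2:.1f} MiB")  # ~5.3 MiB
print(f"4K texture: {texture_memory_bytes(4096, 4096) / 1024 ** 2:.1f} MiB")  # ~85.3 MiB
```

One 4K texture costs as much memory as sixteen 1K textures, which is why texture budgets dominate so many XR performance conversations.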
- “ANIMATION” refers to how 3D models move through virtual space. If a 3D model is going to move, its movement needs to be specified exactly. In a game engine, the model can be driven programmatically: code blends morph targets or plays through specific keyframes.
If there are animals or humans in the XR experience, these are the questions you'll need to ask (and prioritize in order of importance):
► Do the muscles look correct when they are walking or moving?
► Is the overall motion correct? Does it match reality?
► Are the small subtleties captured? For example, when somebody coughs, their Adam’s apple moves up. If you see an animation where that doesn’t happen, you can tell something’s not quite right (even if you can’t point out what is missing).
For a truly immersive experience, all of these pieces have to be put together.
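The keyframe idea is simple at its core: the engine stores a handful of authored values and interpolates between them every frame. Here’s a minimal sketch in plain Python; the keyframe values are made up for illustration, and real engines apply the same math to bones, morph-target weights, and transforms:

```python
# (time_in_seconds, value) pairs, e.g. the x-position of a model
keyframes = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]

def sample(t):
    """Linearly interpolate the animated value at time t (clamped to the clip)."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t <= t1:
            alpha = (t - t0) / (t1 - t0)  # 0..1 between the two keys
            return v0 + alpha * (v1 - v0)
    return keyframes[-1][1]  # past the last key: hold the final pose

print(sample(0.5))  # 1.0: halfway between the first two keys
print(sample(2.0))  # 2.0: holding still between seconds 1 and 3
```

Realism comes from how densely and subtly those keys are authored, which is exactly where the artist hours (and the budget) go.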
Interaction
If content is everything a user sees, “interaction” refers to everything a user does. The most important elements of interaction right now are precise hand tracking, object tracking, and tactile feedback.
Precise Hand Tracking
For nearly every virtual experience, Precise Hand Tracking is the most essential form of interaction, and it can be accomplished in three primary ways:
- VR CONTROLLERS offer extremely accurate positional tracking of a hand. On the flip side, many experiences don’t lend themselves well to using controllers.
- HAND TRACKING BY ON-HEADSET CAMERAS is a resource-intensive process that isn’t nearly as accurate yet. You can assume that essentially all future headsets will ship with this technology, but it still has a way to go.
- VR GLOVES provide the most precise hand tracking available, and feel almost completely natural. But they come with a VERY high price tag and much more work up front creating the experience.
Here's the question you have to consider:
does my experience require a more natural extension of the body?
(aka normal hand usage)
--OR--
is my experience better off going after accuracy of precise position data?
(aka a VR controller)
So if you’re creating a medical training experience where you need to know exactly where in the flesh that needle needs to go, and 1 mm up or down really does matter… you’ll need VR gloves.
But for most other interactions, you can be off by 1–3 cm and it doesn’t inhibit the experience or its effectiveness. For example, if you’re playing a game or even grabbing a beaker in an XR science laboratory, it won’t matter if you’re 3 cm off… you can still grab the beaker perfectly well. And more importantly, the point of the training or game isn’t to prove you know how to grab a beaker…
…whereas for medical surgery you need to prove that you won’t slice an artery.
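That tolerance argument boils down to a simple distance check. Here’s a hypothetical sketch (the names and numbers are my own, for illustration) of why the same 2 cm tracking error passes for a beaker grab but fails for a needle:

```python
import math

def within_tolerance(tracked_pos, true_pos, tolerance_m):
    """True if the tracked position is within the task's error budget (metres)."""
    return math.dist(tracked_pos, true_pos) <= tolerance_m

true_pos = (0.0, 0.0, 0.0)
tracked = (0.02, 0.0, 0.0)  # hand reported 2 cm off on the x-axis

# Grabbing a beaker: a ~5 cm grab radius easily absorbs a 2 cm error.
print(within_tolerance(tracked, true_pos, 0.05))   # True

# Guiding a needle: a 1 mm requirement is blown by the same 2 cm error.
print(within_tolerance(tracked, true_pos, 0.001))  # False
```

The hardware question, then, is really a question about your task’s error budget.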
Here are some high-quality resources to consider or learn from for each of these categories:
VR Gloves
Hand Tracking with cameras
- ARKit (Apple) and ARCore (Google)
- Snapdragon
- Oculus
- Magic Leap
- Ultraleap
Precise Object Tracking
Precise object tracking refers to the ability to physically use a real-world object while it is simultaneously brought into the virtual experience.
Precision tracking usually requires a very large setup, which makes sense for more complicated or costly instances such as the following:
- medical surgery training with highly specific instruments
- SWAT teams training without live weapons and equipment
- learning how to handle fragile equipment that is likely to be broken during training
- learning how to weld without the danger of broken equipment or injured employees, but with all the realism
In most other cases, whether training or consumer-focused, it’s rarely necessary, especially given how big an investment it requires.
Attachable tracking hardware can be used, but at current prices it doesn’t scale well. Plus, it is impractical to attach a 5-inch cube onto a needle during a medical scenario.
And if you think about scale, schools probably won’t love doing 15 precise calibrations per tool every time a student does a training. Nor do they want to pay for all of these extra pieces of equipment.
The DodecaPen is an awesome example of how this all works, but it won’t deliver sub-1mm accuracy in real time at a realistic cost for a while. You CAN get less than 1 mm precision, but you might need to run at something like 30 frames per second with 124 cameras, using six threads on a 2.2 GHz Intel E5-2698 Xeon processor and a single NVIDIA Tesla V100 GPU
(which is tech-speak for “A WHOLE LOT of power”)
Motion capture (MoCap) provides another way: attach marker balls to an object and track them with expensive cameras. Here’s a great example for medical training that can be incorporated into XR scenarios.
Here are some other great examples of companies that are moving object (and full body) tracking forward:
- Vive — Puck
- SensoryX — Object Tracker
Realistic Tactile Feedback to the user
Feedback is a crucial part of interaction, because “touch” is a two-way process! A lack of feedback breaks immersion, decreases training quality, and frustrates users.
- In training scenarios, mannequins can provide feedback to users because they mimic the body. For example, SynDaver’s models have very realistic tissue boundary densities; when you push a needle in, it will feel very real. The ability to overlay corresponding detailed visuals with Augmented or Mixed Reality creates an incredibly lifelike experience.
- Haptic touch feedback from HaptX VR Gloves can add to the effect of immersion, but it’s only perceived resistance. If you need to push or pick up something too heavy, the immersion breaks.
- VR Controllers are helpful for providing very basic feedback to users through vibration. However, the feedback is very limited, since vibrating can only be used to mean so many things in a complex scene.
A Business Leader’s Blueprint for Hyper-Realistic XR Experiences was originally published in AR/VR Journey: Augmented & Virtual Reality Magazine on Medium.