Tuesday, 21 July, 2020 UTC


Summary

As work and personal relationships become increasingly remote, the use of video chat and video conferencing apps has surged. A common way people make these calls more interactive is by adding virtual backgrounds. At the click of a button, a still or moving background replaces the real one, and users are transported to locations ranging from outer space to the deepest depths of the ocean.
What is the technology behind this?
Portrait segmentation is a sub-concept of semantic segmentation. Image segmentation is commonly divided into semantic segmentation and instance segmentation; the figure below shows the difference between them.
Semantic segmentation assigns a class label to every pixel, caring only about what each region is, not which instance it belongs to. Portrait segmentation reduces this to a binary prediction: every pixel is either part of the person or part of the background.
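As a minimal sketch of that reduction, assuming a generic model that outputs per-pixel class scores (the shapes and the "person" class index below are illustrative, not ARGear's):

```python
import numpy as np

# Stand-in for a semantic segmentation model's output: per-pixel
# scores for C classes over an H x W image. The class index 15 is
# "person" in the Pascal VOC label set, used here purely as an example.
H, W, C = 256, 256, 21
PERSON_CLASS = 15
logits = np.random.rand(H, W, C)   # placeholder for real model output

# Semantic segmentation: every pixel gets one of C class labels.
labels = np.argmax(logits, axis=-1)                        # shape (H, W)

# Portrait segmentation collapses this to a binary question:
# is each pixel the person, or the background?
portrait_mask = (labels == PERSON_CLASS).astype(np.uint8)  # 1 = person, 0 = background
```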

Segmenting the image into subject and background.
To develop an accurate, real-time portrait segmentation system, developers need to create and train a machine learning model on a very large dataset. Acquiring such data is expensive and time-consuming, since images must be collected and labeled at the pixel level. Even then, a large dataset alone is not enough to build a satisfactorily accurate model; other techniques are required.
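The article doesn't name those techniques, but one widely used way to stretch a limited segmentation dataset is data augmentation: applying the same random transform to each image and its pixel-level mask so the labels stay aligned. A rough sketch, with an illustrative function name and crop ratio:

```python
import numpy as np

def augment(image, mask, rng=np.random.default_rng()):
    """Apply the same random flip and crop to an image (H, W, 3)
    and its pixel-level mask (H, W), keeping labels aligned."""
    # Horizontal flip: a cheap way to double the effective dataset size.
    if rng.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]

    # Random 90% crop: varies framing and subject scale.
    h, w = mask.shape
    ch, cw = int(h * 0.9), int(w * 0.9)
    top = int(rng.integers(0, h - ch + 1))
    left = int(rng.integers(0, w - cw + 1))
    return (image[top:top + ch, left:left + cw],
            mask[top:top + ch, left:left + cw])
```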
For the model to serve results in real time, it has to be small and its operations simple. But model size and accuracy trade off against each other, so further, more sophisticated techniques are required to keep both acceptable.
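One common way to manage that trade-off is to run inference on a downscaled frame and composite the virtual background at full resolution. The sketch below shows this generic pattern, not ARGear's pipeline; `infer_mask` is a hypothetical stand-in for whatever segmentation model is used:

```python
import cv2
import numpy as np

def virtual_background(frame, background, infer_mask, infer_size=(160, 160)):
    """Segment on a downscaled frame, then composite at full resolution.

    `infer_mask` is assumed to return a float mask in [0, 1] with
    shape matching `infer_size` (person = 1, background = 0).
    """
    h, w = frame.shape[:2]
    small = cv2.resize(frame, infer_size)             # cheaper inference input
    mask = cv2.resize(infer_mask(small), (w, h))      # back to full resolution
    mask = mask[..., np.newaxis]                      # broadcast over channels
    # Person pixels come from the camera, the rest from the background.
    out = mask * frame + (1.0 - mask) * background
    return out.astype(np.uint8)
```

Since convolutional inference cost scales roughly with pixel count, running the model at a quarter of the display resolution cuts per-frame compute by about a factor of four, at the cost of softer mask edges.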
To deliver commercial-grade segmentation, ARGear's deep learning engineers spent months developing and training the model.
With it, people can broadcast from anywhere, liven up their camera feeds, and easily extract the foreground from an image. ARGear currently supports a portrait segmentation model for the front camera, and will support a model for rear cameras soon.
Use ARGear’s platform to add virtual background capabilities to your apps.
You can see ARGear's virtual background segmentation technology in use in KT's Narle 5G video chatting app. KT, one of the three major telecom carriers in South Korea, powers its video app with ARGear's AR SDK and content.
An example of what video chats look like with ARGear’s virtual backgrounds and effects.
For inquiries, please contact [email protected]

