Wednesday, 21 September, 2022 UTC


Summary

Augmented reality’s use and development have trended upward over the past several years, driven by investments in mobile hardware capability, interest in immersive virtual experiences around the metaverse, and rising industry competition. Thanks to these factors, the augmented reality market is projected to reach $502 billion by 2027, growing at a CAGR of 62.7%, according to Research and Markets. If you’re planning to claim a share of this market with your own augmented reality project, you should know the nuances of AR app development.
In this guide, we’ll discuss the details of augmented reality development in 2022, including the choice of technologies and the development flow.

Types of Augmented Reality Apps

There are a number of different types of AR applications, including marker-based, markerless, location-based, and superimposition AR solutions.

MARKER-BASED AR

These applications utilize a particular ‘marker’ like a QR code or other image. 3D content in the app is placed in the world relative to, or on top of, the marker. An older but interesting example of marker-based AR is the PlayStation 3 Wonderbook, a gaming ‘peripheral’ that lets players view a spell book on their screen. The on-screen book rotates and moves as they pick up the physical book and move it around. The camera uses the patterns printed on the actual book as a reference for displaying the AR content, a technique that is widely used today on Snapchat and Instagram.
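To make this concrete, on iOS this pattern maps to ARKit’s image tracking. Here’s a minimal sketch, assuming a reference image group named “AR Resources” in the app’s asset catalog (the group name and the sphere content are placeholders, not a production setup):

```swift
import UIKit
import ARKit

// Marker-based AR sketch: the camera looks for known reference images
// and anchors 3D content on top of them when found.
final class MarkerViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // "AR Resources" is a placeholder asset-catalog group of markers.
        guard let markers = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: nil) else { return }

        let config = ARImageTrackingConfiguration()
        config.trackingImages = markers
        sceneView.session.run(config)
    }

    // Called when a marker is detected; attach 3D content to its node.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        node.addChildNode(SCNNode(geometry: SCNSphere(radius: 0.05)))
    }
}
```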

MARKERLESS AR

Instead of using set patterns or codes to trigger content, markerless AR uses the camera to detect patterns in the environment, along with motion sensors, to find surfaces and place 3D objects. This typically involves a number of different technologies working together, such as:
  • GPS and other location tools
  • Digital compass
  • Camera
  • Accelerometer and Gyroscope
  • Depth sensors
The latest devices are equipped with depth-sensing hardware (LiDAR, ToF) to improve precision, so markerless AR is powered not only by depth sensors and other positioning data but also by ML algorithms running on top of that data. This allows more accurate rendering of 3D content and sustains the illusion that digital objects are part of the real world. Apps like Pokémon Go use markerless AR.
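As an illustration, here’s what a minimal markerless setup looks like with ARKit: world tracking with plane detection, opting into LiDAR depth where the hardware supports it. A sketch of the configuration, not a complete app:

```swift
import ARKit

// Markerless AR sketch: world tracking fuses camera imagery with
// motion-sensor data; plane detection finds surfaces for 3D content.
let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal, .vertical]

// On LiDAR/ToF-equipped devices, opt into per-pixel scene depth
// for better occlusion and placement precision.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    config.frameSemantics.insert(.sceneDepth)
}

let session = ARSession()
session.run(config)
```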

LOCATION-BASED AR

When users enter a particular location, AR applications can use that data to accurately display virtual content. This is how location-based AR works. Instead of simply displaying an object in relative space, developers can show objects in geographical space for users to observe and interact with.
Technology-wise, location-based AR relies on GPS, a digital compass, and an accelerometer. Moreover, there are several approaches to narrow down the position of the device:
  • BLE (Bluetooth Low Energy) beacons
  • VPS (Visual Positioning System)
  • Low-range Wi-Fi Direct
  • UWB (Ultra-WideBand)
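As an illustration, ARKit’s geotracking wraps several of these signals into one configuration. A minimal availability check might look like the sketch below (the log messages are placeholders):

```swift
import ARKit

// Location-based AR sketch: verify device support, then confirm that
// ARKit has geotracking coverage for the user's current location.
func startLocationBasedAR(with session: ARSession) {
    guard ARGeoTrackingConfiguration.isSupported else {
        print("This device does not support geotracking")
        return
    }
    ARGeoTrackingConfiguration.checkAvailability { available, error in
        guard available else {
            print("Geotracking unavailable here: \(error?.localizedDescription ?? "area not covered")")
            return
        }
        DispatchQueue.main.async {
            session.run(ARGeoTrackingConfiguration())
        }
    }
}
```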

SUPERIMPOSITION AR

This type of AR involves digitally replacing an object or superimposing a virtual object on top of another. For example, an app that can digitally change the color of your couch could be considered superimposition AR. This technique is useful for cause-and-effect demonstrations. For example, a user could point their camera at areas of their city to see what they looked like ten years ago, using archived Google Maps imagery.

Technologies Used to Develop Augmented Reality

Technologies used in augmented reality app development can depend on a number of factors, like the type of hardware being used, the available power of the device, and what application AR is being used for.

Mobile Augmented Reality Platforms

Smartphones have unique advantages compared to other AR platforms. They’re prevalent in the market and are extremely portable. This makes AR more accessible to many consumers since bulky headsets and elegant smart glasses haven’t quite hit the mainstream just yet. Because of this, mobile AR is a prime target for business applications.
Although mobile AR may not be the most powerful or immersive, it certainly has the potential to be very profitable and is one of the most important augmented reality trends to keep track of. Moreover, mobile AR can be a cost-effective way for business owners to join the metaverse trend. Try-on solutions that let customers test cosmetics or clothes before buying, along with AR avatars and filters available on any smartphone, can help businesses communicate with customers even in a virtual environment.
There are three approaches toward mobile AR that businesses need to choose from:
  1. Native Android AR applications with ARCore
  2. Native iOS AR applications with ARKit
  3. Cross-platform apps
Native augmented reality app development lets developers take full advantage of a device’s power. A cross-platform application, in turn, may not be able to use every powerful native feature, but it minimizes development time. Building the same app with native code on each platform is more expensive, but if more power and features are required, it may be more effective. If the application is fairly simple, cross-platform code may be enough.
Despite Android dominating the global OS market, developers on GitHub have historically seemed to prefer ARKit based on the number of repositories over the past several years.

AR DEVELOPMENT FOR IOS DEVICES: ARKIT 5 AND ARKIT 6

LOCATION ANCHORS

Location anchors allow developers to affix virtual objects in the real world using geographic coordinates. For example, location anchors could display a three-dimensional icon or text in space next to an iconic building. Because location anchors depend on Apple Maps data, their functionality may be limited in cities that Apple Maps does not support.
One of the most groundbreaking elements of location anchors is how location is approximated. GPS alone is simply not precise enough to position location anchors on a user’s screen. To compensate, the user can look around with the camera so that the device can scan for features on the buildings around them. By matching these architectural features against Apple Maps Look Around data, the user’s location can be approximated far more precisely.
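A minimal sketch of placing a location anchor with ARKit follows; the coordinates are placeholders for an arbitrary landmark, and the session is assumed to already be running a geotracking configuration:

```swift
import ARKit
import CoreLocation

// Location-anchor sketch: pin virtual content to geographic coordinates.
func placeGeoAnchor(in session: ARSession) {
    // Placeholder coordinates (Ferry Building, San Francisco).
    let coordinate = CLLocationCoordinate2D(latitude: 37.7956, longitude: -122.3936)
    let geoAnchor = ARGeoAnchor(coordinate: coordinate) // altitude resolved by ARKit
    session.add(anchor: geoAnchor)
    // Attach 3D content to this anchor in your renderer's delegate once
    // the session's geotracking status reaches the localized state.
}
```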

DEPTH API

Depth API is another valuable feature introduced in ARKit 4 that continues to play an important role in ARKit 5 and ARKit 6. It utilizes one of the most powerful AR hardware features on a mobile device: the LiDAR scanner on the iPad Pro and the iPhone 12 Pro, 12 Pro Max, 13 Pro, 13 Pro Max, and 14 Pro. This enables much better scene analysis and lets real-world objects occlude virtual objects with far greater accuracy.
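In code, the Depth API surfaces as per-frame depth data. A minimal sketch, assuming the running configuration has enabled the .sceneDepth frame semantic:

```swift
import ARKit

// Depth API sketch: read per-pixel LiDAR depth on each frame.
final class DepthReader: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Present only when frameSemantics include .sceneDepth and the
        // device has a LiDAR scanner.
        guard let depth = frame.sceneDepth else { return }
        let depthMap: CVPixelBuffer = depth.depthMap  // Float32 meters per pixel
        let confidence = depth.confidenceMap          // optional per-pixel confidence
        _ = (depthMap, confidence)
        // Feed depthMap into occlusion, meshing, or custom scene analysis.
    }
}
```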

RAYCASTING API

Apple’s Raycasting API enables enhanced object placement in conjunction with depth data. It allows much more accurate placement of virtual objects on a variety of surfaces, taking their curvature and angle into account. For example, it’s possible to place an object on the side of a wall or along the curves of a sofa rather than simply flat on the floor. The LiDAR sensor also makes surface scanning much faster than traditional methods.
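A minimal sketch of a tap handler that places an anchor via raycasting (the function name is illustrative):

```swift
import ARKit

// Raycasting sketch: place an anchor where a screen tap meets a real
// surface, preserving the surface's angle via the hit's world transform.
func handleTap(at point: CGPoint, in sceneView: ARSCNView) {
    guard let query = sceneView.raycastQuery(
        from: point,
        allowing: .estimatedPlane, // also works against existing plane geometry
        alignment: .any            // horizontal, vertical, or sloped surfaces
    ) else { return }

    if let result = sceneView.session.raycast(query).first {
        // worldTransform carries both position and surface orientation.
        sceneView.session.add(anchor: ARAnchor(transform: result.worldTransform))
    }
}
```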

ARKIT 6 UPDATES

ARKit 6 is about to bring several new features that improve AR experiences on iOS devices. Some of the features Apple is touting include better motion capture, improvements to camera access, and additional locations for Location Anchors. Apple also plans updates to Plane Anchors, the feature for tracking flat surfaces like tables, making it easier for those surfaces to be moved without disrupting the AR experience. ARKit 6 will launch alongside iOS 16 this fall.
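For context, flat-surface tracking already reaches apps as ARPlaneAnchor objects delivered to the session delegate. A minimal sketch:

```swift
import ARKit

// Plane-anchor sketch: observe flat surfaces as ARKit detects them.
final class PlaneObserver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            let orientation = plane.alignment == .horizontal ? "horizontal" : "vertical"
            print("Detected \(orientation) plane at \(plane.center)")
        }
    }
}
```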
As part of our ARKit research here at MobiDev, we tested using ARKit for gaze tracking. This opens up a number of possibilities for iOS, such as eye-based gestures, vision-based website heatmap analytics, and preventing distracted or drowsy driving.
https://medium.com/media/c75cd9f17b26c4684de4768eb3cbab71/href
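Gaze tracking isn’t a dedicated ARKit API; a hypothetical starting point (a sketch, not MobiDev’s actual implementation) builds on ARKit face tracking, which exposes per-eye transforms and a gaze convergence point:

```swift
import ARKit

// Gaze-tracking sketch built on ARKit face tracking (TrueDepth camera).
final class GazeObserver: NSObject, ARSessionDelegate {
    func start(with session: ARSession) {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            let gaze = face.lookAtPoint         // where the eyes converge, in face space
            let leftEye = face.leftEyeTransform // per-eye pose for finer signals
            _ = (gaze, leftEye)
            // Project gaze into screen space to drive eye-based gestures
            // or heatmap analytics.
        }
    }
}
```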

AR DEVELOPMENT FOR ANDROID DEVICES: ARCORE

In an effort to stay competitive, Google has pushed ARCore hard to keep it one of the most versatile AR development platforms in the world. Let’s explore some of the features developers use.

CLOUD ANCHORS

This tool allows users to place virtual objects in physical space that can be viewed by other users on their own devices. Google has made sure that Cloud Anchors can be seen by users on iOS devices as well.
On the Android side, ARCore keeps introducing new features to match ARKit’s advances. ARCore v1.33.0 introduces new Cloud Anchors endpoints and terrain anchors, both of which improve the geographic anchoring of virtual objects. Earlier this year, ARCore v1.31.0 introduced the ARCore Geospatial API which, similar to ARKit’s Location Anchors, utilizes data from mapping databases. In this case, Google Earth and Street View imagery is used to identify the user’s geographic location and accurately display virtual objects at those locations.
The following MobiDev demo shows how ARCore object detection features work for a virtual user manual.
https://medium.com/media/69fbbfdab10f2224043c74d20e2b1685/href

AR FACES

ARCore enables developers to work with high-quality facial renderings by generating a 468-point 3D model. Masks and filters can be applied after the user’s face is identified. This is one of the most popular use cases for AR app features.

AR IMAGES

Virtual business cards and advertising posters are just a few of the applications that are possible with augmented reality images. 2D markers can be used to implement these features as well as more advanced solutions like AR indoor navigation.
We have tested ARCore with indoor navigation solutions at MobiDev. Over the past several years, applications like indoor positioning have gotten even easier since our first experiments, making this technology more feasible for use in the real world. Check out this video to learn more about it.
https://medium.com/media/0fa2ef3d65e0dac90f6bcd6bcb9ab0cc/href

ARKit vs ARCore Features Comparison

These two frameworks, for iOS and Android respectively, are nearly identical from a feature perspective. The real difference between the two ecosystems is hardware consistency. Apple’s iPhone and iPad devices are largely consistent in hardware behavior and capability, while Android devices are built by many different manufacturers to different specifications. Because of this, it is harder to deliver a consistent experience across the range of Android devices.
Since Android hardware is less consistent, it’s important to consider how demanding your AR experience will be and which devices it should run on. Should it primarily target the latest Samsung and Pixel devices with high-performance AR features, or should it be less intensive so it runs on more devices? The choice is yours, but experienced AR developers are always there to help you find the best solution.

Cross Platform Mobile AR Development in Unity

If utilizing the full power of native AR frameworks on Android and iOS isn’t necessary and your goal is faster time to market, cross-platform AR development in Unity may be a good option. Unity AR Foundation is a helpful framework for cross-platform augmented reality app development. Despite not taking full advantage of each device, Unity AR Foundation is still fairly powerful, supporting plane detection, anchors, light estimation, 2D image tracking, 3D object tracking, body tracking, occlusion, and more.
However, some features are missing depending on the platform. For example, Unity AR Foundation currently lacks several features on ARCore, such as 3D object tracking, meshing, 2D & 3D body tracking, collaborative participants, and human segmentation. If you want your app to be cross-platform, keep these gaps in mind.

Web-based Augmented Reality Technologies

On one hand, web AR is an extremely accessible technology because it can run on a multitude of devices without installing any additional software. On the other hand, web AR is very limited in features and power.
Some businesses are already utilizing web AR for technologies like virtual fitting room solutions. For example, Maybelline, L’Oréal, and other companies let users virtually try on cosmetic products using their front-facing camera and web AR software.
Web AR is best implemented for simple tasks like facial recognition filters, changing the appearance or color of an object in a scene like hair, replacing backgrounds for videoconferencing, and more. It’s important to remember these limitations when deciding what platform your application should run on.

Augmented Reality Development For AR Wearables

When we talk about wearable technology for augmented reality, we typically refer to gear like Microsoft HoloLens and more portable and comfortable glasses like Google Glass.
From a software engineering standpoint, Microsoft HoloLens development is based on the Microsoft technology stack and Azure Cloud.
Check out our demo showcasing the capabilities of the Microsoft HoloLens for product presentation and educational purposes.
https://medium.com/media/96a46391727b03965e5834813ac14586/href
As for AR glasses, most of the hardware is Android-based, and manufacturers provide SDKs for engineers to create apps. Still, there are some substantial aspects to consider. The first is user experience and user interface, as the pattern of using such software is entirely different from what we are used to with smartphones. Engineers must also be able to build lightweight, optimized apps, as energy consumption is still the main pain point for many devices.

Where to Start Building AR Applications

The development of any software product begins with defining business goals. Only when you clearly understand what results you want to achieve can a development team create a workflow that meets your business needs. Traditionally, the process of creating an AR-powered product consists of five steps.
The process of building an augmented reality mobile app starts with the pre-contract phase. It’s important for you to discuss the project requirements with developers so that everyone is on the same page about the project’s objectives and scope. It also allows back-and-forth feedback between developers and the business to help them clarify the idea and choose the best possible route to success.
Following this, choosing a technology stack is the next logical step. This is essential for business and technical analysis, as it determines which platform is best to use and how the project will be built. Moreover, engineers have to form a clear vision of how they plan to achieve the targeted results, as bringing AR to a product often comes with hidden pitfalls.
After that, prototypes and 3D models are created. Once this is done, more thorough development begins. This involves backend & mobile features, AR modules, and QA. Also, there are a number of different challenges with testing AR features that should be taken into consideration.
Finally, the app is prepared for deployment, and the focus shifts from development to support. The app needs ongoing updates to remain compatible with new SDKs and new device requirements.

MAKING THE MAGIC HAPPEN

Businesses with experience in AR applications have a head start on achieving their goals. However, businesses that don’t have internal augmented reality development teams may find it challenging to achieve their vision without help from experienced augmented reality development professionals. Enlisting the help of experienced AR developers is a great way to build your product and get a return on your investment.
Written by Andrew Makarov, Head of Mobile Development at MobiDev.
The full article was originally published at https://mobidev.biz and is based on MobiDev technology research.

How to Create an Augmented Reality App: Technology Guide 2022 was originally published in AR/VR Journey: Augmented & Virtual Reality Magazine on Medium.