Google didn’t let me take my own photos, but this is strikingly similar to the demo I saw with my own eyes. | Image: Google
I demoed Google’s new Android XR platform, Samsung’s Project Moohan, and prototype smart glasses. It’s as close to feeling like Tony Stark as I’ll ever get in a controlled demo.
It’s an ordinary Tuesday. I’m wearing what look like ordinary glasses in a room surrounded by Google and Samsung representatives. One of them steps out in front of me and starts speaking in Spanish. I don’t speak Spanish. But her words are translated into English subtitles that hover in midair. Reading them, I realize she’s describing what I’m seeing in real time.
I mumble an expletive. Everyone laughs.
This is my first experience with Android XR — a new mixed reality OS designed for headsets and smart glasses, like the prototypes I’m wearing. It’s Google’s big bet to power a new generation of augmented reality devices that embody all our wildest dreams of what smart glasses can be.
Google is no stranger to augmented reality. Google Glass crashed and burned with the public more than 10 years ago before being repurposed for enterprise users and eventually discontinued. But things are different now. Apple has the Vision Pro. Meta has the Ray-Ban smart glasses, and their AI features have garnered positive buzz. That’s why Google is jumping back into the fray with Android XR.
Google wants everyone to know the time is finally right for XR, and it’s pointing to Gemini as its north star. Adding Gemini enables multimodal AI and natural language interaction — things Google says will make interactions with your environment richer. In a demo, Google had me prompt Gemini to name the title of a yellow book sitting behind me on a shelf. I’d briefly glanced at it earlier but hadn’t taken a photo. Gemini took a second, and then offered up an answer. I whipped around to check — it was correct.
On top of that, the platform will work with any mobile and tablet app from the Play Store out of the box. Today’s launch is aimed at developers so they can start building out experiences. The average person won’t be able to buy anything running Android XR right now, but in 2025, Samsung will launch its long-rumored XR headset. Dubbed Project Moohan (Korean for infinity), the headset will be the first consumer product to ship with Android XR. Technically, it’s running the same software as the glasses I tried, but Project Moohan will also be capable of VR and immersive content — stuff that wouldn’t be suited to a pair of smart glasses. It’s essentially a showcase for everything that could be possible. That’s also why Google is going with XR, a catch-all term that stands for “extended reality” and encompasses AR, VR, and mixed reality.
Samsung’s Project Moohan felt like a mix between a Meta Quest 3 and a Vision Pro headset. | Image: Google
Samsung’s headset feels like a mix between a Meta Quest 3 and the Vision Pro. Unlike either, the light seal is optional, so you can choose to let the world bleed in. It’s lightweight and doesn’t pinch my face too tightly. My ponytail easily slots through the top, and later, I’m thankful that I don’t have to redo my hair. At first, the resolution doesn’t feel quite as sharp as the Vision Pro’s — until the headset automatically calibrates to my pupillary distance.
It’s at this point that I start feeling déjà vu. I’m walked through pinching to select items and how to tap the side to bring up the app launcher. There’s an eye calibration process that feels awfully similar to the Vision Pro’s. If I want, I can retreat into an immersive mode to watch YouTube and Google TV on a distant mountain. I can open apps, resize them, and place them at various points around the room. I’ve done this all before. This just happens to be Google-flavored.
I want to ask: how do you expect to stand out?
I don’t get the chance to before I’m told: Gemini.
For the skeptic, it’s easy to scoff at the idea that Gemini, of all things, is what’s going to crack the augmented reality puzzle. Generative AI is having a moment right now, but not always in a positive way. Outside of conferences filled with tech evangelists, AI is often viewed with derision and suspicion. But inside the Project Moohan headset or wearing a pair of prototype smart glasses? I can catch a glimpse of why Google and Samsung believe Gemini is the killer app for XR.
For me, it’s the fact that I don’t have to be specific when I ask for things. Usually, I get flustered talking to AI assistants because I have to remember the wake word, clearly phrase my request, and sometimes even specify my preferred app.
“One thing I’m really confident about, something that’s not just different from before, is that Gemini is really that great,” says Kihwan Kim, EVP at Samsung Electronics, who nods furiously in agreement when I mention this. To Kim, it’s the ability to fluidly speak to Gemini and the fact that it understands a person’s individual context that opens dozens of different options for the way each person interacts with XR. “That’s why I clearly see that this headset will give more insight about what [XR] should be.”
I was shocked at how well my translation demos went; they were in the same spirit as the video here.
In the Moohan headset, I can say, “Take me to JYP Entertainment in Seoul,” and it will automatically open Google Maps and show me that building. If my windows get cluttered, I can ask it to reorganize them. I don’t have to lift a finger. While wearing the prototype glasses, I watch and listen as Gemini summarizes a long, rambling text message down to its main point: can you buy lemon, ginger, and olive oil from the store? I’m also able to naturally switch from speaking in English to asking in Japanese what the weather is in New York — and get the answer in spoken and written Japanese.
It’s not just interactions with Gemini that linger in my mind, either. It’s also how experiences can be built on top of them. I asked Gemini how to get somewhere and saw turn-by-turn text directions. When I looked down, the text morphed into a zoomable map of my surroundings. It’s very easy to imagine myself using something like that in real life.
But as cool as all that is, headsets can be a hard sell to the average person. Personally, I’m more enamored with the glasses demo, but those have no concrete timeline. (Google made the prototypes, but it’s focusing on working with other partners to bring hardware to market.) There are still cultural cues that have to be established with either form factor. Outside of Gemini, there has to be a robust ecosystem of apps and experiences for the average person, not just early adopters.
The headset demos felt more familiar, though Circle to Search was unique to Android XR.
“It’s not going to be a singular product. It’s Android,” says Shahram Izadi, Google’s VP of AR and XR, noting that Google has a three-pronged strategy for Android XR: laying the groundwork with devs is one element; Gemini’s conversational experience is another; and the third is the idea that no one device is the future of XR. Headsets, for example, may just be “episodic” devices you use for entertainment. Glasses could supplement phones and smartwatches for discreet notifications and looking up information.
“The way I see it, these devices don’t replace one another. You’ll use these devices throughout your day, and if there’s consistency with Gemini and generative AI experiences across these form factors, people will get more comfortable with wearing computers on their faces. That’s the on-ramp to get to more immersive devices,” says Izadi.
Listening to Kim and Izadi talk, I want to believe. But I’m also acutely aware that all of my experiences were tightly controlled. I wasn’t given free rein to try and break things. I couldn’t take photos of the headset or glasses. At every point, I was carefully guided through preapproved demos that Google and Samsung were reasonably sure would work. I — and every other consumer — can’t fully believe until we can play with these things without guardrails.
But even knowing that, I can’t deny that, for an hour, I felt like Tony Stark with Gemini as my Jarvis. For better or worse, that fictional example has shaped so many of our expectations for how XR and AI assistants should work. I’ve tried dozens of headsets and smart glasses that promised to make what I see in the movies real — and utterly failed. For the first time, I experienced something relatively close.