Lightweight, but limited vision
One of the biggest revelations of Google I/O was that the company is officially returning to mixed reality. It has been a few years since the search giant was active on the AR/VR/XR front, but that finally seems to have changed, with a range of hardware partners lined up to use its Android XR platform.
After the keynote, Google gave a brief demo of the prototype device we saw on stage. I only used the device for a few minutes, so my impressions are unfortunately limited, but I was immediately struck by how much lighter the glasses were compared with Meta's Orion prototype and Snap's augmented reality glasses. While both of those are fairly small, Google's prototype was lightweight and felt much more like a regular pair of glasses. The frames were a little thicker than the ones I usually wear, but not by much.
At the same time, there are some significant differences between Google's XR glasses and what we've seen from Meta and Snap. Google's device only has a display on one side, the right lens (you can see it in the image at the top of this article), so the visuals are more "glanceable" than fully immersive. I noted during Google's I/O demo that the field of view looked narrow, and I can confirm that it feels even more limited than Snap's 46-degree field of view. (Google declined to share specifics on the field of view of its prototype.)
Instead, the display feels a bit like the front screen of a foldable phone. You can quickly glance at the time, notifications, and small snippets of app information, such as the music you're listening to.
Gemini is obviously meant to play a major role in the Android XR ecosystem, and Google walked me through a few demos of the assistant. I could look at a book or some art on the wall and ask Gemini questions about what I was seeing. It felt very similar to the multimodal capabilities we've seen from Project Astra and elsewhere.
But even in a carefully orchestrated demo, there were a few glitches. Gemini started telling me about what I was looking at before I had finished my question, which was followed by an awkward moment where we both paused and interrupted each other.
One of the more interesting use cases Google showed was Google Maps in the glasses. You can get a heads-up view of your next turn, much like Google's augmented reality walking directions, and look down to see a small section of the map on the floor. However, when I asked Gemini how long it would take to drive from my location to San Francisco, it wasn't able to provide an answer. (It actually said something like "tool output," and then my demo ended rather abruptly.)
Like many other mixed reality demos I've seen, it's clearly still early days. Google was careful to emphasize that this is prototype hardware meant to show off what Android XR can do, not a device it plans to sell anytime soon. So whatever smart glasses we eventually get from Google or its hardware partners will likely look very different. What my few minutes with Android XR did show, though, is how Google is thinking about bringing AI and mixed reality together. That's not so different from Meta, which also sees smart glasses as key to the long-term adoption of its AI assistant. But with Gemini coming to nearly every Google product in existence, the company has a very solid foundation to build on.