News
2025-05-15 13:30

Google publicly unveiled working prototypes of new AR and XR glasses paired with its Gemini AI for the first time

Shahram Izadi, head of Google’s AR/XR division, presented the smart glasses prototypes at TED 2025 in Vancouver.

Two devices were demonstrated in real-world scenarios: lightweight AR glasses that look like ordinary prescription glasses, and a mixed-reality XR headset somewhat reminiscent of Apple's Vision Pro.

Both devices rely heavily on Gemini, Google's artificial intelligence model. The audience saw on a large screen exactly what the wearer of the AR glasses saw. Demonstrated features included real-time translation into languages such as Persian, complete with natural dialects and accents. Recognition worked both for translating speech to text and for reading text on signs through the camera.

Special attention was given to the "remembering what was seen" feature: the camera continuously records the surroundings, so the user can later ask about an object that was previously in view, and the system identifies its location or details, such as the author of a recently seen book.

Izadi noted, “This is the second act of the computer revolution. AI and XR are merging, opening radically new ways to interact with technology on your terms. Computers will become lighter and more personal. They will share your perspective, understand your real context, and have a natural interface that is both simple and conversational.”

The first device, the compact glasses, includes cameras and microphones that let the AI see and hear the world. Speakers let the wearer listen to the AI, play music, or even take calls. A small high-resolution color display is embedded in a transparent lens. The glasses work paired with a phone, which handles the main computation either locally or in the cloud, keeping the glasses lightweight while giving access to all of the phone's apps.

During the presentation, the AI's multimodality was showcased on both devices: it combined visual data and natural language to process complex real-time queries while retaining earlier context. For example, as the user flipped through a book in front of the glasses' camera, they could ask about the meaning of a diagram on one of its pages; the AI, having remembered the previous pages, explained the complex diagram in simple terms.

There was also an interesting navigation example: a user new to a city asked Gemini to find a route to a beautiful nearby park with an ocean view. The system combined 5G, AI, AR, and other technologies in real time to fulfill a natural spoken request rather than a precise point-to-point command.

Later, another Google employee demonstrated the impressive capabilities of the second device, the mixed-reality XR headset, again highlighting scenarios that combine XR with Gemini.

We highly recommend watching the full video here.