Google announced that it will begin real-world testing of its early AR prototypes starting next month.

The company says in a blog post that it plans to test AR prototypes in the real world as a way to “better understand how these devices can help people in their everyday lives.”

Key areas Google is emphasizing include real-time translation and AR turn-by-turn navigation.

“We’ll begin small-scale testing in public settings with AR prototypes worn by a few dozen Googlers and select trusted testers,” the company says. “These prototypes will include in-lens displays, microphones and cameras — but they’ll have strict limitations on what they can do. For example, our AR prototypes don’t support photography and videography, though image data will be used to enable experiences like translating the menu in front of you or showing you directions to a nearby coffee shop.”

Critically, Google says the research prototypes look like “normal glasses.” This was no doubt partly informed by the company’s rocky experience with Google Glass, which launched in 2013 and spawned the neologism ‘glasshole’ thanks to the device’s conspicuous design and the privacy concerns raised by wearing a camera in public. Glass is still around, albeit only for enterprise users.

Google says it wants to take things slow with its AR glasses, with a “strong focus on ensuring the privacy of the testers and those around them.” Although the units will clearly pack camera sensors to do their job, Google says that after translating text or serving up turn-by-turn directions, the image data is deleted unless it’s needed for analysis and debugging.

“In that case, the image data is first scrubbed for sensitive content, including faces and license plates. Then it is stored on a secure server, with limited access by a small number of Googlers for analysis and debugging. After 30 days, it is deleted,” the company says in a FAQ on the program.
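
Going by the FAQ, the retention scheme amounts to a scrub-then-expire pipeline: redact sensitive regions, store with restricted access, and hard-delete at day 30. The sketch below illustrates that flow in Kotlin; the types and the redaction stub are hypothetical stand-ins for illustration, not Google’s actual code.

    import java.time.Duration
    import java.time.Instant

    // Hypothetical types illustrating the described policy, not Google's code.
    data class Frame(val pixels: ByteArray, val capturedAt: Instant)
    data class StoredFrame(val frame: Frame, val expiresAt: Instant)

    val RETENTION: Duration = Duration.ofDays(30)

    // Stand-in for the redaction pass: a real pipeline would detect and
    // blur faces and license plates before anything is written to storage.
    fun scrubSensitiveContent(frame: Frame): Frame = frame

    class SecureStore {
        private val frames = mutableListOf<StoredFrame>()

        // Scrub first, then store with a fixed 30-day expiry.
        fun put(frame: Frame) {
            frames += StoredFrame(
                frame = scrubSensitiveContent(frame),
                expiresAt = frame.capturedAt + RETENTION
            )
        }

        // Run periodically: hard-delete anything past its retention window.
        fun purgeExpired(now: Instant = Instant.now()) {
            frames.removeAll { it.expiresAt <= now }
        }
    }

The salient property is that scrubbing happens before anything is written, and expiry is enforced by the store itself rather than by policy alone.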


Testers will also be prohibited from testing in sensitive public settings such as schools, government buildings, healthcare locations, places of worship, social service locations, areas meant for children (e.g., playgrounds), emergency response locations, rallies or protests, and other similar places. They are likewise banned from using the AR prototypes while driving, operating heavy machinery, or engaging in sports.

Google’s inclusion of displays in its public prototypes is a step beyond Meta’s Project Aria, which began on-campus testing of AR prototypes in 2020; those units notably included everything you’d expect from AR glasses except the displays. We’re still waiting to hear more about Meta’s Project Nazare, however, which is said to be a pair of “true augmented reality glasses.”

As for Apple, there are only rumors for now about specifications and target launch dates for the company’s MR headset and follow-up AR glasses. It’s clear, however, that we’re inching ever closer to a future in which the biggest names in tech compete directly to lead what many have hailed as the class of device that will eventually replace your smartphone.





  • kontis

    with limited access by a small number of Googlers for analysis and debugging

    LMAO. Will they also analyze the size of your PP? Don’t look down!

    I know this is currently only for “trusted testers”, but it’s pretty obvious where they’re going with this. They’ll BS about how much they need the power of the cloud to provide valuable features.

    The data being accessible to anyone for even a millisecond is absolutely unacceptable. This is like getting access to someone’s eyes.

    You don’t need the cloud to do computer vision and translation. If you don’t have the power on the glasses, do it on the phone, which is more powerful than the largest supercomputer of 25 years ago, especially when you consider its AI-dedicated ASICs (see the sketch below).

    Stop being creepy, Google.

    Getting away with it with a toy brick in your pocket is one thing.
    But trying to do that with the EYES is insane.

    This kind of policy should never leave the walls of the lab.
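
    For what it’s worth, on-device OCR and translation already ships in Google’s own ML Kit. Here’s a rough sketch, assuming the standard text-recognition and translate artifacts are on the classpath (the FRENCH-to-ENGLISH pairing is just an example), where no image data ever leaves the phone:

        import com.google.mlkit.nl.translate.TranslateLanguage
        import com.google.mlkit.nl.translate.Translation
        import com.google.mlkit.nl.translate.TranslatorOptions
        import com.google.mlkit.vision.common.InputImage
        import com.google.mlkit.vision.text.TextRecognition
        import com.google.mlkit.vision.text.latin.TextRecognizerOptions

        // Recognize text in a camera frame, then translate it, all on device.
        fun translateMenuPhoto(image: InputImage, onResult: (String) -> Unit) {
            val recognizer =
                TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
            val translator = Translation.getClient(
                TranslatorOptions.Builder()
                    .setSourceLanguage(TranslateLanguage.FRENCH)
                    .setTargetLanguage(TranslateLanguage.ENGLISH)
                    .build()
            )
            recognizer.process(image).addOnSuccessListener { visionText ->
                // Fetches the compact language model once (roughly 30 MB);
                // every later call runs fully offline.
                translator.downloadModelIfNeeded().addOnSuccessListener {
                    translator.translate(visionText.text)
                        .addOnSuccessListener { translated -> onResult(translated) }
                }
            }
        }

    Point being: the pixels never have to touch a server.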

  • If they’re doing “analysis and debugging”, then faces will be tracked. If it’s on their servers for even 5 seconds, faces will be found and tracked, let alone 30 days. It’ll all feed into a web of personal connections that governments around the world use for law enforcement. They don’t need to keep the photos, just the metadata of who you are and where you were.

    Right now they already use street cameras and traffic cameras for face tracking. What they don’t have is many indoor cameras. They have to wait for somebody to walk by a webcam, post on social media, or have their photos “backed up to the cloud.”

    Face tracking feeds into a nebula of other network data (emails, IMs, calls, etc.). This becomes a web of data showing who you know and how well you know them. If somebody you know becomes a “Person of Interest”, then so do you.