Google’s life-changing AR smart glasses demo gave me shivers

When Viva and Yoshiko, one an English speaker and the other a Mandarin Chinese speaker, tried Google’s prototype smart glasses during a sneak peek at Google I/O 2022, I got shivers down my spine. The expressions on their faces, and the sudden ability to communicate meaningfully with someone who previously hadn’t fully understood what was being said, moved me a great deal. It’s an example of the type of technology I truly love: one that can change lives.

‘Subtitles for the world’

You’d be forgiven for missing this special moment during Google I/O, as it came right at the end of the marathon event and lasted just a few minutes. The smart glasses themselves were never named, shown only in a demonstration video, and revealed only as a concept. Google didn’t even show us the interface itself, or hint that the glasses would ever be released as an actual product.

Video: Breaking down language barriers with augmented reality | Google

It didn’t have to. Google sold the dream. Worn like a normal pair of glasses, they incorporate a small display in the lens that shows a real-time translation of another language in augmented reality (AR), overlaying the text on what you see normally. Google product manager Max Spear summed up the functionality perfectly, saying it was like “subtitles for the world.” Sit opposite someone who doesn’t speak your language, and the glasses provide a text-based translation of the conversation as it happens.

You may be thinking it’s similar to other translation technology — Google’s Pixel Buds have a translation feature, for example — but there are some distinct advantages here. For a start, seeing text on a screen inside a pair of glasses means you can maintain eye contact and follow along without pressing buttons or sitting through a long, awkward silence while a machine translates what’s being said. Text is less intrusive than hearing another voice, and because no one else hears the translation, it doesn’t feel unnatural.

Potential uses

Anyone who has traveled abroad, or spent time in communities where languages differ, will instantly understand how beneficial this kind of technology would be. I got shivers not only because of the joy on the faces of those testing the glasses, but because I immediately thought of how many times my own life would have changed if I’d had access to the same technology.

I remember having dinner with a friend in Japan, and although we both had a basic command of each other’s language, the conversation couldn’t flow. We ended up typing into Google Translate on a phone rather than using the app’s voice feature, because the environment was quite noisy. It worked and was quite fun, but it wasn’t perfect, and at times it was pretty awkward. The smart glasses would have changed that situation completely.

In-app translation on the new Google Pixel 6.

I lived in Greece for many years, and while I understand a fair amount of Greek, I can’t speak it well at all. How would Google’s smart glasses and their translation system have changed my time there? But as I think about these situations, and the many others I’ve personally come across where the smart glasses would have been really helpful, I quickly run into the big barrier facing not only Google’s prototype glasses, but any wearable translation tech.

The problem with any wearable device providing visual translations for two people who speak different languages is that all parties need to own and wear one. Conversations are two-way things, and if only one of you understands what’s being said, the device is only useful in situations where a response isn’t required. So, to make it work in my scenarios, everyone I know in Japan and Greece would have to wear smart glasses with Google’s translation technology inside. That seems … unlikely.

Where they would work without all parties wearing them is not in translation, but in transcription. This kind of visual transcription and enhancement could clearly be life-changing for Deaf or hard-of-hearing people. My father wears hearing aids, but I know he’d benefit from “seeing” the words, and he wouldn’t miss the annoying audio feedback hearing aids produce in some situations. It reminds me of how transformational products like the eSight smart glasses are for blind and partially sighted people.

Probably not Google Glass 2

As much as I’d like to think we’re seeing an early version of Google Glass 2, I don’t think we are. Instead, we’ve seen an amazing demonstration of Google’s rapid advancements in the speed and accuracy of its translation and transcription tech.

Google’s AR smart glasses translation feature demonstrated.

There were several other examples of Google’s improving language skills at Google I/O 2022. The company announced that another 24 languages have been added to Google Translate, serving some 300 million more people around the world and bringing the total to 133 supported languages. The additions were made possible by a new AI technique called Zero-Shot Machine Translation, which learns to translate a new language without ever seeing direct translation examples, building instead on existing knowledge and whatever new information is available, even when that information is limited.

Google’s AI is also getting better at understanding natural language and the way it’s used in general, as shown by the Look and Talk feature on the Nest Hub Max, also announced during I/O 2022. That’s before anyone with a Google Pixel 6 tries out Google Assistant’s ability to transcribe voice into message replies, or watches videos with live translated captions. Both are fast, and the message replies in particular are shockingly accurate.

I use Google Translate across different devices every day, usually translating Japanese, Korean, and Chinese to English. These are challenging languages to translate, and using the results effectively in conversation really requires some knowledge of how each language works; otherwise, embarrassing mistakes are easy to make. Hearing, and now seeing, how Google is innovating and improving its translation tech means my world continues to open up even more, and I think it will slowly make actually learning those languages easier, too.

Integrating Google’s enhanced language and translation technology into a pair of smart glasses is enormously powerful. If I can immediately see the benefits it would bring to me and those close to me, I can only begin to imagine the excitement someone who can’t hear will feel. You can keep the Pixel 7 and Pixel Watch. Demonstrations of incredible future technology like this are the reason I sit through more than two hours of Google I/O keynote presentations, and those shivers, as I begin to understand how transformational it all could be, are my reward.

Andy Boxall
Senior Mobile Writer