Google to Introduce 'Scene Exploration' to Lens

The feature will allow users to pan their camera across a scene and get instant information about multiple objects, but no release date has been announced.

Google’s scene search takes the technology platform one step closer to seeing things the way you do.
Photo: Flystock (Shutterstock)

Google I/O, the tech giant’s annual developer conference, is happening this week. Among many new features and reveals came the Tuesday announcement of “scene exploration,” a search tool that combines panoramic photos and text.

The new search tool will allow users to pan their device cameras sideways to capture a whole scene in front of them. Then, using both visual and text-based search, users can pair that array of objects with specific terms and keywords. From there, Google will overlay information about searchable items within the scene. Google Lens will now appear as an option in the Google App search bar by default.


“Scene exploration is a powerful breakthrough in our devices’ ability to see the world the way we do,” said Google senior VP Prabhakar Raghavan in the I/O presentation. “It gives us a superpower, the ability to see relevant information overlaid in the context of the world around us,” he added. Google did not say when the feature will debut.


In a hypothetical example, Raghavan presented the idea of looking for the perfect candy bar for a chocolate-snob friend. You could, in theory, use scene exploration to pan across the candy selection at a grocery store, and then pair that with the search terms “dark,” “nut-free,” and “highly-rated” to find the perfect treat at the intersection of your friend’s tastes. Interactive information about each of the candy bars would then show up on your phone screen, enabling you to quickly make an informed chocolate choice.


“Scene exploration uses computer vision to instantly connect the multiple frames that make up the scene and identify all the objects within it. Simultaneously it taps into the richness of the web and Google’s knowledge graph to surface the most helpful results,” said Raghavan.
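Raghavan's description boils down to three steps: merge object detections across the frames of the panned scene, dedupe objects that appear in more than one frame, and filter the result against the user's search terms. A minimal Python sketch of that pipeline, with mock data standing in for the vision model and knowledge graph (none of these names are Google's actual APIs):

```python
# Illustrative sketch only: real scene exploration runs a vision model
# over camera frames and queries Google's knowledge graph. Here both
# are mocked with hand-written data.

from dataclasses import dataclass


@dataclass(frozen=True)
class Detection:
    label: str                 # object the "vision model" identified
    attributes: frozenset      # facts the "knowledge graph" supplied


def merge_frames(frames):
    """Union detections across frames, deduping repeated objects."""
    seen = set()
    for frame in frames:
        seen.update(frame)
    return seen


def filter_by_terms(detections, terms):
    """Keep objects whose attributes cover every search term."""
    wanted = set(terms)
    return [d for d in detections if wanted <= d.attributes]


# Two overlapping frames of a candy shelf (mock data).
frame1 = [Detection("Bar A", frozenset({"dark", "nut-free", "highly-rated"})),
          Detection("Bar B", frozenset({"milk", "nut-free"}))]
frame2 = [Detection("Bar A", frozenset({"dark", "nut-free", "highly-rated"})),
          Detection("Bar C", frozenset({"dark"}))]

scene = merge_frames([frame1, frame2])
matches = filter_by_terms(scene, ["dark", "nut-free", "highly-rated"])
print([d.label for d in matches])  # ['Bar A']
```

Duplicate "Bar A" detections collapse to one object because the frozen dataclass is hashable, which is the toy equivalent of "instantly connecting the multiple frames that make up the scene."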

Last month, Google rolled out multisearch in beta testing, which lets users add text to visual searches for more targeted results using the Google app or Lens in iOS and Android. The feature is mostly useful for shopping, helping you identify furniture, clothes, or other covetable real-world items and find them for sale online.


Scene exploration is multisearch leveled up. Multi-multi search. Multiverse search, even. However, unlike multisearch, Google didn’t announce a beta test or say when scene exploration might be available.
