Google ARCore Update Brings Changes To ‘Visual Processing In The Cloud’

Google is updating its augmented reality Cloud Anchors system, which takes camera data from your phone, processes parts of it on Google’s servers, and produces a 3D map of the environment.

The technology allows for shared AR experiences where multiple camera-based gadgets can see the positions of one another. The change to the “Cloud Anchors API” is included in the latest version of Google’s augmented reality software ARCore, according to a Google blog post for developers published today.

“We’ve made some improvements to the Cloud Anchors API that make hosting and resolving anchors more efficient and robust. This is due to improved anchor creation and visual processing in the cloud. Now, when creating an anchor, more angles across larger areas in the scene can be captured for a more robust 3D feature map,” according to a post by Christina Tong, Product Manager, Augmented Reality at Google. “Once the map is created, the visual data used to create the map is deleted and only anchor IDs are shared with other devices to be resolved. Moreover, multiple anchors in the scene can now be resolved simultaneously, reducing the time needed to start a shared AR experience.”
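For developers, that host-and-resolve flow maps onto a handful of calls in the ARCore Android SDK. The Kotlin sketch below shows roughly how an anchor is hosted and how the resulting anchor ID (the only thing shared with other devices) is read back; the helper functions and the surrounding app plumbing are illustrative, not Google’s own sample code.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Session

// Enable Cloud Anchors on the ARCore session before hosting or resolving anything.
fun enableCloudAnchors(session: Session) {
    val config = Config(session)
    config.cloudAnchorMode = Config.CloudAnchorMode.ENABLED
    session.configure(config)
}

// Ask ARCore to host a locally created anchor; the visual data around the
// anchor is processed in the cloud to build the feature map described above.
fun hostAnchor(session: Session, localAnchor: Anchor): Anchor =
    session.hostCloudAnchor(localAnchor)

// Poll each frame until hosting finishes, then share only the returned ID
// with other devices (e.g. over the app's own networking layer).
fun hostedAnchorId(cloudAnchor: Anchor): String? =
    when (cloudAnchor.cloudAnchorState) {
        Anchor.CloudAnchorState.SUCCESS -> cloudAnchor.cloudAnchorId
        Anchor.CloudAnchorState.TASK_IN_PROGRESS -> null // still uploading/processing
        else -> null // an ERROR_* state; report and retry in a real app
    }
```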

I put a few pointed questions to Google representatives this morning for clarity on how exactly this functions. I asked for detail on what exactly “visual processing in the cloud” means and whether anything more than 3D point cloud and location data is passed to Google’s servers. I also asked Google to specify how this API functioned differently in the past. Here’s the full response I received over email from a Google representative:

“When a Cloud Anchor is created, a user’s phone provides imagery from the rear-facing camera, along with data from the phone about movement through space. To recognize a Cloud Anchor, the phone provides imagery from the rear-facing camera,” according to Google. “Using the cloud (instead of the device) to do feature extraction allows us to reach a much higher bar of user experience across a wider variety of devices. By taking advantage of the computing power available in the cloud, we are able to extract feature points much more effectively. For example, we’re better able to recognize a Cloud Anchor even with environmental changes (lighting changes or objects moved around in the scene). All images are encrypted, automatically deleted, and are used only for powering shared or persistent AR experiences.”
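On the receiving device, “recognizing” a Cloud Anchor corresponds to resolving a previously hosted anchor ID against the cloud-side feature map. A minimal Kotlin sketch, again assuming the ARCore Android SDK and an app-supplied channel for passing IDs between devices:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Session

// Begin resolving every shared anchor ID; per the update, several anchors in
// the same scene can be in flight at once. How the IDs reach this device is
// up to the app (assumed here to be a plain list of strings).
fun resolveSharedAnchors(session: Session, anchorIds: List<String>): List<Anchor> =
    anchorIds.map { id -> session.resolveCloudAnchor(id) }

// Check per frame whether an anchor has been matched against the cloud map.
fun isResolved(anchor: Anchor): Boolean =
    anchor.cloudAnchorState == Anchor.CloudAnchorState.SUCCESS
```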

For comparison, Apple is due to release its iOS 13 software soon, and its iOS 12 documentation explains a method of producing a shared AR world map between nearby devices without sending data to a remote server.

Persistent Cloud Anchors

Google’s ARCore update also added “Augmented Faces” support for Apple devices, and the company says it is looking for developers to test “Persistent Cloud Anchors,” offering a form to fill out expressing interest in “early access to ARCore’s newest updates.”

“We see this as enabling a ‘save button’ for AR, so that digital information overlaid on top of the real world can be experienced at anytime,” the Google blog post states. “Imagine working together on a redesign of your home throughout the year, leaving AR notes for your friends around an amusement park, or hiding AR objects at specific places around the world to be discovered by others.”
