The past few months have seen the standard run of summer developer conferences. Among the featured AR platforms were Apple (ARKit), Google (ARCore), and Niantic (Lightship). So we’re synthesizing their top AR updates for our XR Talks series (video and takeaways below).

After covering Apple’s ARKit updates last week, we move on to Google. It recently announced several updates, including its Geospatial API, “Multisearch Near Me,” and advancements to the ARCore platform. It also rolled out a concept design for its second swing at AR glasses.

That last piece got the most attention, and rightly so. It was a glimpse at a potential AR killer app in “captions for the real world.” But we must remind ourselves that this isn’t a shippable product today. Like a concept car, it’s a design principle and a peek at what could be.

AR Platforms Speak, Part 1: Apple

Finding the Beer Tent

Beyond its headline-grabbing – though still conceptual – AR glasses, a few I/O updates are live today. For example, in mobile, ARCore’s upgraded depth capabilities enable more realistic AR by placing virtual objects in front of or behind real-world items (occlusion).
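For developers, turning this on is a matter of opting into ARCore’s Depth API when configuring a session. Below is a minimal Kotlin sketch; the `session` handle, function names, and renderer hookup are illustrative assumptions rather than a complete app:

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

// Opt into depth estimation if the device supports it. Occlusion of
// virtual content is then driven by the per-frame depth image.
fun enableDepth(session: Session) {
    val config = session.config
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}

// Each frame, grab the latest depth image and hand it to a renderer,
// which tests virtual fragments against real-world depth (occlusion).
fun onDrawFrame(frame: Frame) {
    try {
        frame.acquireDepthImage16Bits().use { depthImage ->
            // Upload depthImage to a GPU texture for the renderer's
            // occlusion pass (renderer code omitted here).
        }
    } catch (e: NotYetAvailableException) {
        // Depth isn't available in the first few frames; try again later.
    }
}
```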

Google also says it has reduced the time to generate a depth map for an 8-meter room by about 15 percent, and that it can achieve overall coverage of 95 percent (depending on a room’s geometry and surface textures). ARCore can also now detect surface planes 17 percent faster.

Meanwhile, things are improving in outdoor AR. Google’s long-range depth capability now reaches 20 meters, even in direct sunlight. This lets it achieve occlusion in outdoor AR contexts. For example, the destination pin in AR navigation can sit realistically in front of or behind objects.

Speaking of AR navigation, Google also recently advanced its Live View urban navigation feature. As we’ve examined, Google has spun out that capability to developers through its Geospatial API, which exposes the localization and mapping engine behind Live View to third-party apps.
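To make that concrete, here’s a hedged Kotlin sketch of placing a world-anchored pin with the ARCore Geospatial API. The coordinates are hypothetical placeholders, and a real app would still need location permissions and rendering code:

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Turn on Geospatial mode, which localizes the device against
// Google's Visual Positioning Service (VPS) plus GPS.
fun enableGeospatial(session: Session) {
    val config = session.config
    config.geospatialMode = Config.GeospatialMode.ENABLED
    session.configure(config)
}

// Drop an anchor at a real-world coordinate, e.g. a festival's
// beer tent. The lat/long values below are placeholders.
fun placeBeerTentPin(session: Session) {
    val earth = session.earth ?: return
    if (earth.trackingState != TrackingState.TRACKING) return

    val anchor = earth.createAnchor(
        37.4220,                             // placeholder latitude
        -122.0841,                           // placeholder longitude
        earth.cameraGeospatialPose.altitude, // match camera altitude
        0f, 0f, 0f, 1f                       // identity rotation quaternion
    )
    // Attach a 3D pin or label to `anchor` in your renderer of choice.
}
```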

Use cases will be limited only by developer creativity, as APIs often go. We could see clever variations on 3D navigation, such as micro-mobility features like finding the closest scooter dock, or event-based AR activations like locating a stage or beer tent at a festival.

XR Talks: Google Scales Up Geospatial AR

Exercising Muscles

All in all, Google can achieve depth mapping on commodity hardware that’s comparable to the dedicated depth sensors required a few years ago (some may remember Google Tango). All the above is the baseline, which can be improved on devices with a second rear camera, such as the Pixel 6.

Speaking of which, Google has a challenge in that 1.2 billion Android devices represent a fragmented mix of firmware versions and device manufacturers. Things are simpler in Apple’s vertically-integrated stack, where there are only a few hardware variations to design around.

This challenge impacts ARCore updates like the above depth sensing because the camera is the primary input for estimating depth. Since ARCore is installed on a wide range of devices with varying camera quality, it must approximate feature parity across the board through software.

Fortunately, Google has exercised this muscle. Innovating with software to gain better optical capabilities is the name of the game in smartphone photography. With physical constraints on factors like focal length, software has stepped in to achieve effects like background blur.

And since the camera has become a central competitive battleground between iOS and Android over the past few years, Google has gotten good at compensating for optical hardware deficiencies with software. The point: ARCore has benefited as a byproduct.

We’ll pause there and cue the full video below. Stay tuned for more platform updates and breakdowns in the coming weeks as we continue this series with Niantic and others…

https://youtu.be/PM5rl4z9mto
