
Apple’s ARKit 3 Introduces Human Occlusion, Body Tracking, And Motion Capture

Apple showcases new upgrades to its AR platform during WWDC.

First introduced during the 2017 Worldwide Developers Conference, Apple’s ARKit platform has since grown into a powerful set of tools for developers looking to create high-end AR experiences on iOS devices. In keeping with its past two conferences, Apple used this year’s WWDC to debut ARKit 3, which introduces a slew of upgrades to the already-powerful AR platform.

Image Credit: Apple

A majority of the upgrades included with ARKit 3 revolve around human presence within an augmented environment. Whereas previous versions of the platform were limited when it came to human interaction, ARKit 3 introduces human occlusion to the mix. This means augmented objects rendered over a real-world environment will appear in front of or behind a person depending on where that person stands relative to the iOS device. While this may seem like a small addition, perspective is everything in AR, and being able to physically move around an AR object adds significantly to the overall immersion.
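For developers, turning this on is essentially a configuration flag rather than a new rendering pipeline. Here is a minimal Swift sketch, assuming a RealityKit ARView is already on screen, that opts a world-tracking session into people occlusion with depth:

```swift
import ARKit
import RealityKit

func enablePeopleOcclusion(in arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()

    // People occlusion with depth is only available on newer hardware,
    // so check for support before opting in.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }

    arView.session.run(configuration)
}
```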

Motion capture also makes an appearance, allowing you to track a person’s body position and movement down to individual joints and bones. This opens up the opportunity for more detailed interactions with AR elements and experiences. The demonstration shown at WWDC was rough, but it’s a promising step toward the future of AR-based motion capture.
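In code, body tracking surfaces as a dedicated session configuration that reports a tracked skeleton for each detected person. A rough sketch (the joint choice and the print statement are purely illustrative) might look like this:

```swift
import ARKit

final class BodyTrackingDelegate: NSObject, ARSessionDelegate {
    func start(session: ARSession) {
        // Body tracking is only supported on recent devices.
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        // Each tracked person is reported as an ARBodyAnchor carrying a full skeleton.
        for case let body as ARBodyAnchor in anchors {
            if let leftHand = body.skeleton.modelTransform(for: .leftHand) {
                // The fourth column of the transform holds the joint's position
                // relative to the body's root joint.
                print("Left hand:", leftHand.columns.3)
            }
        }
    }
}
```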

Image Credit: Apple

Thanks to the TrueDepth camera featured on the iPhone X, iPhone XS, iPhone XS Max, iPhone XR, and iPad Pro, ARKit face tracking now supports up to three faces at once for apps that use the front-facing camera, such as Memoji and Snapchat. Speaking of cameras, ARKit 3 allows for simultaneous use of the back and front cameras, opening up the potential for unique experiences where AR content projected via the back camera can be affected by facial expressions captured via the front camera; imagine an AR character that can identify and respond to a user’s emotions, or an AR game controlled entirely by facial expressions.
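Both capabilities are exposed as configuration properties. A brief sketch, hedged on device support, of how the two options can be enabled:

```swift
import ARKit

// Front camera: track up to three faces at once on TrueDepth devices.
let faceConfiguration = ARFaceTrackingConfiguration()
faceConfiguration.maximumNumberOfTrackedFaces =
    min(3, ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces)

// Back camera: run world tracking while also receiving the user's face
// from the front camera, where the hardware supports it.
let worldConfiguration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    worldConfiguration.userFaceTrackingEnabled = true
}
```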

For those interested in taking advantage of ARKit 3’s exciting possibilities, Apple has announced RealityKit, a high-level framework built specifically to assist less-seasoned developers with creating AR experiences on the iOS platform. RealityKit offers a variety of AR-based services, including photo-realistic rendering, multiple camera effects, various animations, advanced physics, and a Swift API. RealityKit even assists with networking during multiplayer experiences and features automatic scalable performance for all compatible iOS devices, meaning any experience you build on the iPhone will work on the iPad.
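As a rough illustration of the Swift API’s level of abstraction, here is a minimal sketch (the object’s size, color, and anchoring plane are arbitrary choices for the example) that places a physics-ready box on a detected horizontal surface:

```swift
import RealityKit
import UIKit

let arView = ARView(frame: UIScreen.main.bounds)

// Anchor content to the first horizontal plane ARKit finds.
let anchor = AnchorEntity(plane: .horizontal)

// A 10 cm box with a simple non-metallic material.
let box = ModelEntity(mesh: .generateBox(size: 0.1),
                      materials: [SimpleMaterial(color: .blue, isMetallic: false)])

// Collision shapes are required for physics interactions and tap hit-testing.
box.generateCollisionShapes(recursive: true)

anchor.addChild(box)
arView.scene.addAnchor(anchor)
```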

Image Credit: Apple

Working in tandem with RealityKit, Reality Composer allows you to create custom interactions using a large pool of animation and audio resources. Developers are free to import their own USDZ files to use in their experiences, or draw from the hundreds of pre-built digital objects offered within the AR library. Thanks to the inclusion of procedural content generation, the size, style, and various other attributes of these objects are highly customizable.

Each object can be paired with an interaction that’s triggered by a specific action, such as tapping on the screen or coming within close proximity of the object itself. Developers are also free to play with spatial audio as well as a variety of animations. And thanks to Xcode and live link integration, you can move easily between devices to test your experience on a variety of form factors. A full list of all the new improvements can be found here.
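Scenes authored in Reality Composer can be exported as .reality files and loaded straight into a RealityKit view. A hedged example, where “TapExperience” is a hypothetical name standing in for whatever your exported resource is actually called:

```swift
import RealityKit

// "TapExperience" is a hypothetical .reality file exported from Reality Composer
// and bundled with the app; substitute the name of your own exported scene.
func loadComposedScene(into arView: ARView) {
    do {
        let composedAnchor = try Entity.loadAnchor(named: "TapExperience")
        // Behaviors authored in Reality Composer (tap triggers, proximity triggers,
        // and so on) become active once the anchor is added to the scene.
        arView.scene.addAnchor(composedAnchor)
    } catch {
        print("Could not load Reality Composer scene:", error)
    }
}
```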

About the Scout

Former Writer (Kyle Melnick)
