Capturing what someone is experiencing when they’re head-first in a VR game has been an interesting problem to crack. Matching up live video with the digital environment using a green screen setup, though, lets you create a video that places the player right in the action, one that communicates the immersion of a VR game better than a simple first-person screen capture. Oculus revealed in a recent blog post that it has spent the past few months bringing native mixed reality capture support to the Rift, and it’s available today for developers to start creating mixed reality videos.

Oculus has published a guide teaching developers how to capture mixed reality using two key pieces of equipment (besides a VR headset): a green screen and an external camera. The company says developers can create mixed reality content with either a stationary camera or a mobile camera that can be attached to a virtual, in-game object, letting you capture the scene from various vantage points and making the action even more immersive for non-VR viewers.
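At its core, mixed reality capture is a layered chroma-key composite: the green-screened live feed of the player is keyed out and sandwiched between a background and a foreground render of the game scene, both drawn from the external camera's point of view. Purely as an illustration of that compositing step (and not Oculus' actual pipeline), a minimal sketch in Python with OpenCV might look like the following, assuming the engine hands you separate background and foreground passes and that empty foreground pixels are black:

```python
import cv2
import numpy as np

def composite_mixed_reality(game_background, live_frame, game_foreground,
                            lower_green=(35, 60, 60), upper_green=(85, 255, 255)):
    """Layer a green-screened live frame between the game's background and
    foreground render passes. All three images are BGR arrays of the same size.
    The HSV thresholds and the 'empty foreground pixels are black' convention
    are assumptions for this sketch, not Oculus' implementation."""
    # Key out the green backdrop: pixels inside the HSV range are backdrop.
    hsv = cv2.cvtColor(live_frame, cv2.COLOR_BGR2HSV)
    backdrop = cv2.inRange(hsv, np.array(lower_green), np.array(upper_green))
    player = backdrop == 0  # everything that isn't green backdrop

    # Game scene behind the player, then the keyed-out player,
    # then any game geometry that should appear in front of them.
    composite = game_background.copy()
    composite[player] = live_frame[player]
    foreground = game_foreground.sum(axis=2) > 0
    composite[foreground] = game_foreground[foreground]
    return composite
```

In a real setup, the virtual camera rendering those passes also has to match the physical camera's position, rotation, and field of view, which is part of why Oculus stresses following its setup documentation closely.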

[gfycat data_id="IckyQuaintGermanshorthairedpointer"]

To help build a fixed or mobile camera rig that can capture you in the physical world, Oculus has also provided a 3D-printable CAD model that lets webcams and small DSLRs be attached to an extra Oculus Touch controller and either mounted on a tripod or held by hand.

image courtesy Oculus
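Mounting the camera on a tracked Touch controller is what tells the software where the physical camera sits in the virtual scene: the runtime reports the controller's pose every frame, and a fixed offset, measured once for the mount, carries that pose over to the camera lens. As a rough, hypothetical sketch of that bit of math, with made-up offset values rather than anything taken from Oculus' tools:

```python
import numpy as np

def pose_matrix(position, quaternion):
    """Build a 4x4 world-from-local rigid transform from a position (x, y, z)
    and a unit quaternion (w, x, y, z), the form a VR runtime typically reports."""
    w, x, y, z = quaternion
    rotation = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = position
    return m

# Fixed controller-to-camera offset for the printed mount, measured once
# during calibration (these numbers are placeholders, not real geometry).
MOUNT_OFFSET = pose_matrix((0.0, -0.05, 0.02), (1.0, 0.0, 0.0, 0.0))

def camera_world_pose(controller_position, controller_rotation):
    """world_from_camera = world_from_controller @ controller_from_camera."""
    return pose_matrix(controller_position, controller_rotation) @ MOUNT_OFFSET
```

Because the offset is rigid, the camera can be moved freely on a tripod or by hand and the virtual camera will follow it without per-frame recalibration.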

That’s not to say everyone with a VR-ready computer can create these sorts of videos though, as the requirements for mixed reality capture are likely higher than the ‘minimum spec’ published by Oculus, which allowed computers as affordable as $500 to run VR games. To help with the additional bandwidth requirements of mixed reality capture, Oculus suggests a number of components, including the Oculus-approved Falcon Northwest Tiki computer (MSRP $2,899). The company hasn’t released any hard and fast requirements for mixed reality capture, but has mentioned that its selected motherboards "work well" when paired with 16 GB of RAM, an SSD, and a GTX 1080.


As for cameras, Oculus has built in support for “any USB camera,” but says that higher-spec cameras, including HDMI cameras, will predictably result in higher-quality mixed reality footage.
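In practice, "any USB camera" means anything the operating system exposes as a standard capture device; an HDMI camera typically enters the chain the same way via a capture card. As a small, generic example of pulling frames from such a device (entirely outside Oculus' tooling, with the device index and resolution as assumptions):

```python
import cv2

# Open the first capture device the OS exposes (index 0 is an assumption).
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each frame here is what would be chroma-keyed and composited with the game.
    cv2.imshow("live feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```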

3D-printable mixed reality mount, image courtesy Oculus

Mixed reality setups are notoriously fiddly to build, and the company says its native mixed reality integration “does require numerous steps and a certain level of technical proficiency,” and that users should follow the documentation “as precisely as possible, and pay special attention to the directions regarding Oculus sensor setup, USB ports, and chipsets.” Alongside Oculus’ setup guide, guides for integrating mixed reality capture support into Unity and Unreal apps are also available (Unity, Unreal, Native).

As pioneers of mixed reality capture, Northway Games, developers of breakout success Fantastic Contraption (2016), published an extensive guide last year on how to set up mixed reality capture for HTC Vive headsets. Valve later incorporated the same green screen setup into its official announcement of the HTC Vive.




  • Nate Vander Plas

    This seems cool, but what I think would be an awesome use of very similar tech would be VR-aided 3D scanning! Rather than an expensive 3D scanner with proprietary sensors, etc., why not attach a VR controller to a DSLR and capture photogrammetry, except that instead of needing heavy computation to triangulate where each photo was taken, the position of the camera would be determined by the VR controller’s position in space? I imagine an app where you could even see the progress of your scan in 3D within the headset. Would be awesome if integrated with something like this: https://rgl.epfl.ch/publications/Schertler2017Field