Back in June Meta revealed its latest XR R&D efforts through a handful of different prototypes designed to prove out various concepts. The next step, the company says, is to bring it all together into a single package that could one day reach real users.

If you spend time keeping up with the latest XR research (as we like to do), you’ll know the vast majority of the work is about proving that something is possible—not necessarily that it’s viable as tech that could be made into a real product, where challenges like cost, manufacturing, and durability can nix even the most innovative technologies.


Take, for instance, the ‘Starburst’ HDR VR headset prototype that Meta recently revealed. The goal of the research was simply to build something that worked so the team could quantify the impact of 20,000-nit HDR on immersion. But from a market-viability standpoint, the tech used in the prototype is far too big, draws too much power, and gets way too hot to be built into a real product without completely changing the underlying architecture.

And that’s the case for many of Meta’s R&D prototypes—they are designed to demonstrate a concept, but they aren’t necessarily designed to be market-viable.

But Meta’s next prototype, the company said alongside the reveal of its latest R&D in June, is an effort not just to demonstrate a concept, but to bring much of the company’s latest research together into a single headset—and in an architecture that could actually form the foundation of future products.

That doesn’t mean it would be cheap and it doesn’t mean it would be easy, but if it comes together as the company hopes, it will “be a game changer for the VR visual experience,” says Michael Abrash, Chief Scientist at Meta Reality Labs.


The prototype concept is called Mirror Lake, and while Meta has been careful to say that it hasn’t even been built yet, the company says its goal is to pack many of its latest innovations into this single system.


According to Douglas Lanman, Director of Display Systems Research at Meta Reality Labs, Mirror Lake will include electronic varifocal lenses (including prescription correction), multi-view eye-tracking, holographic pancake lenses, advanced passthrough, reverse passthrough, and a laser backlight in a goggles-like form-factor. Let’s break those down briefly one-by-one.

Electronic Varifocal Lenses

Electronic varifocal lenses would allow the headset to focus correctly at whatever depth the user is looking, and could also eliminate the need for eyeglasses. This would make the headset not only more immersive but also more comfortable, as it would fix the longstanding ‘vergence-accommodation conflict’ that afflicts most headsets.
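
To make the idea concrete, here’s a minimal sketch of how an eye-tracked varifocal system might translate the user’s vergence distance into a lens power, folding a spherical prescription in as a simple offset. The function name, the clamp value, and the diopter model are illustrative assumptions, not Meta’s implementation:

```python
# Illustrative sketch only, not Meta's implementation. Assumes an
# electronic varifocal lens whose optical power is set in diopters and
# eye tracking that reports the distance the user's eyes converge on.

def required_lens_power(vergence_distance_m: float,
                        prescription_sphere_d: float = 0.0) -> float:
    """Lens power (diopters) for a focus target at the given distance,
    with the user's spherical prescription applied as an offset.

    Diopters are the reciprocal of distance in meters, so an object at
    0.5 m calls for +2.0 D of accommodation relative to infinity.
    """
    vergence_distance_m = max(vergence_distance_m, 0.1)  # near-focus clamp
    return 1.0 / vergence_distance_m + prescription_sphere_d

# User converging on an object 0.5 m away, with a -1.5 D (mildly myopic)
# prescription entered into the headset:
print(required_lens_power(0.5, prescription_sphere_d=-1.5))  # 0.5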

Multi-view Eye-tracking


Multi-view eye-tracking could increase the accuracy of eye-tracking (a cornerstone for other features like dynamic distortion correction) by giving the system more views of the user’s eye. Most eye-tracked headsets today have a camera inside the headset, near the lens, watching the user’s eye from a sharp angle to estimate its movement—and that sharp angle makes it harder to get an accurate estimate. Meta says Mirror Lake could include an additional eye-tracking camera in each strut of the headstrap near the user’s temples. These cameras would get a better view of the user’s eyes by looking at a reflection from a holographic film embedded in the headset’s lenses.
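
As a rough sketch of why extra viewpoints help, one simple fusion strategy is a confidence-weighted average of per-camera gaze estimates. The data shapes and weights below are assumptions for illustration, not Meta’s algorithm:

```python
# Illustrative sketch: fusing gaze estimates from multiple cameras.
# A temple-mounted camera seeing the eye via the holographic reflection
# adds a second viewpoint; each camera contributes a gaze direction plus
# a confidence, and we take a confidence-weighted average.
import numpy as np

def fuse_gaze(estimates):
    """estimates: iterable of (gaze_unit_vector, confidence) pairs.
    Returns a single fused unit gaze vector."""
    weighted = sum(conf * np.asarray(vec, dtype=float)
                   for vec, conf in estimates)
    return weighted / np.linalg.norm(weighted)

# A steeply angled in-lens camera (lower confidence) combined with a
# temple camera that has a clearer reflected view (higher confidence):
print(fuse_gaze([((0.10, 0.02, 0.99), 0.4),
                 ((0.05, 0.00, 1.00), 0.8)]))
```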

Holographic Pancake Lenses


Holographic pancake lenses are novel lenses that are thinner and lighter while allowing the user’s eye to be closer to the lens, reducing headset bulk considerably.

Advanced Passthrough

Advanced passthrough features will allow a sharper and more realistic view through the headset for mixed reality capabilities. Lanman didn’t go into much detail about the so-called “neural passthrough camera” proposed for Mirror Lake, but ostensibly it will be akin to a next-gen version of the Passthrough+ feature on Quest 2, which aims for accurate depth representation and low latency.

Reverse Passthrough

Prior reverse passthrough research | Image courtesy Meta Reality Labs

Reverse passthrough is a somewhat funky technology that projects the user’s eyes onto the outside of the headset for others to see. Since eyes are such an important part of communication, this feature aims to make it less weird to have a conversation with someone who is wearing a headset.

Laser Backlight

Laser backlighting would allow the headset to display a wider range of colors to more accurately represent what a user would see in real life. It could also make for a brighter display with a wider dynamic range.

– – — – –

Meta has demonstrated many of these technologies independently in various prototypes, but Lanman says the goal is for Mirror Lake to be built around a “practical architecture,” meaning something that’s viable outside of a lab setting—something that could actually form the basis of a real product.


Abrash said Mirror Lake offers a glimpse of what a “complete next-gen display [VR] system could look like,” but also warns that the architecture won’t be conclusively proven (or disproven) until the headset is actually built.

It will be years yet before we see something like Mirror Lake actually reach the market. Meta’s upcoming headset, currently known as Project Cambria, will only include a fraction of the capabilities of Mirror Lake. Even the Holocake 2 prototype—which is more advanced than Cambria—is still several steps behind what Meta is envisioning with Mirror Lake.

Still, Meta CEO Mark Zuckerberg insists that the billions the company is throwing at its XR R&D efforts are not merely academic.

“We’re the company that is the most serious and committed to basically looking at where VR and AR need to be 10 years from now. [We ask] ‘what are the problems that we need to solve’ and just systematically work on every single one of them in order to make progress,” says Zuckerberg. “There’s still a long way to go, but I’m excited to bring all of this tech to our products in the coming years.”




  • Duane Aakre

    I’m not clear how the Electronic Varifocal Lenses would remove the need for prescription lenses. Would they have to be designed for your specific prescription or would they somehow be adjustable to correct for whoever puts the headset on? The first would be acceptable, but the second would certainly be desirable if you were going to demo the headset to others who also wear glasses.

    • Ben Lang

      If users know their prescription they should be able to enter it into the headset for it to adjust. However, Meta also talked about thin holographic inserts for correcting vision, so both approaches are possible.

      • psuedonymous

        Varifocal lenses would correct for myopia and presbyopia by varying lens power in software. That would not correct for astigmatism and other non-spherical-power vision disorders, which would require lens inserts.
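
        To put that in standard optics terms (a textbook identity, not something from Meta’s materials): the power of a sphero-cylindrical prescription varies with the meridian angle.

        ```latex
        % Power along meridian \theta, with sphere S, cylinder C, axis \alpha:
        P(\theta) = S + C\,\sin^2(\theta - \alpha)
        ```

        A varifocal element that varies only a single scalar power can supply the angle-independent sphere term S (myopia, presbyopia), but not the angle-dependent cylinder term—which is why astigmatism would still fall back to lens inserts.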

        • kontis

          Wasn’t the near-eye light field display research talking about correcting astigmatism with software?

          • Sven Viking

            Lightfields could do it but not plain varifocal.

  • Bartholomew

    Sony can’t wait to copy.

    • VR5

      Sony came up with the Halo Strap, which is an original design and not copied. Zuck is also very upfront about Meta copying competitors’ ideas and solutions, it is not wrong to learn from others. And if you patent an idea, you can also make competitors pay up.

      • kontis

        So impressive. They made a strap.

        • Christian Schildwaechter

          That Facebook liked so much that they decided to get one for the Rift S too. Which was built by Lenovo. Who had licensed the strap design from Sony for their WMR headset the Rift S was based on.

        • VR5

          A strap that solves the problem of weight and enables better comfort. PSVR also had an RGB screen when everyone else had PenTile (which is probably the biggest factor in why it was so expensive, other than recouping R&D). Comfort and image quality are big factors in HMDs, so they did cover those without copying anyone.

          As for tracking, visual inside-out tracking was more of a byproduct of Microsoft’s HoloLens, and outside-in was considered good enough. It was only Valve that did inside-out (the prototype that can be seen in the HL:A Final Hours documentary was even visual inside-out, but plastering a room with markers was not practical, of course). Markerless tracking that can recognize random surfaces is computationally expensive though, and it was mostly in AR, where it is necessary, that the tech was advanced until it made its way to VR HMDs.

          And while Sony’s light-based outside-in tracking was inferior to Oculus’s IR-based tracking, it was an original design that differed from competitors: the Wii remote’s marker-based IR inside-out tracking, and the Kinect’s markerless outside-in tracking.

  • xyzs

    Meta Says New Prototype Will Put Its Cutting-edge R&D in a Market-viable Headset
    … but they added that people need to be patient, the first sneak peek of the teaser of a blurred image of a product concept won’t be shown until 2035.

  • Rogue Transfer

    You want this for gamers, with its low FOV? Personally, this is more a pair of glasses for casual social & business use than for immersive gaming. Just look at the small lenses covering the eyes—it can’t have more than 85°. Just hold your fingers cupped round your eyes to see how limited the FOV would be with this Mirror Lake concept.

  • namekuseijin

    imagine getting amazing resolution, varifocal lenses, super bright HDR, small form factor… to run Super Hot (pun intended too)

    they need better SoCs… mobile chips evolve slowly because the masses at large only play crappy 2D casual games like Candy Crush and Clash of Clans. Ok, so most Meta audiences only chop boxes or onions in minigames too—but the difference is that everyone complains about cartoon avatars. People want to look like themselves in the metaverse, and the metaverse needs to look better than a bland cartoon room. So the need is there for GPUs capable of photorealism… and this incidentally benefits games with more performance.

    Zuck should spend money on custom mobile GPUs, not on idiotic cartoon worlds nobody wants…

    • Lhorkan

      I think the bet for realistic avatars is on cloud rendering, not high performing mobile chipsets.

      • VR5

        No, it’s an AI. Takes a lot of energy to train (one time, one instance) but only a little energy when performing (on many devices, many instances).

    • VR5

      By the time this tech is ready, the GPU market will have advanced on its own, even without Meta investing in it. Mobile GPUs are seeing wide use and have been advanced for decades. It is an established market with many players supporting it and developing better chips all the time—as opposed to this kind of VR tech, which wasn’t seeing funding at the scale Meta is now pouring into it.

      Also, with codec avatars, Meta is researching a path for more efficient rendering on mobile: instead of being calculated, the images are “imagined” by a neural network that has learned to create by drawing on memory—memory of intuitive knowledge acquired through repetitive training. There’s no guarantee that this can replace traditional math-based rendering for gaming, but looking at what codec avatars and DALL-E already can do, there’s reason to be optimistic.

    • Christian Schildwaechter

      Mobile chips evolve very fast, much faster than desktop chips. Qualcomm has had a yearly GPU performance increase of >30% between SD820 (Oculus Go) and SD865 (Oculus Quest), and Apple showed in several presentations that they got 40%+. The constraint for mobile is always power, and if you compare performance/watt, everything Nvidia has to offer is trash compared to mobile SoCs. Nvidia is currently going the way Intel went with the Pentium 4, increasing power consumption to get more speed with dwindling benefits. While each new mobile SoC has to stay at roughly the same TDP, rumors indicate that the RTX 4070/4080 will need at least 33% more power than the 3070/3080. And all of this has a lot to do with physics and is unrelated to casual 2D games, especially since modern game engines and operating systems have been using 3D acceleration functionality for 2D for more than a decade.
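
      As a quick sanity check on that rate (the generation years are an assumption here: SD820 around 2016, SD865 around 2020), a 30% yearly gain compounds to nearly 3x over four years:

      ```python
      # Back-of-envelope check: a 30% yearly GPU improvement compounded
      # over the ~4 years between Snapdragon 820 (~2016) and 865 (~2020).
      years = 2020 - 2016
      print(f"{1.30 ** years:.2f}x")  # ~2.86x cumulative GPU performance
      ```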

      And the idea that Meta could simply create their own, better SoC is extremely unrealistic. Meta is a software company at heart; the only hardware they really developed themselves was for improving data center server infrastructure. They use Qualcomm SoCs for the Quest, and every single one of their HMDs except the Rift S (Lenovo) was built (and at least partly designed) by Chinese XR giant Goertek, who also creates all the Qualcomm reference HMDs and all the Pico HMDs, which is why these are all almost identical.

      What makes the Quest special/better is all the software, not the hardware. The chance that a company of software engineers with lots of money could just decide to beat the world leading SoC companies is basically zero. It took Apple more than a decade plus buying SoC companies with leading efficiency ARM designs to get to where they are today, and if you look at Intel struggling with ARC despite having 20 years of iGPU experience, Meta sticking with Qualcomm is probably the much safer bet.

      • Cl

        The upcoming GPUs are more power-efficient than this gen. They only require more power because they are trying to squeeze out all the performance they can.

  • Thud

    Reverse passthrough is WORSE than a blank headset

    • VR5

      For AR while still talking to people around you, no it isn’t. But it is worse than regular AR glasses (which suffer from low FoV, though).

      • Thud

        Disjointed eyes are more disconcerting and uncanny valley than no eyes at all in my opinion.

        • ViRGiN

          and… what if it’s “very good”? all you’ve seen is just a short preview of a lab product lol

          • Sven Viking

            To be honest I think I might still prefer to make them look like sunglasses and save the weight, heat, and power consumption from the additional displays.

          • David Wilhelm

            If they share a backlight it’s probably a relatively low impact but I hear ya.

          • Sven Viking

            Would that work with a “laser backlight”? (Or alternately would it risk outside lights bleeding through?)

          • Thud

            I know, ViRGiN. I mentioned Meta without fawning and kissing their ass endlessly like you. Sorry.

        • VR5

          You might be right, but I reserve judgement until I see the tech in action, and for a prolonged time. It might be better than one would expect, and you can get used to a lot of things; it might not be an issue after a while. The important thing is that you can make eye contact.

    • kontis

      It’s research showing what’s possible, not the final version. You are focusing too much on the polish instead of the technical aspects.

  • Newlot

    The day something akin to just 80% of the capability of such a headset is released for a price of >500 is when Zuck will get his iPhone moment.

  • Keng Yuan Chang

    Please provide robust hardware and flexible software APIs for people to build things around.

  • Newlot

    Does anyone happen to know how much compute relative to the XR2 would be required to run a 10k-nit, 8K-resolution, full human FOV headset? By how much does the compute requirement decrease with foveated rendering?

    If Moore’s law holds we could thus make an approximation for when this could hit consumer markets. On the other hand, the compute may be transferred to local devices (phones) or the cloud.

    • Rudl Za Vedno

      8K is basically 4x4K. Add 1.5x to accommodate barrel distortion and you get 6x4K resolution. So something non-existent with around 145 teraflops of FP32 (float) performance should be able to render such a resolution at around 72-90Hz. The XR2 has around 1.3 teraflops. If you add dynamic foveated rendering with eye tracking you could theoretically get away with 14 teraflops (10% render resolution), but a GPU with 42 teraflops FP32 (30% render resolution) is more realistically achievable atm. That’ll probably be roughly the performance of the RTX 4090 and Navi 31 (7950XT).
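
      Rebuilding that arithmetic as stated (the per-resolution cost and the render fractions are the comment’s assumptions, not measured figures):

      ```python
      # The reasoning, spelled out: "8K" as 4x the pixels of 4K, times
      # 1.5x supersampling headroom for barrel distortion, gives 6x4K to
      # render. The ~145 TFLOPS full-resolution estimate then scales down
      # by the fraction of pixels actually rendered under foveation.
      full_res_tflops = 145    # estimated need for 6x4K at 72-90Hz
      xr2_tflops = 1.3         # Snapdragon XR2, roughly

      print(full_res_tflops * 0.10)        # ~14.5 TFLOPS at 10% rendered
      print(full_res_tflops * 0.30)        # ~43.5 TFLOPS at 30% rendered
      print(full_res_tflops / xr2_tflops)  # ~112x beyond an XR2
      ```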

      • Newlot

        Thank you for your informative comment. That’s really interesting. Do you have any speculation as to how much HDR (say 10k nits) increases the need for computation, if at all? How much faster do you think next year’s XR3 will be? At what number (or when) do you think Qualcomm’s XR chips will match the 4090 Ti? We could backtrack and see what GPU the XR2 matches; unfortunately I don’t understand enough about computers to do that. Would you mind explaining barrel distortion, the 1.5x SS, 1.3 teraflops, and 145 FP32 (float) to me? I don’t have a formal education in technology but am willing to learn.

        • Christian Schildwaechter

          HDR and nits are related, but not in the way you assume. High dynamic range usually means finer color gradations than the typical 8bit/256 steps per color for R/G/B—usually 10bit/1024 steps or 12bit/4096 steps. Going much higher makes no sense; AFAIR humans can distinguish more than 256 shades of green or grey, but fewer than 256 of blue. Increasing the bits per pixel also increases the computation required, although not linearly. Computers usually operate in powers of 2: both a 24bit and a 30bit RGB pixel would fit into the same 32bit pipeline, and with another 8 bits for transparency a 32bit RGBA pixel still fits, but 40bit HDR RGBA now needs double that at 64bit, as would 36bit RGB (very, very oversimplified).
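
          A tiny illustration of that alignment point (oversimplified in the same spirit as the comment; the format labels are just examples):

          ```python
          # Pixel formats get padded up to the next power-of-two word
          # size, so 24-bit and 30-bit RGB both fit a 32-bit word, while
          # 40-bit HDR RGBA (10 bits/channel) spills into a 64-bit word.
          def padded_word_bits(bits_per_pixel: int) -> int:
              word = 8
              while word < bits_per_pixel:
                  word *= 2
              return word

          for fmt, bpp in [("RGB 8bpc", 24), ("RGB 10bpc", 30),
                           ("RGBA 8bpc", 32), ("RGBA 10bpc", 40)]:
              print(f"{fmt}: {bpp} bits -> {padded_word_bits(bpp)}-bit word")
          ```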

          The nits just describe how bright the image is. Good laptop displays feature 500nits, professional monitors for film productions offer sustainable 1000nits and higher peak values. 1000 nits is about what you’d need to read a display with the sun shining directly on it. Higher nits are necessary in AR due to the very bright environment, and to make use of the larger contrast of HDR displays, but have no influence on computational requirements.

          As for the XR3: assuming that Qualcomm chips improve at the same speed as in the past, a 2023 XR3 would be 2.3 times faster than the XR2. Some time ago I did the math in relation to the PS5, which has a GPU about as fast as an RTX 2070 super, about twelve times faster than the Quest 2. Again assuming continuous development, a Quest released in 2029 would be about as fast as a PS5 released in 2020. We don’t have performance numbers for the 4090Ti yet, but the answer to when an XR chipset will be able to match it is “much, much later”.
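
          Rerunning that projection (the 30% yearly rate and the rough 12x PS5-to-Quest-2 gap are the comment’s own assumptions):

          ```python
          # Quest 2's XR2 (~1.3 TFLOPS, 2020) growing ~30% per year until
          # it reaches ~12x its starting point, the PS5-class estimate.
          tflops, year = 1.3, 2020
          while tflops < 1.3 * 12:
              tflops *= 1.30
              year += 1
          print(year, round(tflops, 1))  # 2030, ~17.9 TFLOPS at this rate
          ```

          Whether that lands in 2029 or 2030 depends on how loosely “about twelve times” is read; nine years of 30% growth gives roughly 10.6x.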

          That doesn’t mean we will have to wait that long for decent graphics. Your initial question referencing Moore’s law assumes that we will get there by brute-forcing it with ever more computational power. It is much more likely that we will get there by doing it smarter. Dynamic foveated rendering is one way to do that: instead of using 10x the rendering power, it simply reduces how much has to be rendered in the first place. Other approaches are smart upscaling and image reconstruction, or projecting static images on simplified dynamic models with fast and low-power neural chips, but it is much harder to estimate how well these will work and when they will become available.

          • XRC

            Some numbers from one of the quickest desktop GPUs currently available, the Galax HOF 3080 Ti (12GB GDDR6X):

            384-bit memory interface
            Memory bandwidth 912 GB/sec
            Pixel Rate 199.9 GPixel/s
            Texture Rate 571.2 GTexel/s
            FP32 (float) 36.56 TFLOPS
            FP64 (double) 571.2 GFLOPS (1:64)

            Downside is power consumption: measured peak load is nearly 500 watts (the HOF has 24 power phases), so ideally an 850-watt or higher platinum PSU.

            4XXX power consumption will be troubling for many systems, requiring a PSU upgrade.

          • Christian Schildwaechter

            On the bright side, you’ll save a lot on gas heating during winter.

          • Newlot

            Appreciate the reply, it’s very interesting. Thank you! Where did you learn all of that?

          • Christian Schildwaechter

            See the answer I just posted under the Pico 4 Controller article.

  • kontis

    Important point for OLED absolutists:
    – Douglas Lanman, one of the world’s best HMD researchers, specifically asked manufacturers to focus on developing new LCDs.

    • Newlot

      Why though? Why?

      • Christian Schildwaechter

        One reason may be the lenses. So far we use traditional optics, including pancake lenses, to magnify the image from a traditional display. There OLEDs have benefits because the brightness comes from the pixels themselves, instead of the LC pixels just filtering out some of the light coming from a bright backlight.

        But traditional optics will always be rather thick. Ways around that are using waveguides that channel the light in from the side, and using holographic lenses that reconstruct the optical path of traditional lenses in a thin film. Both require separating the colors that are usually mixed in a display into separate channels, with holographic lenses only working with coherent laser light. This is what even enables going for 10,000-nit brightness, but it would also allow an LC-based filter to work much more efficiently, because the filter can be tuned to exactly one wavelength. I’m not even sure if LCs would be used here (in the same way), but a laser-illuminated LCD could offer contrast similar to OLED displays.

        • David Wilhelm

          IIRC BMW offers laser-based headlights, but they don’t emit the laser beam directly. Instead they illuminate a target that is responsive to the laser light’s frequency, converts the light to a different frequency, and then outputs that. The power-consumption-to-brightness ratio is more efficient than traditional lamps.

    • Rupert Jung

      >So impressive. They made a strap.

      And a 120 Hz (!) OLED with RGB panels. In 2016.
      In 2022 they’ll release a headset with built-in eye-tracking. And perfect black levels. Both absent in Meta’s Oculus Quest 2. And pretty much all other headsets today.

      • MeowMix

        the 120Hz panel was made by – Samsung
        the eye-tracking tech used in the PSVR2 is made by – Tobii
        the panel rumored to be used by the PSVR2 is again made by – Samsung

        please do list some tech actually made by Sony.

        I mean, at least the Quest Pro uses Meta-developed eye tracking. Meta’s is the only headset that doesn’t use Tobii eye tracking.

  • Brian

    Tech is just moving too fast. I just have a feeling some disruptive technology will wash away all their work and be superior.

    • dk

      the disruptive tech is all the layers listed in this headset ….they have never been implemented before

      • Brian

        See Apple Vision Pro…

        • dk

          the only way that is disruptive is by a trillion-dollar company getting into AR/VR …..there is no new technology implemented like varifocal ….and even without anything innovative it has to be $3500
          ….but I do like it, someone had to make a high-res (still not perfect) headset with powerful chips and integrate it into an existing ecosystem with a massive fanbase, and only they could make it and charge that much

  • Lulu Vi Britannia

    Can they stop talking about PROTOTYPES and advertise the headset they’re supposed to sell in the next 6 MONTHS? I love prototypes, I love tech, but this sounds like Meta desperately trying to get more investors.

    • David Wilhelm

      “We’ve given up on PCVR! Check out all these awesome new PCVR HMDs we are making in the lab and refuse to produce!”

      Joy

      • Lulu Vi Britannia

        I don’t know why you’ve brought up PCVR; this is off-topic.