New research from Kent State University and Meta Reality Labs has demonstrated large dynamic focus liquid crystal lenses which could be used to create varifocal VR headsets.

Vergence-Accommodation Conflict in a Nutshell

In the VR R&D space, one of the hot topics is finding a practical solution to the so-called vergence-accommodation conflict (VAC). All consumer VR headsets on the market to date render imagery using stereoscopy, which creates a 3D effect that supports the vergence reflex of a pair of eyes (when they converge on an object to form a stereo image), but not the accommodation reflex of an individual eye (when the lens of the eye changes shape to focus light at different depths).


In the real world these two reflexes always work in tandem, but in VR they become disconnected: the eyes continue to converge where needed, but their accommodation remains static because all of the light is coming from the same distance (the display). Researchers in the field say VAC can cause eye strain, make it difficult to focus on close imagery, and may even limit visual immersion.
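To put a number on that disconnect, here's a minimal sketch (assuming a hypothetical fixed display focal distance of 1.5 m; the real figure varies by headset) expressing both demands in diopters:

```python
def vac_mismatch_diopters(virtual_dist_m, display_focal_m=1.5):
    """Vergence demand follows the virtual object; accommodation demand is
    pinned at the headset's fixed display focal distance. Optical demands
    are reciprocals of distance (diopters, 1/m); the conflict is their gap."""
    vergence_demand = 1.0 / virtual_dist_m        # where the eyes converge
    accommodation_demand = 1.0 / display_focal_m  # where the light actually focuses
    return vergence_demand - accommodation_demand

# An object rendered 0.3 m away in a headset focused at 1.5 m leaves
# roughly 2.7 D of conflict; an object at exactly 1.5 m leaves none.
print(round(vac_mismatch_diopters(0.3), 2))   # 2.67
print(round(vac_mismatch_diopters(1.5), 2))   # 0.0
```

The closer the virtual object, the larger the mismatch, which is why close-up imagery is where VAC discomfort is most often reported.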

Seeking a Solution

There have been plenty of experiments with technologies that could be used in varifocal headsets that correctly support both vergence and accommodation, for instance holographic displays and multiple focal planes. But it seems none have cracked the code on a practical, cost-effective, and mass-producible solution to VAC.

Another potential solution to VAC is dynamic focus liquid crystal (LC) lenses, which change focal length as the voltage applied to them is adjusted. According to a Kent State University graduate student project with funding and participation from Meta Reality Labs, such lenses have been demonstrated previously, but mostly at very small sizes because the switching time (how quickly focus can be changed) grows significantly as lens size increases.

Image courtesy Bhowmick et al., SID Display Week

To reach the size of dynamic focus lens that you’d want if you were to build it into a contemporary VR headset—while keeping switching time low enough—the researchers have devised a large dynamic focus LC lens with a series of ‘phase resets’, which they compare to the rings used in a Fresnel lens. Instead of segmenting the lens in order to reduce its width (as with Fresnel), the phase reset segments are powered separately from one another so the liquid crystals within each segment can still switch quickly enough to be practical for use in a varifocal headset.
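For intuition about where such phase resets land, here's a rough sketch with illustrative numbers (the wavelength, waves-per-segment, and aperture values are assumptions, not figures from the paper) for an ideal thin lens whose parabolic phase profile is wrapped every N waves of optical path:

```python
import math

def reset_radii_mm(focal_m, waves_per_segment=30, wavelength_nm=550, aperture_mm=25):
    """Radii at which the ideal parabolic optical path difference
    OPD(r) = r^2 / (2f) crosses another 'waves_per_segment' wavelengths,
    i.e. where a phase-wrapped lens would place a reset."""
    lam_m = wavelength_nm * 1e-9
    radii, n = [], 1
    while True:
        r_mm = 1000 * math.sqrt(2 * n * waves_per_segment * lam_m * focal_m)
        if r_mm > aperture_mm:
            return radii
        radii.append(round(r_mm, 1))
        n += 1

# For a 0.80 D (f = 1.25 m) state across a 25 mm radius (5 cm lens),
# the reset rings crowd closer together toward the edge of the aperture:
print(reset_radii_mm(1.25))
```

The decreasing spacing between rings toward the edge is the same geometry that makes the outer zones of a Fresnel lens finer than the inner ones.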

A Large, Experimental Lens

In new research presented at the SID Display Week 2022 conference, the researchers characterized a 5cm dynamic focus LC lens to measure its capabilities and identify strengths and weaknesses.

On the ‘strengths’ side, the researchers show that the dynamic focus lens achieves high image quality toward the center of the lens while supporting a dynamic focus range from -0.80 D to +0.80 D and a switching time under 500ms.
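To translate that ±0.80 D range into focal distances, recall that the diopter powers of stacked thin optics approximately add, and distance is the reciprocal of total power. A quick sketch (the 0.75 m base focal distance is a hypothetical assumption, not a figure from the paper):

```python
import math

def apparent_focus_m(base_power_d, lens_power_d):
    """Apparent focal distance for a thin dynamic lens stacked with a base
    optic: powers in diopters (1/m) add, and distance is the reciprocal.
    A non-positive total means the image sits at or beyond optical infinity."""
    total_d = base_power_d + lens_power_d
    return math.inf if total_d <= 0 else 1.0 / total_d

base = 1.0 / 0.75  # hypothetical base optic focused at 0.75 m (~1.33 D)
for p in (-0.80, 0.0, +0.80):
    print(f"{p:+.2f} D -> {apparent_focus_m(base, p):.2f} m")
```

With those assumed numbers, ±0.80 D sweeps the focal plane from roughly half a meter out to nearly two meters, which covers much of the depth range where accommodation matters most.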

For reference, in a 90Hz headset a new frame is shown to the user every ~11ms (90 times per second), while a 500ms switching time is the equivalent of 2Hz (twice per second). While that’s much slower than the framerate of the headset, it may be fast enough in practice considering the rate at which the eye itself adjusts to a new focal distance. Further, the researchers say the switching time can be reduced by stacking multiple lenses.
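The arithmetic above, for anyone who wants to check it:

```python
frame_time_ms = 1000 / 90            # ~11.1 ms between frames at 90 Hz
switch_time_ms = 500                 # reported lens switching time
switch_rate_hz = 1000 / switch_time_ms       # 2 focus changes per second
frames_per_switch = switch_time_ms / frame_time_ms  # frames shown during one change

print(f"{frame_time_ms:.1f} ms/frame, {switch_rate_hz:.0f} Hz focus rate")
print(f"~{frames_per_switch:.0f} frames elapse during one focus change")
```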

Image courtesy Bhowmick et al., SID Display Week

On the ‘weaknesses’ side, the researchers find that the dynamic focus LC lens suffers from a reduction in image quality as the view approaches the edge of the lens due to the phase reset segments—similar in concept to the light scattering due to the ridges in a Fresnel lens. The presented work also explores a masking technique designed to reduce these artifacts.

Figures A–F are captures of images through the dynamic focus LC lens, increasingly off-axis from center, starting with 0° and going to 45° | Image courtesy Bhowmick et al., SID Display Week

Ultimately, the researchers conclude, the experimental dynamic focus LC lens offers “possibly acceptable [image quality] values […] within a gaze angle of about 30°,” which is fairly similar to the image quality falloff of many VR headsets with Fresnel optics today.

To actually build a varifocal headset from this technology, the researchers say the dynamic focus LC lens would be used in conjunction with a traditional lens to achieve the optical pipeline needed in a VR headset. Precise eye-tracking is also necessary so the system knows where the user is looking and thus how to adjust the focus of the lens correctly for that depth.
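As a hypothetical sketch (none of these names or numbers come from the paper), the per-update control step of such an eye-tracked varifocal system might look like:

```python
def lens_power_for_gaze(gaze_depth_m, base_power_d=1.0 / 0.75, limit_d=0.80):
    """Pick the dynamic-lens power that moves the focal plane to the
    eye-tracked gaze depth, clamped to the lens's reported +/-0.80 D range.
    base_power_d is an assumed fixed power for the traditional lens stack."""
    target_demand_d = 1.0 / gaze_depth_m
    return max(-limit_d, min(limit_d, target_demand_d - base_power_d))

print(lens_power_for_gaze(0.75))   # 0.0: gaze already at the base focal plane
print(lens_power_for_gaze(0.30))   # 0.8: clamped, object closer than the range allows
print(lens_power_for_gaze(10.0))   # -0.8: clamped toward far focus
```

In a real system this step would also need to account for the eye tracker's latency and the lens's 500ms settling time, which is why precise, high-speed eye tracking is a prerequisite.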

The work in this paper presents measurement methods and benchmarks showing the performance of the lens, which future researchers can use to test their own work against, or to identify improvements to the demonstrated design.

The full paper has not yet been published, but it was presented by its lead author, Amit Kumar Bhowmick, at SID Display Week 2022; it further credits Afsoon Jamali, Douglas Bryant, Sandro Pintz, and Philip J. Bos, spanning Kent State University and Meta Reality Labs.

Continue on Page 2: What About Half Dome 3? »




Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • disviq

    Yes! Hopefully in consumer products within 10 years…

  • xyzs

    Yeah they have technologies from the future that can achieve the toughest technical dreams…blah blah blah.

    Meanwhile, halfway through 2022, Meta’s only VR product:
    Fresnel lens, 90 degree FOV, single LCD panel, not even 2k per eye, half a kilo in weight, with integrated graphics from the PS2~PS3 era.

    It’s been 10 years now that we’ve been told amazing things are coming soon, and the hardware is still not even better than the Oculus dev kits…

    • eljik

      The Quest 2 is miles ahead of DK1. The DK1 had a 720p screen and only 3DOF. Also, no one promised you anything. Just because a prototype exists doesn’t mean that it’s financially viable for a consumer product.

      • xyzs

        If you take into account the years and the billions of dollars invested, it’s miles ahead of nothing. It’s still the very same base tech: only the evolution of external components, not Meta’s own work, allowed these barely superior specs.

        • Hugh Bitzer

          Quest 2 has almost 8 times the resolution of the DK1, I wouldn’t call that barely superior.

    • Jistuce

      It does seem like we’re going backwards with technology in a lot of ways. It is more frustrating in a young market like this one.
      Heck, when DK1 shipped we listed actual resolutions (640×800 per eye!) instead of just saying “2k” and pretending that meant anything!

      In fairness, though… DevKit 1 was 60 Hz, 3-axis tracking, with no tracked controller support or internal computing, and two eyes running close to VGA resolution. The Quest 2 is far ahead of it, even if it isn’t where we want it to be. It is even ahead of the Quest 1 and Rift, both of which were FAR higher in resolution than DevKit 1.

      (Q1 is 1440×1600 per eye, Rift is 1080×1200. Though both have the whole “pentile” caveat, and arguably should be scored as somewhat lower resolution than those numbers suggest.)

      Incidentally, since we’re complaining about resolution… 1832×1920 per eye is the Quest 2 resolution, and that’s as close to 2 kilopixels wide as the 3840×2160 of “4K” is to 4 kilopixels wide.

      By the standards of the current day, I’d say the Quest 2 is definable as 2k per eye, though it is actually far more pixels than the measure implies due to the squareness of the view. A single Q2 eye is slightly less than half the pixels of a single “4K” display. If it was, as implied, half the horizontal AND vertical resolution, it’d be an even quarter the pixels.

      And that’s why we should use actual resolutions instead of picking a random number and putting a K on the end.
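      For anyone who wants to verify that arithmetic:

```python
quest2_eye = 1832 * 1920          # 3,517,440 pixels per eye
uhd = 3840 * 2160                 # 8,294,400 pixels ("4K" UHD)
print(quest2_eye / uhd)           # ~0.42: slightly under half of a "4K" panel
print((3840 // 2) * (2160 // 2) / uhd)  # 0.25: a true half-res-per-axis panel
```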

      • d0x360

        What about p, can we just use p? It’s 1800p! What’s 1800p? That’s only 1 axis… Well I’m not telling. I’ll tell you it’s 3200×1800, but I have no idea why we do the k and p without giving proper resolutions, other than maybe the games don’t necessarily run at the max panel resolution.

        • Jistuce

          While I never did like the computer industry adopting the HDTV resolution names… at least 1080p IS 1080 progressive-scan lines, and there’s a reasonable train of logic to get to these names. The K numbers are just completely made up, especially once people started trying to apply them to resolutions lower than UHD… which should’ve properly been named 2160p.

          Historically, the 480/720/1080 i/p resolutions were inherited from the dawn of HDTV, which to my understanding built upon common terminology in analog TV. And since in analog TV, there’s only discrete “pixels” in the vertical axis, and HDTV specified two distinct aspect ratios with different horizontal widths but similar vertical heights… There IS a logic there.

          4K comes from marketing latching onto early videophile excitement, as they hoped that UHD TV was going to adopt the DCI 4K standard used in theaters (which is in fact exactly 4096 pixels wide). Some marketing guys decided “yeah, let’s call this 4K, the videophiles have a good idea with the name, it makes us sound LOTS bigger than 1080p! What’s a K mean, anyways?”

        • Bjørn Konestabo

          Interlaced displays are a rarity these days. CRTs are usually only found in museums, so we don’t really need the p to distinguish from i.

      • Charles

        I’m still waiting for a headset better than the Odyssey+ that still has respectable contrast / black levels and binocular overlap. Everything since the O+ in 2018 has been a “sidegrade” at best.

    • d0x360

      I had a DK1 and a DK2… and a Rift and a Rift S. The Quest 2 has better image quality than all of them, even if the FOV is a bit narrower, and that’s only compared to the Rift & S.

      The DK1 had absolutely horrible image quality. It was a tablet inside the housing, and aside from the fact that it constantly got dusty (which was maddening), it was also so low res that you couldn’t see much of anything, and it had a huge breakout box so moving around was awful.

      The Rift had major screen door effect issues; it was lower res, with better lenses and an OLED, which is something I wish they would go back to. The S was also lower res but had better lenses, except the LCD panels were 80hz; you could see pixels very easily, and if the image was dark at all you couldn’t see ANYTHING.

      Playing Vader Immortal on the S was a waste of time. 70% of the game was dark gray and you couldn’t see anything.

      So the Quest 2 might not be perfect, but it’s better than either DK and has elements that are better than both Rifts.

      The visuals are also far better than PS2 level. They are essentially just below 360 level… So PS3, except at a significantly higher frame rate. I don’t remember my PS3 ever hitting 120fps, and it also ran at about half the resolution. Most games weren’t even 720p.

      Now I’m a high end PC gamer and I was VERY AGAINST the Quest & Quest 2 because I figured it meant the end of Oculus competing at the high end of PC VR.

      Hopefully that doesn’t turn out to be the case, and with the right FOV, lenses, and even 2k-per-eye 120hz OLED displays it could be plenty good hooked up to a PC. Hell, I run the Quest 2 on PC over WiFi and image quality is perfect… To hell with the wire. Have a decent enough router and you’re good to go.

      Plus if they are using eye tracking and foveated rendering in the next HMD they could push the visuals quite a bit further than the Quest 2, even on the same SOC. If 80% of your rendering budget is only covering 20% of the screen then you can take advantage of that to boost visuals significantly. It’s a feature PC VR should have had years ago, but only companies that always have issues with compatibility are making higher-end HMDs.

      No ty. I like it when my stuff works. Why spend $2000 on an HMD that has issues with everything?

      • namekuseijin

        > PS3 except at a significantly higher frame rate

        not quite. some close details, like guns in your hands, look PS3-era, but mostly it’s empty rooms with bare PS2-like detailing… we have RE4 and Doom 3, but not anything akin to Uncharted, Killzone or Far Cry…

    • namekuseijin

      you forgot it no longer requires being tethered to a jurassic beige box in your basement/dungeon. those home mainframes belong to the days of the PS2/PS3 too…

      by the time displays like these are ready for prime time, mobile chips will be handling current graphics fine… heck, physics-based materials gave modern graphics its current edge from the PS4 era til this day, and current cutting-edge mobile chips are well entering this era…

      • Jistuce

        The box plugged into the wall will always have better performance than the GameBoy, because it doesn’t have to worry about power limits or heat dissipation or weight. Everyone’s always saying “in the next couple of years portable/mobile devices will be more powerful than the best we have now!”, and they always miss that the portable chips aren’t the only ones getting better.
        Yes, in a few years, mobile chips will probably handle high-end current graphics just fine, but that will be low-end at that point.

        Certainly, the GameBoy approach to VR has advantages, but there’s always going to be a place for being connected to a larger system with room for more powerful, hotter parts.

        Also, I haven’t had a BEIGE box in many years. I think my last beige case was a Thunderbird.

        • namekuseijin

          oil prices skyrocketing and lil VR boy thinking they don’t need to worry about power limits…

          > I haven’t had a BEIGE box in many years

          I was joking. but tbh, I’d rather have one of those old beige boxes than current atrocious rainbow-colored neon gaymer pc boxes…

          • Jistuce

            You are thinking on an entirely different scale than I am. Technically, yes, PCVR does have to worry about power limits, because a standard electric outlet in the US is rated for roughly 1,800 watts, and at the top end the hardware is nudging up against that limit right now.

            But the standalone computer offers more room for an effective cooling solution that isn’t going to put weight cantilevered out on the user’s face, and lacks battery life concerns. Or skin burns, for that matter.

  • Very interesting project. I personally think that 500ms is a bit long as a switching time, because I think my eyes are faster in changing focus, but this is a research project, so there’s plenty of time to improve on this

    • Jistuce

      I do wonder what the speed for a “traditional” lens of this type was, if half a second is considered fast.

      I’m not actually sure how long my eyes take to change focus, and I know my brain likes to lie about imperfections in the vision system, but that does sound wrong to me.
      And search engines just want to tell me “how many FPS” I can see. Useless.

    • Kim from Texas

      Based on my limited Internet research, it seems like your eyes change focus in 50 ms (10 times faster than this solution).

      • kontis

        Change is one thing, perception is another.
        There is an Nvidia paper where researchers were surprised that foveated rendering lag (eye tracking + update of the sharp rendering area) wasn’t noticeable below about 40 ms, even though for head tracking you have to get below 20 ms to stop noticing a difference. The reason was that the eyes need some time before they can really see sharply again after a saccade to a new point of focus.

  • David

    I bought a VR headset just to see what the hype is about. It was a fairly decent experience, I forget the model, but something like 1600×1440, 90hz, 110 FoV, camera tracking (no base station) – so fairly mediocre. After a month, I put the headset aside and have watched all metrics keep getting better. But I will not be getting back to VR until I am comfortably able to read a book in VR – and that means at least some attempt at solving the VAC problem. Without a VAC solution, VR is akin to black/white screens and 3D graphics before acceleration.

  • JakeDS

    Small correction: without high speed eye tracking, most VR headsets can’t simulate vergence either. Their virtual cameras point straight forward and don’t converge on a point. It’s why small objects up close in VR are hard to converge on. The Z Space display makes close objects look amazingly real by converging the virtual cameras.

  • Kim from Texas

    Tilt5 has fixed the VAC problem by turning the problem inside out where the image is projected onto a surface in the outside world (although officially Tilt5 is AR and not VR).

    • benz145

      It’s not exactly a solution but the extent of the VAC in Tilt5 is less than in most current VR headsets.

    • kontis

      Rather it’s simply not having the problem in the first place by not having an actual display on the headset.

      I also don’t have a VAC problem when I watch films on my TV, but I wouldn’t say TVs “fixed the problem”.

  • Ron Padzensky

    Further, the researchers say the switching time can be increased by stacking multiple lenses.

    Shouldn’t this read “decreased” or “improved” rather than “Increased”?

  • Powerchimp

    All this R&D so Zuckerberg can shove ads and misinformation directly into our brains.

    Facebook’s business model was never games or interactive fun.

  • Andrew Jakobs

    I would like to see regular glasses using these types of dynamic lenses, so you don’t have to have multiple pairs of glasses for reading, the computer, and distance. Yeah, you have multifocal glasses, but those just have regions where you need to look for the needed distance.

    • Kim from Texas

      So like the Deep Optics company, which had a Kickstarter? (PixelOptics was also trying to create this with their EmPower product before they went out of business.)