Avegant Claims Newly Announced Display Tech is “a new method to create light fields”


Avegant, makers of the Glyph personal media HMD, are turning their attention to the AR space with what they say is a newly developed light field display for augmented reality, one which can display multiple objects at different focal planes simultaneously.

Most of today’s AR and VR headsets suffer from something called the vergence-accommodation conflict. In short, it’s an issue where biology meets display technology: a screen just inches from our eyes sends all of its light into our eyes at the same angle (whereas normally the angle changes based on how far away an object is), causing the lens in each eye to focus (called accommodation) only on light from that one distance. This comes into conflict with vergence, which is the relative angle between our two eyes as they rotate to fixate on the same object. In real life and in VR this angle is dynamic, and accommodation normally happens automatically at the same time; in most AR and VR displays today it can’t, because of the static angle of the incoming light.

For more detail, check out this primer:

Accommodation

Accommodation is the bending of the eye’s lens to focus light from objects at different depths. | Photo courtesy Pearson Scott Foresman

In the real world, to focus on a near object, the lens of your eye bends to focus the light from that object onto your retina, giving you a sharp view of the object. For an object that’s further away, the light is traveling at different angles into your eye and the lens again must bend to ensure the light is focused onto your retina. This is why, if you close one eye and focus on your finger a few inches from your face, the world behind your finger is blurry. Conversely, if you focus on the world behind your finger, your finger becomes blurry. This is called accommodation.
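To put rough numbers on this (an illustrative sketch, not anything from Avegant): accommodation demand is conventionally measured in diopters, the reciprocal of the focal distance in meters, so the demand ramps up sharply at close range:

```python
# Illustrative sketch only (not Avegant's numbers): accommodation
# demand in diopters is the reciprocal of focal distance in meters.

def accommodation_demand(distance_m: float) -> float:
    """Diopters of focusing power needed for an object at distance_m."""
    return 1.0 / distance_m

for d in (0.1, 0.5, 2.0, 100.0):
    print(f"object at {d:6.1f} m -> {accommodation_demand(d):5.2f} D")

# object at 0.1 m -> 10.00 D (a finger near your face)
# object at 100 m ->  0.01 D (effectively relaxed focus)
```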

Vergence

Vergence is the rotation of each eye to overlap each individual view into one aligned image. | Photo courtesy Fred Hsu (CC BY-SA 3.0)

Then there’s vergence, which is when each of your eyes rotates inward to ‘converge’ the separate views from each eye into one overlapping image. For very distant objects, your eyes are nearly parallel, because the distance between them is so small in comparison to the distance of the object (meaning each eye sees a nearly identical portion of the object). For very near objects, your eyes must rotate sharply inward to converge the image. You can see this too with the same finger trick as above; this time, using both eyes, hold your finger a few inches from your face and look at it. Notice that you see double images of objects far behind your finger. When you then look at those objects behind your finger, you see a double image of your finger instead.
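The geometry here is simple enough to compute. This back-of-the-envelope sketch assumes an average 64 mm interpupillary distance (IPD), a figure not taken from the article:

```python
import math

# Assumed average IPD of 64 mm; the vergence angle follows from
# simple trigonometry on the fixation distance.
IPD_M = 0.064

def vergence_angle_deg(distance_m: float, ipd_m: float = IPD_M) -> float:
    """Total angle (degrees) between the two eyes' lines of sight."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

for d in (0.1, 0.5, 2.0, 100.0):
    print(f"object at {d:6.1f} m -> {vergence_angle_deg(d):5.2f} deg")

# ~35.5 deg at 10 cm vs ~0.04 deg at 100 m: near objects force a
# sharp inward rotation; distant ones leave the eyes nearly parallel.
```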

The Conflict

With precise enough instruments, you could use either vergence or accommodation to tell exactly how far away the object a person is looking at is. But the thing is, both accommodation and vergence happen together in your eyes, automatically. And they don’t just happen at the same time; there’s a direct correlation between them, such that for any given measurement of vergence there’s a directly corresponding level of accommodation (and vice versa). Since you were a little baby, your brain and eyes have formed muscle memory to make these two things happen together, without thinking, any time you look at anything.
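That one-to-one correspondence is easy to see numerically. In this hypothetical sketch (again assuming a 64 mm IPD), a measured vergence angle pins down the fixation distance, which in turn predicts the accommodation the eyes ‘expect’:

```python
import math

# Hypothetical illustration of the coupling: a measured vergence angle
# implies a fixation distance, which implies an accommodation demand.
IPD_M = 0.064  # assumed 64 mm interpupillary distance

def distance_from_vergence(vergence_deg: float, ipd_m: float = IPD_M) -> float:
    """Fixation distance (meters) implied by a total vergence angle."""
    return (ipd_m / 2.0) / math.tan(math.radians(vergence_deg) / 2.0)

vergence = 7.33                       # degrees; roughly fixation at 0.5 m
d = distance_from_vergence(vergence)
print(f"vergence {vergence:.2f} deg -> {d:.2f} m -> {1.0 / d:.2f} D expected")
```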

But when it comes to most of today’s AR and VR headsets, vergence and accommodation are out of sync due to inherent limitations of the optical design.

In a basic AR or VR headset, there’s a display (which is, let’s say, 3″ away from your eye) which shows the virtual scene and a lens which focuses the light from the display onto your eye (just like the lens in your eye would normally focus the light from the world onto your retina). But since the display is a static distance from your eye, the light coming from all objects shown on that display is coming from the same distance. So even if there’s a virtual mountain five miles away and a coffee cup on a table five inches away, the light from both objects enters the eye at the same angle (which means your accommodation—the bending of the lens in your eye—never changes).
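You can see why with the thin-lens equation. The focal length below is a made-up value for the sketch, not any real headset’s optics; the point is that every pixel’s light ends up at the same fixed virtual distance:

```python
# Simplified thin-lens sketch with assumed numbers (not a real
# headset's optics): a display just inside the lens's focal length
# produces a virtual image at one fixed distance for every pixel.

def virtual_image_distance(focal_m: float, display_m: float) -> float:
    """Image distance from 1/f = 1/d_o + 1/d_i; negative = virtual image."""
    return 1.0 / (1.0 / focal_m - 1.0 / display_m)

f = 0.080   # assumed 80 mm focal-length lens
d = 0.076   # display ~3 inches (76 mm) from the lens, as in the text
v = virtual_image_distance(f, d)
print(f"virtual image at {abs(v):.2f} m")  # ~1.52 m, for the mountain
                                           # and the coffee cup alike
```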

That comes into conflict with vergence in such headsets, which (because we can show a different image to each eye) is variable. Being able to adjust the image independently for each eye, such that our eyes need to converge on objects at different depths, is essentially what gives today’s AR and VR headsets stereoscopy. But the most realistic (and arguably most comfortable) display we could create would eliminate the vergence-accommodation conflict and let the two work in sync, just like we’re used to in the real world.

Solving the vergence-accommodation conflict requires being able to change the angle of the incoming light (which is the same thing as changing the focus). That alone is not such a huge problem; after all, you could just move the display further away from your eyes to change the angle. The big challenge is allowing not just a dynamic change in focus, but simultaneous focus: just like in the real world, you might be looking at a near and a far object at the same time, each with a different focus. Avegant claims its new light field display technology can do both dynamic focal plane adjustment and simultaneous focal plane display.

Avegant Light Field design mockup

We’ve seen proof-of-concept devices before which can show a limited number (three or so) of discrete focal planes simultaneously, but that means you only have a near, mid, and far focal plane to work with. In real life, objects can exist in an infinite number of focal planes, which means that three is far from enough if we endeavor to make the ideal display.

Avegant CTO Edward Tang tells me that “all digital light fields have [discrete focal planes] as the analog light field gets transformed into a digital format,” but also says that their particular display is able to interpolate between them, offering a “continuous” dynamic focal plane as perceived by the viewer. The company also says that objects can be shown at varying focal planes simultaneously, which is essential for doing anything with the display that involves showing more than one object at a time.

Above: CGI representation of simultaneous display of varying focal planes. Note how the real hand and rover go out of focus together. This is an important part of making augmented objects feel like they really exist in the world.

Avegant hasn’t said how many focal planes can be shown simultaneously, or how many discrete planes there actually are.
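Avegant also hasn’t explained how its interpolation works, but one well-known approach from multifocal display research is linear depth blending: splitting each pixel’s brightness between the two nearest focal planes in proportion to where the virtual object sits between them. The sketch below (with invented plane positions) illustrates that general idea, not Avegant’s actual method:

```python
# Linear depth blending, a technique from multifocal display research,
# sketched as one plausible way to interpolate between discrete focal
# planes. Plane positions are invented for the example; Avegant hasn't
# disclosed its actual method or plane count.

PLANES_D = [0.2, 1.0, 3.0]  # focal plane positions in diopters, far -> near

def blend_weights(target_d: float, planes=PLANES_D) -> dict:
    """Split a pixel's intensity across the two planes bracketing target_d."""
    if target_d <= planes[0]:
        return {planes[0]: 1.0}
    if target_d >= planes[-1]:
        return {planes[-1]: 1.0}
    for lo, hi in zip(planes, planes[1:]):
        if lo <= target_d <= hi:
            t = (target_d - lo) / (hi - lo)
            return {lo: 1.0 - t, hi: t}

print(blend_weights(1.5))  # {1.0: 0.75, 3.0: 0.25} -- mostly the 1 D plane
```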

From a feature standpoint, this is similar to reports of the unique display that Magic Leap has developed but not yet shown publicly. Avegant’s announcement video for this new tech (heading this article) appears to invoke Magic Leap with solar system imagery that looks very similar to what Magic Leap has teased previously. A number of other companies are also working on displays which solve this issue.

SEE ALSO
'HOLOSCOPE' Headset Claims to Solve AR Display Hurdle with True Holography

Tang is being tight-lipped about just how the tech works, but tells me that “this is a new optic that we’ve developed that results in a new method to create light fields.”

So far the company is showing off a functioning prototype of their light field display (seen in the video), as well as a proof-of-concept headset that represents the form factor the company says could eventually be achieved.

We’re hoping to get our hands on the headset soon to see what impact the light field display makes, and to confirm other important details like field of view and resolution.



  • Super excited to try these out!

    • VR Geek

Really really glad to read they have a functioning prototype to demo, as the other light field tech companies seem shy, which has me wondering if they really have anything at all. Cannot wait to read your hands-on report, Ben!

      • benz145

        Interestingly enough, that’s a different @benlang:disqus lol! But yes, hands-on soon : )

        • Mei Ling

          Are you 100% sure?

          • Caven

            Considering benz145 is the one who wrote the article, I think he’d know if the other Ben Lang is him or not.

  • lovethetech

Here goes Magic Leap’s billion $

    • OgreTactics

      You mean Magic Vaporware?

  • OgreTactics

    Anybody left to believe that Magic Leap vaporware bullshit?

Also I’m skeptical of this until they have a technical explanation; I don’t trust companies with a patent circuit that somehow can’t explain the logical mechanics that make it work.

    • Mei Ling

Magic Leap’s technology works; however, they are having extreme difficulty scaling it down into a usable form factor, which is evident from them running into a couple of hurdles within the past year on the UX side of things. The idea is not to show off anything until the device resembles something that you could wear on your head without looking like somebody whose head is about to collapse.

      • OgreTactics

Have you tested it? I don’t think the size has anything to do with the actual science of what’s possible with the various light field lens/projection techniques.

      • yag

I’m guessing they also have a hard time scaling down the price?

  • Mike

    It seems to me that the vergence problem can be solved with eye tracking alone. Since the vergence problem is the result of the pupil positions changing, why not just continuously adjust the in-game camera for each eye based on the eye’s position?

    • piecutter2

Sorry, it doesn’t work that way. Just making an object in the foreground that you’re not looking at blurry doesn’t change the fact that your eye is still focused on the same depth plane across the entire image. If your eye then tries to look at that object, it will try to refocus as the tracking tries to make it coherent again, by which time your eye is focused at the wrong depth. Trust me, you will rip the HMD off and rub your eyes vigorously.

      • Paul Schuyler

How about if you took this light field signal, and steered it across a reflective surface using eye tracking, like a spotlight following your fovea? Maybe across a micromirror array (to solve the vergence), like the Glyph has, in a closed (i.e. not translucent) HMD?

Avegant has good, real tech…they’re no gimmick. But I don’t understand why every company going for MR seeks out a translucent solution. If you look at the image above, you can see all kinds of secondary reflections in that prototype optic. Surfaces like that reflect and refract light all over the place. If you can see it outside, it’s going to be that way on the inside in some form. The computer is going to have to manage all of that, interpret what is real, and across changing lighting conditions! It seems every company is an electronics company trying to solve basic optics problems.

Electronics has evolved leaps and bounds, whereas optics crawls along at a snail’s pace. A closed HMD with pass-through cameras seems a smarter play for mixed reality. Less liability, too.

  • Smokey_the_Bear

    looks cool.

  • beestee

    It is amazing to me that Microsoft’s HoloLens has been available for almost a year. The lack of competition is telling in how difficult it must be to get this technology bundled up for consumers.

  • Zobeid

    This is exactly what us old-timers don’t need, when our eyes just can’t accommodate any more. In that respect, today’s VR headsets are ideal (better than real life, even), since everything appears at the same focal distance, and I can adjust that to something my eyes can handle. No need for bifocals or progressive lenses in VR.