New Video Shows Off CREAL’s Latest Foveated Light-field VR Headset

CREAL, a company building light-field display technology for AR and VR headsets, has revealed a new through-the-lens video showing off the performance of its latest VR headset prototype. The video clearly demonstrates the ability to focus at arbitrary distances, as well as the high resolution of the foveated region. The company also says the rendering tech that powers the headset is “approaching the equivalent of [contemporary] VR headsets.”

Earlier this year Creal offered the first glimpse of AR and VR headset prototypes that are based on the company’s light-field displays.

Much different from the displays used in VR and AR headsets today, light-field displays generate an image that accurately represents how we see light from the real world. Specifically, light-field displays support both vergence and accommodation, the two focus mechanisms of the human visual system. Most headsets on the market today support only vergence (stereo overlap) but not accommodation (individual eye focus), which means the imagery is stuck at a fixed focal depth. With a light-field display you can focus at any depth, just like in the real world.

While Creal doesn’t plan to build its own headsets, the company has created prototypes to showcase its technology with the hopes that other companies will opt to incorporate it into their headsets.

CREAL’s VR headset prototype | Image courtesy CREAL

We’ve seen demonstrations of Creal’s tech before, but a newly published video really highlights the light-field display’s continuous focus and the foveated arrangement.

Creal’s prototype VR headset uses a foveated architecture (two overlapping displays per eye): a ‘near retina resolution’ light-field display which covers the central 30° of the field of view, and a larger, lower-resolution display (1,600 × 1,440, non-light-field) which fills the peripheral field of view out to 100°.
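
For a rough sense of what those specs imply, here is a back-of-the-envelope angular-resolution estimate. This is a sketch only: the central display’s pixel count isn’t published, and the peripheral figures are treated as a simple horizontal pixels-per-degree ratio.

# Rough pixels-per-degree estimate for the peripheral (non-light-field) display.
# Assumes the 1,600-pixel horizontal resolution is spread evenly across the
# ~100-degree horizontal field of view (a simplification for illustration).
peripheral_pixels = 1600
peripheral_fov_deg = 100
print(f"Peripheral: ~{peripheral_pixels / peripheral_fov_deg:.0f} pixels per degree")

# 'Retinal resolution' is commonly cited as roughly 60 pixels per degree, the
# ballpark implied by Creal's 'near retina resolution' claim for the central 30 degrees.
print("Central light-field region: claimed to approach ~60 pixels per degree")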

In the through-the-lens video we can clearly see the focus shifting from one part of the scene to another. Creal says the change in focus is happening entirely in the camera that’s capturing the scene. While some approaches to varifocal displays use eye-tracking to continuously adjust the display’s focal depth based on where the user is looking, a light-field has the depth of the scene ‘baked in’, which means the camera (just like your eye) is able to focus at any arbitrary depth without any eye-tracking trickery.
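
To make that distinction concrete, here is a minimal Python sketch with hypothetical stand-in objects (not Creal’s or any vendor’s actual pipeline) contrasting the control flow of an eye-tracked varifocal display with that of a light-field display:

# Hypothetical stand-in objects; only the control-flow difference matters here.
class FakeEyeTracker:
    def estimate_fixation_depth_m(self):
        return 1.5  # pretend the user is looking at something 1.5 m away

class FakeVarifocalDisplay:
    def set_focal_plane(self, depth_m):
        print(f"varifocal: optics driven to {depth_m} m focus (requires eye tracking)")

class FakeLightFieldDisplay:
    viewpoints = range(8)  # a handful of sub-views; real counts are much higher
    def show(self, views):
        print(f"light-field: {len(views)} views emitted at once; eye or camera focuses freely")

# Eye-tracked varifocal: one focal depth at a time, chosen from gaze data each frame.
FakeVarifocalDisplay().set_focal_plane(FakeEyeTracker().estimate_fixation_depth_m())

# Light-field: scene depth is 'baked in' across many simultaneous views; no tracking needed.
FakeLightFieldDisplay().show([f"view_{v}" for v in FakeLightFieldDisplay.viewpoints])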

In the video we can also see that the central part of the display (the light-field portion) is quite sharp compared to the rest. Creal says this portion of the display is “now approaching retinal resolution,” and also running at 240Hz.

And while you might expect that rendering the views needed to power the headset’s displays would be very costly (largely due to the need to generate the light-field), the company says its rendering tech is steadily improving and “approaching the equivalent of classical stereo rendering of other VR headsets,” though we’re awaiting more specifics.

While Creal’s current VR headset prototype is very bulky, the company expects it will be able to further shrink its light-field display tech into something more reasonable by the end of 2022. The company is also adapting the tech for AR and believes it can be miniaturized to fit into compact AR glasses.

  • Till Eulenspiegel

    That’s an ugly headset, looks like it’s made from Lego.

    • Bob

      Until you put it on and completely forget about what it looks like ;)

    • xyzs

      it’s a prototype design…….

    • Hivemind9000

      Your avatar is appropriate.

  • Adrian Meredith

    Very cool indeed

  • wheeler

    Looking forward to you guys getting your hands on their next prototype. Sounds amazing. But also not sure about that transition between the central light-field area and the fixed-focus periphery.

  • Paul Schuyler

    As fast as things have developed for the industry, it’s just not (visually) comfortable to be in conventional VR headsets for long periods of time. VR’s future entirely depends on this type of natural-eye hardware advancement in my view. Nice to see they’re making progress!

    • wheeler

      Couldn’t agree more. When I started out with VR several years ago, I learned to put up with it just to experience VR. But over the long term it’s been getting harder and harder to tolerate this kind of thing. In terms of what one has to “tolerate” for a casual entertainment device, the bar is very high (and probably more so with a productivity device). Especially when you’re past the novelty phase and looking for the more fundamental benefits of VR. I think what we have now is a stopgap. This is not to say that this stopgap wasn’t worthwhile or an extraordinary accomplishment on its own.

  • kontis

    Amazing work!

  • okExpression

    I did a review on the video here. To summarize, they seem to be hiding obvious issues with their tech if used for VR. For AR, sure, the artifacts are less jarring: https://www.reddit.com/r/AR_MR_XR/comments/mikiid/vr_on_steroids_creals_new_demo_shows_a_central_30/

    Here is a review: https://imgur.com/a/jvY5UYu

    Green arrows show the direction of the camera movement, artifacts are highlighted in red.

    I’ve worked with DLP for a decade; it’s a marvelous technology that has made bright home cinema affordable, but for VR it is a dead end, alongside LCoS. When you generate colors or even shades time-sequentially, the trailing artifacts can’t be avoided even at several thousand hertz. At these very slow camera movements it’s not as noticeable, but any realistic head and eye movement will make the color breakup, not just the sub-frame breakup, unbearable.
    The same is true of FLCoS.
    Blur from slow exposure settings on the camera would look different from this.

    • Tomas S

      Thank you for this analysis. The camera actually runs at 30fps, mainly to avoid flicker of the flat screen (which runs at 90Hz). It means that each video frame exposed >200 light-field frames and 6 content frames. The effect the image shows is all about the camera. Notice that the smeared image is white (no color split). Indeed, if the time-sequential light-field has one super big advantage, besides the fact that it is a light-field, it is the super high framerate.

      • okExpression

        I’m doubtful it’s the camera: the camera does not produce a trailing image when the framerate is low or the exposure is high; instead we would see motion blur. Nor can it show color separation as well as the human eye at just 30 fps. If what you described were due to the camera, we would notice the same issue in the low-res periphery during these very slow camera rotations, but we do not.

        Of course with a headset that large you can fit 3 microdisplays, one per color channel, to eliminate color breakup as well, but you say that’s not the case.

        It’s basic math that this can’t work without motion artifacts and at the same time without full frame persistence. The FLCoS or DLP frames are monochromatic, stated as 6000Hz.
        With 6000Hz monochromatic frames, if you need 1ms frame persistence, at 240Hz imaging you can only have a 1ms frame per 4.17ms interval, or around (6000/3)/240 = ~8 subframes per frame at the claimed 240Hz framerate. 8 subframes are enough to do RGB at just 2-bit depth, not even 24-bit, let alone with many light fields, so this is clearly not low-persistence.
        Even with a hypothetical tech to do 6000Hz 24-bit RGB we would get only 8 perspectives/light fields at a 240Hz frame rate, which won’t be enough. On the other hand, full persistence would allow 3x more, but still with motion artifacts.
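
        (For readers following along, here is the arithmetic above spelled out in a short Python sketch. It simply reproduces the figures as stated in this thread, the 6,000 Hz bit-plane rate and 240 Hz content rate, without independently verifying them.)

# Spells out the subframe-budget arithmetic quoted above (figures as stated
# in the thread, not independently verified).
bitplane_rate_hz = 6000   # monochromatic DLP/FLCoS bit-plane rate cited above
content_rate_hz = 240     # claimed content frame rate

frame_interval_ms = 1000 / content_rate_hz                      # ~4.17 ms per content frame
subframes_per_frame = (bitplane_rate_hz / 3) / content_rate_hz  # (6000/3)/240 = ~8

print(f"content frame interval: {frame_interval_ms:.2f} ms")
print(f"RGB subframes per content frame: ~{subframes_per_frame:.1f}")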

        One more point: without eye tracking, vergence is incorrect for the peripheral display, which means there will be noticeable frame breakup and distortion due to vergence. Eye tracking may solve this, but the point is not to use it.

        It’s fine to market your product, but you shouldn’t market it for more uses than it’s practical for, because the reviews in the end will backfire. Time-sequential artifacts are not a big deal for AR, so why market it for VR as well when it has clear issues?

        • Tomas S

          No problem, doubts are always a healthy position. And thank you for the compliment at the end. To us it actually nicely proves that we do incredible stuff :-).
          To shorten it: discussing the framerate issue is like discussing whether a Ferrari isn’t too slow (instead of its fuel consumption, space, or price). The high framerate is among the biggest advantages of this concept.
          It’s hard to say what the camera frames show (there might be some stroboscopic effect, plus video compression, plus YouTube… I don’t see any problem), but for the camera the image stream is practically continuous. The eye has no issue with it at all. Indeed, practically every other conventional characteristic is much more worthy of questioning.

          About the “incorrect vergence”: I don’t know if this is clear, but the peripheral screen is only optically flat; it still provides a stereoscopic effect. There are two images, one for each eye, like in every 3D headset.
          If you meant something else, then I probably don’t understand, but what I can say is that there is no big perceived conflict between the light-field and the periphery apart from the different resolution (the high-resolution part of our vision, however, spends most of its time within the light-field). I know it may sound strange now, but the blending of the light-field with the flat periphery is probably the second-best conventional characteristic (although we can compare practically only with Varjo); it is actually really good.

          As I mentioned, there are many weak points worthy of rightful questioning. But I am glad to see that we probably solve the most important issues best.

          • okExpression

            Well it’s indeed a pointless discussion if you are going to claim that in your experience the time-sequential nature of DLP isn’t an issue at 240Hz while I claim the opposite. I’ve done my own tests at 240Hz and marked it as unusable.

            Vergence: your peripheral image won’t shift horizontally as vergence happens; only the middle light-field 30 degrees will. This separation will be noticeable at the blending region. A single camera can’t capture it. Accommodation won’t be as noticeable when viewing the middle of the display, as your vision in the periphery is blurry anyway, but vergence will.

            For sure every single technology has pros and cons; none works for everybody or perfectly. But here you claim it doesn’t have the common issues DLP or FLCoS would have for VR regarding time-multiplexed artifacts and persistence, and we will have to go with only your claim until a comprehensive review with head tracking. A 30fps camera video definitely isn’t enough for me.

          • Tomas S

            Sorry for not being more concrete. I don’t really want to be unpleasant or avoid important discussion, but this is really not relevant. Unfortunately only I know it, while you may not be able to see it similarly until you try. It is difficult today, but you are welcome to see it if you can. I feel completely confident guaranteeing you that you won’t notice any of the problems you mention as a problem.

            We did extensive tests of subjective experience, and a long list of problems, from big ones to the smallest ones, appeared. Here we are talking about issues which are not even at the end of the list. They did not make it to the list at all. In fact it is even the opposite of a problem – one of the strongest characteristics of the headset.

            I am still not sure if I understand the vergence problem. Do you mean ocular parallax (the perceived displacement between close and far objects when the eye moves in the eye socket)? The ocular parallax is correct in the light-field and absent at the periphery. This is true. And thank you for pointing to this important effect, but any conflict did not make it to the list either. It is unnoticeable.

            (Of course I cannot comment on what you have tested and experienced with some other system, apart from the fact that it couldn’t be equivalent to our system, because ours is unique, and that it is really questionable to make such projections. The content in our case refreshes at 240Hz only because it is convenient for rendering; it does not have much to do with the color sequence. Colors cycle at 2kHz.)

          • okExpression

            Okay, so “everything is not a problem/noticeable”. Colors with FLCoS and DLP always cycle at several kHz; yours is no different. I think there’s nothing left to discuss, I’ll wait for some real reviews.

          • Tomas S

            Just to refine the actual quote, so as not to leave a wrong conclusion: “long list of problems, from big ones to the smallest ones”

          • okExpression

            Actually:

            “this is really not relevant… you won’t notice any of the problems you mention as a problem… They did not make it to the list at all. In fact it is even the opposite of a problem”

            yeah…

          • Tomas Kubes

            I am still not sure if there is not a misunderstanding emanating from how the fast SLMs are most commonly driven (considering that the fast ones are all binary, only being able to display light or no light and nothing in between).
            Given your Glyph running at 120 Hz using DLP, what does it really do? You have 8.33 ms per frame, right? How is the frame composed? As far as I know, all off-the-shelf drivers for SLMs do it 8-bit R, 8-bit G, 8-bit B. This means that for 2.77 ms the lamp shines red on the SLM, and for 1.38 ms the SLM displays the red MSB (most significant bit), then for 0.69 ms the SLM shows the 2nd bit … until for ~2 µs the SLM displays the LSB (least significant bit). Then you switch the shining lamp to green and the whole process is repeated. Just for the sake of clarity, the actual shine time is shorter, since there is some dead time when the image on the SLM is changing; this varies significantly according to technology.
            We can agree that since you shine one color for 2.77 ms and then the color goes away (is not persistent) and you shine other colors, you might perceive splitting due to this. Is this what you mean? I guess we can agree on this.
            It is not clear to me why the SLMs are driven this way; perhaps you might know if you have worked with them. I would guess it is some legacy, since all other displays (OLED, LCD) are driven with data being processed per color, so SLM drivers were made the same way even if it is not optimal and might lead to splitting.
            But what if CREAL could not use the off-the-shelf drivers due to the need to display many different viewpoints and developed its own driver that works in a different cycle: 1-bit R, 1-bit G, go to next viewpoint, 1-bit B, 1-bit R, 1-bit G, 1-bit B, … would you still consider defending your argument?
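
            (A minimal sketch of the idealized binary-weighted bit-plane timing described above, assuming a 120 Hz, 8-bit-per-color sequence and ignoring SLM dead time, which is why the smallest figure comes out larger than the ~2 µs quoted.)

# Idealized binary-weighted bit-plane timing for a 120 Hz, 8-bit-per-color,
# color-sequential SLM frame, ignoring dead time between bit-planes.
frame_hz = 120
bits_per_color = 8
frame_ms = 1000 / frame_hz      # ~8.33 ms per full RGB frame
color_slot_ms = frame_ms / 3    # ~2.78 ms each for red, green, blue

total_weight = 2 ** bits_per_color - 1  # 255
for bit in reversed(range(bits_per_color)):
    duration_us = 1000 * color_slot_ms * (2 ** bit) / total_weight
    print(f"bit {bit}: {duration_us:.1f} us")
# MSB ~1394 us (~1.39 ms), LSB ~10.9 us in this idealized model; real drivers
# add dead time and may shorten the LSB further, as noted in the comment above.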

            The 2nd thing that is not clear from your comment is whether, in your detailed images, you are showing an “in focus region” or an “out of focus region”. If you go to some other CREAL video, you can see that the “out of focus regions” can show what could be called “non-overlapping viewpoints”: https://www.youtube.com/watch?v=LnwzTyWINjU But is that a problem? That is the region where your eye is not focusing, ergo it cannot see the detail.

          • okExpression

            I’m sorry but your understanding of how DLPs work is a bit lacking.

            DLP pico chipsets for several years now actually do 10 bits per color channel, and the color channels are not displayed in sequence; rather, the different colored subframes are mixed, as Creal would do it. This is because the speed is not limited by a color filter wheel anymore and the LED and laser switching speeds allow this. What you described would be more similar to non-ferroelectric LCoS, but even then there are more than 3 R, G, B “fields” (usually 8), still nowhere near as fast as pico DLP.

            The DLP4710 has had a monochrome ASIC controller for “light control” rather than “imaging” for quite some time as well. You don’t need a custom FPGA-based one unless you want to overdrive it or use the DLP470TP instead. Other than speed, the only difference is that since you want to drive 3 LEDs rather than 1, you would skip the LED controller TI provides and use a smaller and cheaper FPGA for that function instead.

            With this monochrome “imaging” ASIC there are no long or short subframes; all are equal as well. Quite a lot of research in volumetric and light-field imaging has been done with monochrome DLP, including my own. If the monochrome refresh rate were enough for RGB volumetrics or light fields we would be doing it rather than using triple DLP or some complicated tricks just to get two colors. How long the monochrome LEDs are on in each so-called subframe is not really relevant, because the main question isn’t how intense they are but rather how far apart the different color pulses are. Besides, you’d want to compensate longer pulses with higher-current/luminance short pulses.

            So obviously I still defend my argument, since I’ve not only dealt with kHz DLP but also test-driven such display technologies from other groups. It’s a different question what I and someone else like Creal consider acceptable or a non-issue. As an analogy, the TI DMD team claims that with pico chipsets illuminated by LEDs the rainbow artifact “has been eliminated”, but research shows that’s marketing rather than some objective truth. For sure it’s way better than color wheels, but still not nearly eliminated.

            Of course in my image I’m showing the in-focus (the actual DLP) image; you can tell by the sharpness.
            The issue is the subframe breakup/trailing artifact plus the rainbow artifact, which can’t be captured at 30fps. Obviously the camera does not have a much wider depth of field than the human eye; in fact, one would choose a camera with a narrow depth of field, as here, to clearly demonstrate the accommodation advantage of this tech. So it’s not about the camera also capturing out-of-focus frames, which it does not.

            This is before we discuss other issues like 20-21-bit color depth and the resulting banding, the peripheral vergence issue I mentioned, or the likely long-persistence (4ms) frames.
            Again, if HoloLens 2, with its limited resolution, color, and color uniformity, or the color-sequential LCoS HoloLens 1 can work for some passthrough AR uses, then this definitely can, but VR has its own different requirements.

          • Tomas S

            Thank you for more details about your assumptions; it makes your skeptical position more understandable. I think this could be the last and key part of the misunderstanding: “Besides, you’d want to compensate longer pulses with higher current/luminance short pulses.” Such classical brightness modulation and buildup of the image makes every (typically) 8th bitplane much brighter than the other bitplanes, which, no matter how you mix them temporally, will effectively reduce the frequency of perceived cycles 8 times compared to CREAL. In reality almost certainly more, because off-the-shelf drivers and modulators, especially those you name, have additional bottlenecks reducing the speed by an additional factor of 2.5 in the best cases. I presume you saw or worked on such an optimized, but still conventional, projection system. I believe the problems you describe are there, although I wonder how serious they are. However, in this sense, CREAL’s system is at least 10x and probably >20x faster (you name concrete devices) than what you could have seen. But more importantly, CREAL’s system is very much proprietary at every level and also works quite differently with human vision than any other system. There are two more misleading assumptions in your description, but they are not so important. I hope this makes things slightly more clear.

          • okExpression

            Sorry Tomas but again from all you wrote there’s pretty much nothing to take. This sounds like a comment by a marketing guy rather than any kind of real info. If you can’t disclose the info maybe we should keep it where we left it: that you claim what I say is a non-issue.

            What we’ve worked on with non-pico chipsets has been at least 3 times faster than 6kHz. So while the DLPCs of the 4710 or 470TP are slower than what you can achieve with an FPGA, that’s not quite the case with standard (non-pico) chipsets.
            Besides, volumetric devices like the Perspecta use dithering to achieve shades; they don’t modulate subframes for that at all.

            At a certain point you can’t defy the laws of physics when it comes to how fast these micromechanical components can rotate and tilt or how fast your specific ferroelectric mixture is. I’ve seen too many such claims about proprietary tech. As long as you do DLP or FLCoS there’s nothing proprietary that can change my points, as they are about the physical limits of these technologies. You can disagree all day, but if you’re going to be cryptic then this discussion is more of a marketing message than a discussion, so I don’t see a point in participating.

            Thanks for your time.

        • Tomas Kubes

          Hi, I thought about what you wrote and would like to give one more shot at discussing the effect you highlight in your images.
          Let’s step away from light-field for now and assume a normal flat-screen headset with, let’s be ambitious, even an OLED display running at 240 Hz, and a camera running at 30 fps (so 30 Hz). So for each camera frame, you capture 8 frames from the display.
          If the scene is static, the frames align perfectly and you do not notice. But if the scene moves, or objects in the scene move, those 8 frames would not be perfectly overlapped, or parts of them would not be. Indeed, there would be something most people would just call “natural motion blur”: simply the fact that the objects imprinted themselves at different positions on the camera sensor in each frame due to their movement.
          In real life such motion blur is analog, but in the digital world it is discrete. In our case, where there are 8 display frames per 1 camera frame, the “blur” would be composed of 8 frames, so with high enough resolution, you should be able to spot 8 outlines.
          This blur would be a direct result of the disparate frame rates of display and camera, and would ALWAYS be present if display frequency = n * camera freq. for n > 2, and would be completely independent of the display technology.
          Would you be willing to consider that this discrete motion blur is the effect that you captured on your screenshots?
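
          (A small Python sketch of the discrete-blur effect described above; the object speed is an arbitrary illustrative value, not measured from the video.)

# Discrete 'motion blur' from capturing a 240 Hz display with a 30 fps camera:
# each camera exposure integrates 8 display frames, leaving 8 offset outlines.
display_hz = 240
camera_fps = 30
object_speed_deg_per_s = 10.0   # assumed angular speed of a moving object

copies_per_exposure = display_hz // camera_fps    # 8 display frames per camera frame
step_deg = object_speed_deg_per_s / display_hz    # offset between successive outlines

offsets = [round(i * step_deg, 3) for i in range(copies_per_exposure)]
print(f"{copies_per_exposure} outlines, spaced {step_deg:.3f} deg apart: {offsets}")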

          • okExpression

            I think you missed the point in my first post: if it were what you described, which I also described, the difference from the peripheral display wouldn’t be so large that the same effect isn’t captured there as well.
            And that’s not the only issue here either.

  • Krozin

    Very exciting stuff. To me it feels like when I was imagining what VR was like before getting VR. This tech is like, wow, I can imagine, but I can’t wait to get my hands on it at some point. There’s a lot of innovation going on in VR, but I feel display is the most important part and I simply cannot complain about such pioneers.

  • okExpression

    On the site I noticed the colors are listed as >1M (1 million?). Typical 24-bit displays provide 16M. Is it 21 bits? Still usable, but worth mentioning.
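
    (For reference, a quick calculation, assuming the “>1M” figure refers to total displayable colors rather than something else: 24-bit color gives ~16.8M, while ~21 bits lands just above 2M.)

# Total colors at a few per-channel bit depths; a '>1M' spec would line up with
# roughly 7 bits per channel (21 bits total) if it means total displayable colors.
for bits_per_channel in (8, 7, 6):
    total_bits = 3 * bits_per_channel
    print(f"{bits_per_channel} bits/channel ({total_bits}-bit): {2 ** total_bits:,} colors")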