While most of us are used to dealing with resolution figures that describe pixel count (i.e. a 1920×1080 monitor), pixel density, stated as pixels per degree, is a much more useful figure, especially when dealing with AR and VR headsets. Achieving ‘retinal resolution’ is the ultimate goal for headsets: at a certain pixel density, even people with perfect vision can’t discern any additional detail. This article explores those concepts and takes a look at how far today’s headsets are from retinal resolution.


Guest Article by Yuval Boger

Yuval is CEO of Sensics and co-founder of OSVR. Yuval and his team designed the OSVR software platform and built key parts of the OSVR offering. He frequently shares his views and knowledge on his blog.

If the human eye was a digital camera, its ‘data sheet’ would say that it has a sensor capable of detecting 60 pixels/degree at the fovea (the part of the retina where the visual acuity is highest). For visual quality, any display above 60 pixels/degree is essentially wasting resolution because the eye can’t pick up any more detail. This is called retinal resolution, or eye-limiting resolution.

This means that if there were an image with 3,600 pixels (60 × 60) and that image fell on a 1° × 1° area of the fovea, a person would not be able to tell it apart from an image with 8,100 pixels (90 × 90) falling on the same 1° × 1° area of the fovea.

Note: the 60 pixels per degree figure is sometimes expressed as “1 arc-minute per pixel”. Not surprisingly, an arc-minute is an angular measurement defined as 1/60th of a degree. This kind of calculation is the basis for what Apple refers to as a “retina display”: a screen that, when held at the right distance, generates this kind of pixel density on the retina.
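That distance calculation is easy to sketch in a few lines of Python (a minimal sketch; the 326 ppi and 11-inch figures below are illustrative assumptions, not Apple’s published numbers):

```python
import math

def angular_pixel_density(ppi: float, distance_in: float) -> float:
    """Pixels per degree for a flat screen viewed head-on.

    One pixel of size 1/ppi inches subtends 2*atan(size / (2*distance)) degrees.
    """
    deg_per_pixel = math.degrees(2 * math.atan(1.0 / (ppi * 2 * distance_in)))
    return 1.0 / deg_per_pixel

# e.g. a 326 ppi phone held at 11 inches:
print(angular_pixel_density(326, 11))   # ~62.6 pixels/degree, just past 60
```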

If you have a VR headset, you can calculate the pixel density—how many pixels per degree it presents to the eye—by dividing the number of pixels in a horizontal display line by the horizontal field of view provided by the lens. For instance, the Oculus Rift DK1 dev kit (yes, I know that was quite a while ago) used a single 1280 x 800 display (so 640 x 800 pixels per eye) and with a monocular horizontal field of view of about 90 degrees, it had a pixel density of just over 7 pixels/degree (640 ÷ 90). You’ll note that this is well below the retinal resolution of 60 pixels per degree.
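In code, that calculation is a simple division (headset figures below are the ones quoted in this article):

```python
def headset_ppd(pixels_per_eye: int, horizontal_fov_deg: float) -> float:
    """Approximate linear pixel density presented to each eye."""
    return pixels_per_eye / horizontal_fov_deg

print(headset_ppd(640, 90))    # Oculus Rift DK1: ~7.1 pixels/degree
print(headset_ppd(1080, 100))  # HTC Vive: ~10.8 pixels/degree
```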


Not to pile on the DK1 (it had many good things, though resolution was not one of them), but 7 pixels/degree is only the linear pixel density. When you think about it in terms of pixel density per surface area, the DK1 is not just 8.5 times worse than the human eye (60 ÷ 7 ≈ 8.5) but actually a lot worse: 8.5 × 8.5, which is over 70 times fewer pixels per area.
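The linear-to-area step is just squaring the ratio, mirroring the numbers above:

```python
linear_ratio = 60 / 7        # fovea ppd / DK1 ppd, ~8.6x worse linearly
print(linear_ratio ** 2)     # ~73, i.e. over 70x fewer pixels per unit area
```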

The following table compares pixel densities for some popular consumer and professional HMDs:

| VR Headset | Horizontal Pixels Per Eye | Approx. Horizontal Field of View (degrees per eye) | Approx. Pixel Density (pixels/degree) |
|---|---|---|---|
| Oculus DK1 | 640 | 90 | 7 |
| OSVR HDK1 | 960 | 90 | 11 |
| HTC Vive | 1080 | 100 | 11 |
| Sensics dSight | 1920 | 95 | 20 |
| Sensics zSight | 1280 | 48 | 27 |
| Sensics zSight 1920 | 1920 | 60 | 32 |
| Human fovea | – | – | 60 |

Higher pixel density allows you to see finer details—read text; see the grain of the leather on a car’s dashboard; spot a target at a greater distance—and in general contributes to an increasingly realistic image.

Historically, one of the things that separated professional-grade VR headsets from consumer headsets was a higher pixel density. Let’s simulate this using the following four images. Assume that the first image (taken from Epic’s Showdown demo) is shown at the full 60 pixels/degree density (which it could be, depending upon the resolution of your monitor and the distance you sit from it). We can then re-sample it at half the pixel density (simulating 30 pixels/degree), then half again (15 pixels/degree), and half again (7.5 pixels/degree). Notice the stark differences as we go to lower and lower pixel densities.

Full resolution (simulating 60 pixels/degree) | Photo courtesy Epic Games
Half resolution (simulating 30 pixels/degree) | Photo courtesy Epic Games
Simulating 15 pixels/degree | Photo courtesy Epic Games
Simulating 7.5 pixels/degree | Photo courtesy Epic Games
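This kind of simulation is easy to reproduce. Here is a rough sketch using Pillow (the filename and filter choices are assumptions, not necessarily how the images above were produced):

```python
from PIL import Image  # Pillow, assumed installed

# Downsample then upsample to simulate lower angular pixel densities,
# treating the source image as the 60 pixels/degree baseline.
src = Image.open("showdown.png")  # hypothetical filename for the source frame
for factor, ppd in [(2, "30"), (4, "15"), (8, "7.5")]:
    small = src.resize((src.width // factor, src.height // factor), Image.BILINEAR)
    small.resize(src.size, Image.NEAREST).save(f"showdown_{ppd}ppd.png")
```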

Higher pixel density for the visual system is not necessarily the same as higher pixel density on the screen, because pixels on the screen are magnified through the optics. The same screen can be magnified differently by two different optical systems, resulting in different pixel densities presented to the eye. It is true, though, that given the same optical system, a higher pixel density on the screen does translate to a higher pixel density presented to the eye.
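For example, the same Vive-class 1080-pixel-wide panel behind three different optics (the alternative FOV values here are hypothetical, not real products):

```python
panel_px = 1080                  # horizontal pixels per eye on the same panel
for fov_deg in (90, 100, 110):   # different magnification -> different FOV
    print(fov_deg, round(panel_px / fov_deg, 1))
# 90 -> 12.0, 100 -> 10.8, 110 -> 9.8 pixels/degree
```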

As screens get better and better, we will get increasingly closer to eye-limiting resolution in the headset and thus closer to photo-realistic experiences.




  • The second image could be very close to the first with 2X SSAA.

    A 30 pixel/degree VR headset would be not perfect, but near perfect, for me.

  • Nyco30

    So if I get this right, an 8K display would be sufficient for a 30 pixel density at a 60-degree FoV, which is not exactly ideal but would be enough for a good gaming experience. And optimally, you would need 16K for a 60 pixel density and a 100-degree FoV, which is what we should aim for without needing any graphical trick like SSAA.

    • Bryan Ischo

      But we certainly should not be treating 100 degree FoV as a standard to achieve. I’d rather have 170 degree FoV at 18 pixel density than 100 degree FoV at 30 pixel density.

      • Xron

        Yeah, human FoV is over 200 degrees, though most of the pixels are needed in the center.

      • OhYeah!

        No way would I want that sacrifice. The current Oculus is great, so bump that to even 120 FOV with a minimum of 90 FPS, and then give me the max resolution you can cram in with those specs. Who needs 170 FOV if it’s all jagged and ugly! Now if we get some 16K screens, 120 FPS, OLED, and some killer foveated rendering, we can have the best of all worlds! I want the future now ;)

    • How did you calculate that? Also, sorry for my English.
      Number of pixels in a horizontal display line divided by the horizontal field of view equals pixel density per degree, so
      “y” px / 60 degrees = 30 px density, so
      “y” = 60 × 30 = 1800 px
      and according to wiki https://en.wikipedia.org/wiki/Display_resolution 8K is 7680×4320 px, not something around 1800×something

      Speaking about 8K, if we split the horizontal 7680 px between 2 eyes, we get 3840 px per eye, which gives us 32 px per degree with a 120-degree FoV:
      3840 px / 120 degrees = 32
      Or did I get something wrong?

      • Nyco30

        My bad, it was maybe too early in the morning for my mind to do a simple equation lol. From what is stated above, the global calculation is simply: Pixel Density = (Total Horizontal Resolution / 2) / FoV
        So taking back what I said earlier today:
        – For a 60-degree FoV and 30 pixel density: 3,600 horizontal pixels needed. So a little less than 4K.
        – For an optimal 100-degree FoV with 60 pixel density: Resolution = 2 × (60 (pixel density) × 100 (FoV)) = 12,000. For the sake of it, let’s say 12K.
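Nyco30’s corrected formula is easy to sanity-check in a couple of lines (a sketch assuming one panel split evenly between both eyes, with the targets from this thread):

```python
def total_horizontal_pixels(ppd: float, fov_deg: float) -> int:
    """Total horizontal panel pixels for a target density, one panel
    split evenly between two eyes."""
    return int(2 * ppd * fov_deg)

print(total_horizontal_pixels(30, 60))    # 3600  -> a bit less than "4K" wide
print(total_horizontal_pixels(60, 100))   # 12000 -> roughly "12K" wide
```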

  • atat

    Those pics are wrong. I have a DK2 and it feels less pixelated than the 15 px/deg shown in your pictures.

    • benz145

      You’re right; it’s because we’re limited to using your monitor’s resolution as the baseline, so unless your monitor is retinal resolution it isn’t going to be accurate, which is why we call it “simulated”… it’s just to give a rough idea of how changing pixels/degree changes the visual experience. FOV also factors in here.

  • Interesting article

  • Ryan

    I think people are underestimating how important this is to the VR experience. Once “retinal” VR displays are a reality, the experience (and consumer adoption) will be massively improved.

    • raz-0

      Massively improved. Or possibly a vomit machine or any of a number of other ergonomic failures. There’s a lot going on and an assumption that improving this or that particular item will = better and more widely accepted as good and usable. Simply removing known impasses doesn’t mean you have built a path to your destination.

      Higher pixel density displays are their own reward, so we will head that way regardless, but it is far from the only issue in seeing VR adopted at the rate smartphones were, or PCs were when internet access became ubiquitous.

      • SimonH

        I don’t think they ever will be. Not even at iPad rates. More like projectors: something people who like high-end stuff will have, but maybe eventually spreading to gaming-console levels of use. I’m not convinced by this VR-for-everyone argument. AR… maybe. Sat nav, shopverts with offers, continuous social media updates, a 200″ TV screen with your personal movie choices playing. The future is being isolated but connected.

        • RFC

          AR is actually more useful in everyday life; think of Google Maps overlaid on the street in front of you, or a repair technician shown an exploded view, with parts list, of an unfamiliar machine, or emergency services personnel provided with real-time route mapping to drive to an accident as fast as possible in a congested city without taking their eyes off the road.

          You’ll see widespread adoption on the consumer side once the tech (specifically AI) is brought to market correctly, with minimal cost of entry. The futuristic notion of contact lenses is not that far-fetched given time and tech.

          I liken VR to a personal, more intimate experience, like listening to an album or watching a film. Not something to be done all the time, but with time put aside.

          I certainly looked forward to VR sessions, sometimes an hour every other day, sometimes a whole morning once a week.

          But perhaps, for many consumers with the financial ability (i.e. full time job) to put together a high-end PC and buy a Vive or Rift, actually having spare time for VR can be the problem.

          • SimonH

            This we both agree on. I literally had VR blinkers on till I went to the VR/AR show in London. I left thinking AR is going to be the mass-market tech, while VR will be reserved for experiences. After thinking about it for a while I realised there will be a convergence. You’ll have a high-res AR set that can display signage and play videos and help guides, just like a phone can do nav and movies. In fact it will be a see-through phone screen with the electronics in your pocket. When you get home, you’ll enable VR mode and a black LCD will block the outside world (like those electronic privacy LCD windows now available). Your 1180-based gaming rig will then render up a full 8K foveated virtual world for you to play in, along with a 3D-scanned avatar of yourself. It all sounds far fetched… but isn’t. I’m having mine done soon by a company located near me: Backface.co.uk (check out page 3 of their gallery)

        • usapon

          5 years and we now have a better glimpse of the future metaverse. Reply back in 5-10 years.

  • SimonH

    Should we be expecting uniform resolution on future VR displays? How about 60 pixels per degree in the central 45 degrees, dropping to 30 pixels per degree out to 90 degrees, and then 15 pixels per degree beyond that? Why? Consider how much time you spend looking out of the corners of your eyes vs. looking in the central part of your vision and turning your head. These would be custom VR screens, aka more expensive, but it means less load on the GPU, so cheaper overall. Combined with foveated rendering this could provide cost-effective high detail where it matters, and less detail where it doesn’t (see the rough estimate sketched below).
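A rough, horizontal-only estimate of what such a banding scheme would save (the 170-degree total FOV and the reading of “central 45 degrees” as a 0–45° band are assumptions):

```python
# Bands as (width in degrees, pixels/degree), read as 0-45, 45-90,
# and 90-170 degrees of total horizontal FOV.
bands = [(45, 60), (45, 30), (80, 15)]
banded_px = sum(width * ppd for width, ppd in bands)   # 2700 + 1350 + 1200 = 5250
uniform_px = 170 * 60                                  # 10200 at uniform 60 ppd
print(banded_px / uniform_px)                          # ~0.51: about half the pixels per line
```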

    • Caven

      One major reason for foveated rendering is to reduce GPU load by rendering at full resolution only where the eye is looking. Making a custom display with different resolution bands would add complexity to the display while making foveated rendering much less useful. Simply filling a high-res screen with pixels isn’t a problem, as games have been doing this for a long time. The 3D view in Doom, for instance, could be resized, with the UI and sometimes a tiled background filling the non-3D areas. The entire screen was still being updated every frame, but the expensive part to render took up a smaller proportion of the screen as the size was reduced. Some newer games utilize a similar trick at lower settings by rendering UI at full screen resolution, but rendering the 3D view at a lower resolution which is scaled to fit the display.

      • SimonH

        I don’t think it adds that much complexity. You are basically building a screen with a machine capable of building a 60 ppd screen, but letting it do only 30 ppd at the edges. The rendering gets a bit harder, but that’s what GPU drivers are for; let the hardware deal with the maths. Devs just build a 3D scene like they always do and the GPU works out what pixels go where. You’ll basically end up with a larger number of cores working on the central high-density area (where foveated rendering is most needed), and fewer cores doing the work out at the edges. Do a quick experiment: make a 90-degree angle from your nose with your hands on your chin, then halve it to 45 degrees. Now look around. You’ll find that looking beyond 22.5 degrees either side isn’t something you want to do for long; you’ll want to turn your head and let the muscles relax again. Someone working in psychovisual research can probably work out the best numbers, but it seems a decent approach to getting super-high res where it’s needed most without needing Quad SLI Titan X and a £4,000 7″ screen. For each eye.

        • Caven

          But in order to get any value out of that sort of display, the renderer still needs to be designed to render fewer pixels in the low-resolution areas. If the renderer doesn’t do that, you end up with the same sort of scenario as supersampling, where a scene is rendered at a higher resolution than the hardware can actually display. The end result is a custom display that offers no performance benefit.

          Of course the renderer needs to be updated to account for the reduced resolution of the edges. But once you’ve done that, you get the same benefit even if you don’t make the custom display. There’s no need to build the custom display because a software solution alone is already enough, and by doing it in software only you don’t force any display limitations. Even if the display isn’t “that much more” complicated, it’s still a custom display that needs more effort to design and will cost more due to its inherently limited uses outside of VR.

          As for your 45 degree example, while it is indeed uncomfortable to stare to one side for extended periods of time, I find that if I turn my head parallel to my monitor, then turn my eyes all the way to one extreme, I can still read this text on my monitor, even though I’m looking at such an extreme angle that the bridge of my nose obscures the far eye. It’s not comfortable, but I can do it, and I don’t want to find myself staring at blurriness each and every time I glance to the side in VR. Maybe glancing to the side in a 100 degree headset isn’t particularly useful, but once headsets widen the FOV, this will become a bigger concern. With foveated rendering based on eye tracking, there is no benefit to forcing a hardware limitation to solve what’s actually a software issue. It’s easier to solve it in software alone than to alter hardware and still have to implement the same software solution anyway. Perhaps if such a custom display managed to be cheaper somehow, that might make some sense. But a custom, limited-use display is not going to come with cost savings.

          • SimonH

            I think what’s being missed is that the reason I’m suggesting a variable pixel density display is not just to cut down on the rendering pipeline cost; I’m pretty sure it will also drop the cost of the display. I.e. it’s cheaper to build a display with 25 megapixels instead of 50. For the same reason, when building a VR display, why print pixels outside the visible circle? Other than letting you use cheaper phone screens, the longer-term solution would surely be cheaper: just fab the area you will see.

    • DaKangaroo

      In theory yes, that would be ideal, but in practice, it’d be a nightmare. For a start, unless you smoothly transitioned between the varying degrees of resolution on the display, you’d have visible ‘bands’ where the resolution changes. If you did smoothly transition between varying degrees of resolution, even just finding a way to describe a pixel image for the display would be hard because it wouldn’t be a regular grid shape anymore.

      Edit: Hm.. actually maybe not.. Just thinking about how it could be done now..

      Actually, I can imagine ways of programmatically representing an image for a display with no regular grid structure. It’s different but it’s very possible. A bit more processing, but not that much. It could be performed as a parallel operation on a GPU.

      But I wonder how much more complicated that would make manufacturing such displays. I just did a search and I can’t find anything about a display that’s ever been made like that.

      • awilko

        I’d say that a more practical way is to use lenses with a more extreme barrel distortion effect; that is, they don’t magnify as much in the center, so you get higher pixel density there, with more distortion around the edges which is compensated for in software. Basically like what they’re already doing, only more so. Probably needs more than one lens stacked, or a very special lens that doesn’t exist yet.

        • raz-0

          I would think what would make more sense is inducing a pincushion effect and rendering on the linear grid of the display to compensate.

          First up, making the physical display anything other than a more or less rectilinear grid is all sorts of complicated. So don’t. Make it have a pixel size that gets you to 30 pixels per degree. Use lenses to add pincushion distortion, thus squishing the pixels down and exposing the fovea to 60 pixels per degree; you just use more than one degree of screen space to render them. And you keep the load on the GPU down by keeping the overall pixel count down.

          • awilko

            You’re right, of course, I meant pincushion in the lenses, the barrel distortion is in the image. That said, it depends how far into the future you’re thinking, I imagine that many years from now, screens built specifically for VR will be closer to hemispherical than flat, perhaps with a single tapered coil of pixels, and lenses integrated into the displays to reduce bulk. Or something equally as unheard of today, who knows?

        • sl044

          What if only the rendering on the GPU side was at a lower res around the perimeter, and then resampled to the native res of the headset?
          It would not need any special modification to the headset and it would take some load off the GPU.

  • Shame, this is more like an advert for the Sensics zSight. What about the 4K and even 8K Pimax and other higher-end VR headsets in the table too? Then we could see what difference they make across the board, rather than sticking to just a few manufacturers for comparison.

    • VRguy

      The zSight is an old product (6-7 years old) and has a much narrower field of view than current consumer headsets. It is also much more expensive. Thus, it is not comparable to consumer headsets. The point the article was making is that it’s not just field of view that is important, but also pixel density. Also take a look here: http://sensics.com/is-wider-field-of-view-always-better/ for a discussion of when wide field of view is important and when higher pixel density is preferable.

  • David Ivey

    And don’t forget that GPUs will have to be built that can drive all these pixels.

    • yexi

      Not necessarily. Of course it’s better if the game runs at 8K on an 8K screen, but components can upscale a Full HD or 2K signal to 8K without artifacts. You lose some detail, but you also lose the screen-door effect and it seems a lot more natural.

  • psuedonymous

    “If the human eye was a digital camera, its ‘data sheet’ would say that it has a sensor capable of detecting 60 pixels/degree at the fovea (the part of the retina where the visual acuity is highest). For visual quality, any display above 60 pixels/degree is essentially wasting resolution because the eye can’t pick up any more detail. This is called retinal resolution, or eye-limiting resolution.”

    This is incorrect, though Apple’s marketing department has been effective at embedding the idea. The human eye can resolve far finer details than the ‘line-acuity’ threshold of 60 pixels/° (one arcminute). Vernier acuity alone can resolve down to the arcsecond level (1/60 of an arcminute). And Vernier acuity IS relevant for an HMD: while the display is stationary relative to your face, your eyes are physically translating as the pupil and retina move about a common centre. The AFRL has a good paper on the resolution you’d need to really match reality (for a flat plane), and it’s much, much higher than what can be achieved today: http://www.itcexperts.com/wp-content/uploads/Capability-of-the-human-visual-system.pdf

    • benz145

      @disqus_3XIPBVqES2:disqus any thoughts on this?

      • VRguy

        As with many physiological questions, the answer is: it depends.

        60 pixels/degree (1 arcmin/pixel) is often considered eye limiting resolution (for instance, see here: https://en.wikipedia.org/wiki/Naked_eye)

        Some of the factors it depends on:
        1. What portion of the retina (central vision vs. peripheral vision). Resolution differs across it, paving the way to foveated rendering.
        2. Which person. Some have “super vision” and others have vision disabilities.
        3. What is the goal: detection? recognition? identification? See this blog post regarding wide field of view and the Johnson criteria: http://vrguy.blogspot.com/2013/10/is-wider-field-of-view-always-better.html

        • Stijn de Witt

          I found contrast to be very, very important. Put a single white pixel on a black background. You can see it at any resolution I’ve come across so far.

  • Albert Hartman

    There’s an interesting relationship between resolution and framerate. For VR immersion you need both – and I’m wondering what the “realism” formula would look like. You can trade off framerate for pixels/deg.

  • REP

    So the Pimax 8k is 3840 x 2160 per eye resolution @200 FOV. So, it’s 3840/200=19.2 pixels/degree horizontally and 2160/100=21.6 pixels/degree vertically.

    It doesn’t appear to be that impressive. It’s like twice the pixels/degree of Vive or Oculus. They need to reduce the FOV to maybe 130. That way, it’s 3840/130=29.5 pixels/degree horizontally and 2160/70=30.9 pixels/degree. Now, that’s much better!

    • VRguy

      The FOV of the Pimax is not 200 deg/eye; per-eye is probably smaller – perhaps 100+. 200 degrees might be their binocular field of view, but pixel density calculations should use monocular resolution and monocular horizontal FOV.
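Redoing REP’s arithmetic with a monocular figure (the 110-degree FOV below is an assumption in the spirit of VRguy’s “perhaps 100+”, not a published spec):

```python
pixels_per_eye = 3840     # Pimax "8K" horizontal pixels per eye (from the thread)
monocular_fov = 110       # assumed monocular horizontal FOV, degrees
print(pixels_per_eye / monocular_fov)   # ~34.9 pixels/degree
```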

  • Atanas Ctonlob

    Currently, the average VR headset looks to be about similar to the third image, aka 15 pixels/degree.

  • Could you possibly update your chart to include PSVR? Fascinating stuff!

  • Andrew Jakobs

    I know the images are simulated, but even with the DK2 I certainly don’t perceive the image like the 15 pixels/degree one; more between 15 and 30, and closer to 30 than 15. There is a big difference between the screen-door effect and real lower resolution.
    And seeing how they did it with projectors, screen-door reduction without upping the resolution is perfectly possible. The Avegant Glyph, for instance, uses a micro DLP of 1280×720 and it seems to have much, MUCH less screen door than the DK2, so upping the micro DLP to full HD (per eye) would probably even eliminate the screen door. Not saying DLP is perfect, but neither is the current crop of LCD/OLED screens.
    And at least IMHO, I don’t really care if it’s not photo-realistic; they can’t even get games to look photorealistic at full HD (even though a normal Blu-ray movie does). To be honest, I don’t think I even WANT photorealism for most games.

    • GunnyNinja

      It is a comparison of the difference between where we want to be and where we are. It does not represent what you see.

  • yexi

    Good article, but it’s not only the number of pixels that has a big impact; the technology of the screen and the gap between the pixels matter too.

    The screen-door effect is caused only by the gap between each pixel (and sub-pixel), not by the resolution… a higher resolution is of course better, but if they manage to make a 2K OLED-like screen without a significant gap between pixels, it will be very good.

    Look at the OSVR: it’s not a ‘good’ screen on paper, but they managed to make it work very well (on some points better than Oculus and Vive) because they practically removed the screen-door effect.

    Also, people need to understand that having an 8K screen doesn’t mean that your game NEEDS to run at 8K; in headsets they can (and do) place components that upscale a Full HD signal to match the resolution without artifacts. It’s not as good, but it’s very acceptable for the consumer (actually, it’s not for a lot of people).

  • Albert Hartman

    Micro head motions with constant view redraws make up somewhat for low resolution. Kinda like being able to read a magazine through a screen door if you constantly move your head back and forth. An explanation for why images look better in HMDs than their pixel density would lead you to expect.

    • Stijn de Witt

      At the same time, the micro movements are jumpy as they move from pixel to pixel, so to say. Like a moiré effect or something. Higher res helps with more fluid movement.

  • Anyone know what the resolution is on the current Rift headsets?

  • Rose Ann Haft

    This isn’t 100% true. Our headset has better image quality and fewer pixels per degree because we use a different method.

  • yoann

    Maybe an update for all the new VR headsets?