NVIDIA’s GTC 2017, held May 8th–11th at the San Jose Convention Center in Silicon Valley, has a session schedule chock full of deep tech talks that we’re looking forward to. Among them, Senior NVIDIA Research Scientist Anjul Patney will present the company’s latest findings from its research into a ‘perceptually-based’ approach to foveated rendering.

Simply put, foveated rendering in VR aims to render the highest quality imagery only at the center of your vision, where your eye can detect sharp detail, while rendering lower quality imagery in the periphery of your vision, where your eye is not tuned to pick up high resolution details. Combined with eye-tracking, foveated rendering is widely believed to be an important pathway to retinal-resolution VR rendering in the near future (imagery so sharp that any additional detail would be indiscernible).
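To make the idea concrete, here’s a minimal sketch of the core decision a foveated renderer makes for each part of the frame. The falloff radii and shading rates below are made-up illustrative values, not figures from NVIDIA’s research, and real engines make this choice per-region on the GPU:

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, fovea_radius=0.15, mid_radius=0.35):
    """Pick a shading rate for a pixel based on its distance (in
    normalized screen coordinates) from the tracked gaze point.
    Radii and rates are illustrative placeholders."""
    r = math.hypot(px - gaze_x, py - gaze_y)
    if r < fovea_radius:
        return 1.0   # full resolution where the eye resolves fine detail
    if r < mid_radius:
        return 0.5   # half resolution in the mid-periphery
    return 0.25      # quarter resolution in the far periphery
```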

SEE ALSO
NVIDIA Demonstrates Experimental "Zero Latency" Display Running at 1,700Hz

But foveated rendering is still in its infancy, and early attempts using simple blur masks over the peripheral view have proven too visible and distracting; a poor approximation of the limits of our peripheral vision.

Last year, NVIDIA researchers demonstrated a compelling new approach to foveated rendering (they call it ‘perceptually based’) which lets the measured properties of human perception drive the design of the foveation technique, rather than the other way around. The new work, built around a ‘contrast-preserving’ rendering approach, made foveated rendering much harder to notice, and was faster than other common techniques to boot.
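The published method is more involved, but the gist can be sketched in a rough single-channel approximation: blur stands in for lower-resolution peripheral rendering, and the contrast that survives the blur is then re-amplified so the periphery doesn’t read as ‘flattened’. The alpha gain here is a hypothetical parameter of ours, not a value from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate_with_contrast(img, sigma=4.0, alpha=1.8):
    """Illustrative sketch: img is a single-channel image in [0, 1].
    Blur as a stand-in for low-res peripheral rendering, then boost
    the remaining local contrast around the local mean."""
    base = gaussian_filter(img, sigma=sigma)       # foveation discards fine detail
    low = gaussian_filter(base, sigma=2 * sigma)   # estimate of the local mean
    return np.clip(low + alpha * (base - low), 0.0, 1.0)  # re-amplify contrast
```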

At GTC 2017, one of the researchers leading NVIDIA’s investigations into perceptually based foveated rendering, Anjul Patney, will take to the stage to outline the latest developments. The session description reads:

Foveated rendering is a class of algorithms which increase the performance of virtual reality applications by reducing image quality in the periphery of a user’s vision. In my talk, I will present results from our recent and ongoing work in understanding the perceptual nature of human peripheral vision, and its uses in improving the quality and performance of foveated rendering for virtual reality applications. I will also talk about open challenges in this area.

Patney’s talk is just one of several deep technical talks that we’re looking forward to at GTC 2017.

Register for GTC 2017

Here are a number of other sessions that have caught our eye so far, from NASA, Oculus, Pixvana, OTOY, NVIDIA, and Stanford’s Computational Imaging Lab.

NASA’s Hybrid Reality Lab: One Giant Leap for Full Dive – Matthew Noyes, NASA

This session demonstrates how NASA is using consumer VR headsets, game engine technology and NVIDIA’s GPUs to create highly immersive future astronaut training systems augmented with extremely realistic haptic feedback, sound, and additional sensory information, and how these can be used to improve the engineering workflow. Examples explored include a simulation of the ISS, where users can interact with virtual objects, handrails, and tracked physical objects while inside VR, integration of consumer VR headsets with the Active Response Gravity Offload System, and a space habitat architectural evaluation tool. Attendees will learn about how the best elements of real and virtual worlds can be combined into a hybrid reality environment with tangible engineering and scientific applications.

Light Field Rendering and Streaming for VR and AR – Jules Urbach, OTOY

Jules Urbach, Founder & CEO of OTOY, will discuss OTOY’s cutting-edge light field rendering toolset and platform. OTOY’s light field rendering technology allows for immersive experiences on mobile HMDs and next-gen displays, ideal for VR and AR. OTOY is actively developing a groundbreaking light field rendering pipeline, including the world’s first portable 360 LightStage capture system and a cloud-based graphics platform for creating and streaming light field media for virtual reality and emerging holographic displays.

The Virtual Frontier: Computer Graphics Challenges in Virtual Reality – Morgan McGuire, NVIDIA

Video game 3D graphics are approaching cinema quality thanks to the mature platforms of massively parallel GPUs and the APIs that drive them. Consumer head-mounted virtual reality is a new domain that poses exciting new opportunities and challenges in a wide-open research area. We’ll present the leading edge of computer graphics research for VR, highlighting emerging methods for reducing latency, increasing frame rate and field of view, and matching rendering to both display optics and the human visual system while maximizing image quality.

Insights From the First Year of VR – Jason Holtman, Oculus

There are myriad choices to make when jumping into VR development. We’ll explore how to navigate those decisions, and what the lessons from this first generation of VR content mean for future titles.

Streaming 10K Video Using GPUs and the Open Projection Format – Sean Safreed, Pixvana

Pixvana has developed a cloud-based system for processing VR video that can stream up to 12K video at HD bit rates. The process is called field-of-view adaptive streaming (FOVAS). FOVAS converts equirectangular spherical-format VR video into tiles on AWS in a scalable GPU cluster. Pixvana’s scalable cluster in the cloud delivers over an 80x improvement in tiling and encoding times. The output is compatible with standard streaming architectures, and the projection is documented in the Open Projection Format. We’ll cover the cloud architecture, GPU processing, the Open Projection Format, and current customers using the system at scale.
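The session description doesn’t spell out the tiling scheme, but the general shape of viewport-adaptive tile streaming can be sketched as follows. The grid dimensions and the simple angular visibility test are our assumptions for illustration, not Pixvana’s actual spec:

```python
def visible_tiles(yaw_deg, pitch_deg, fov_deg=100.0, cols=8, rows=4):
    """Hypothetical viewport-adaptive tile picker: return the (col, row)
    tiles of a cols x rows equirectangular grid that fall inside the
    viewer's field of view. These would be streamed at full quality,
    the rest at reduced quality."""
    half = fov_deg / 2.0
    tiles = []
    for c in range(cols):
        for r in range(rows):
            tile_yaw = (c + 0.5) / cols * 360.0 - 180.0    # tile center longitude
            tile_pitch = 90.0 - (r + 0.5) / rows * 180.0   # tile center latitude
            dyaw = (tile_yaw - yaw_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
            if abs(dyaw) <= half and abs(tile_pitch - pitch_deg) <= half:
                tiles.append((c, r))
    return tiles

# e.g. a viewer looking straight ahead: visible_tiles(0.0, 0.0)
```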

Computational Focus-tunable Near-eye Displays – Nitish Padmanaban, Stanford Computational Imaging Lab

We’ll explore unprecedented display modes afforded by computational focus-tunable near-eye displays with the goal of increasing visual comfort and providing more realistic and effective visual experiences in virtual and augmented reality. Applications of VR/AR systems range from communication, entertainment, education, collaborative work, simulation, and training to telesurgery, phobia treatment, and basic vision research. In every immersive experience, the primary interface between the user and the digital world is the near-eye display. Many characteristics of near-eye displays that define the quality of an experience, such as resolution, refresh rate, contrast, and field of view, have improved significantly in recent years. However, a pervasive source of visual discomfort remains: the vergence-accommodation conflict (VAC). Further, natural focus cues are not supported by any existing near-eye display.
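For readers unfamiliar with the VAC: the eyes converge on a virtual object’s apparent depth, but must focus (accommodate) at the headset’s fixed focal plane, and the mismatch is typically measured in diopters. A worked example with illustrative numbers (the 2 m focal plane is an assumption, not any particular headset’s spec):

```latex
% Vergence-accommodation conflict in diopters (1/m), for a display with a
% fixed focal plane at d_a = 2 m and a virtual object whose stereo
% disparity places it at d_v = 0.5 m:
\Delta = \left| \frac{1}{d_v} - \frac{1}{d_a} \right|
       = \left| \frac{1}{0.5\,\mathrm{m}} - \frac{1}{2\,\mathrm{m}} \right|
       = 1.5\,\mathrm{D}
```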


Road to VR is a proud media sponsor of GTC 2017



