In your peripheral vision, less is more.
NVIDIA researchers are using SMI’s latest eye-tracking technology to develop a new technique that matches the physiology of the human eye to heighten visual fidelity in VR.
The demo — which we’re bringing to the annual SIGGRAPH computer graphics conference in Anaheim, Calif., July 24-28 — is simple. Strap on a head-mounted display with integrated eye tracking. Look around the virtual scene of a school classroom with blackboard and chairs. Looks good, right?
Now gaze at the teacher’s chair, turn off the eye tracking and look around again. Only the area around the chair is rendered in detail. In your periphery the demo was rendering a less detailed version of the image — and you couldn’t tell.
This new perceptually based foveated rendering technique is more than just a neat trick. It promises to put computing resources to work where they matter most, letting developers create more immersive virtual environments.
What Makes NVIDIA’s Foveated Rendering Technique Special
Human vision can be thought of as having two components: foveal and peripheral vision. The small region of your retina called the fovea is densely packed with cones — a type of photoreceptor cell — providing sharp and detailed vision. Peripheral vision covers a much wider field of view but lacks acuity.
This acuity difference has inspired foveated rendering systems, which track the user’s gaze and seek to increase graphics performance by rendering with lower image quality in the periphery. However, foveated rendering taken too far will lead to visible artifacts, such as flicker, blur or a sense of “tunnel vision.”
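To illustrate the basic idea behind foveated rendering (this is a toy sketch, not NVIDIA's algorithm), a renderer can track the gaze point and choose a coarser shading rate as a pixel's angular distance from that point grows. The pixels-per-degree value and eccentricity thresholds below are hypothetical:

```python
import numpy as np

def shading_rate(px, py, gaze, ppd=20.0):
    """Pick a coarser shading rate as eccentricity from the gaze point grows.

    px, py : pixel coordinates; gaze : (x, y) gaze point in pixels;
    ppd    : assumed pixels per degree of visual angle (hypothetical value).
    Returns the sampling stride: 1 = full resolution, 4 = one sample per 4x4 block.
    """
    ecc_deg = np.hypot(px - gaze[0], py - gaze[1]) / ppd  # eccentricity in degrees
    if ecc_deg < 5.0:     # foveal region: render at full resolution
        return 1
    elif ecc_deg < 15.0:  # near periphery: half resolution
        return 2
    else:                 # far periphery: quarter resolution
        return 4

# Example: gaze at the center of a 1000x1000 frame
print(shading_rate(500, 500, (500, 500)))  # at the gaze point -> 1
print(shading_rate(900, 500, (500, 500)))  # 20 degrees out   -> 4
```

Pushing the far-periphery stride too high is exactly where the artifacts described above (flicker, blur, tunnel vision) start to appear.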
Our researchers used SMI’s prototype eye-tracking HMD to perform a careful perceptual study of what people actually see in their peripheral vision in VR. They then used those insights to design a new rendering algorithm that enables much greater foveation, or reduction in rendering effort, without any discernible drop in visual quality.
The NVIDIA Research team, led by Anjul Patney, Joohwan Kim and Marco Salvi, discovered that existing foveated rendering techniques tend to generate either blur or flicker in the peripheral vision. So the team worked to understand the details that humans pick up — like color, contrast, edges and motion — in the periphery.
For example, they found that the traditional approach of rendering lower-resolution images for our peripheral vision results in distracting flickering if the foveation is too aggressive. They also found that simply blurring images picked up by the eye’s peripheral region reduces contrast, inducing a sense of tunnel vision.
However, by combining blur with contrast preservation, they found that users could tolerate up to twice as much blur before seeing differences between the foveated and non-foveated images.
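The combination described above can be sketched as a two-step post-process on a peripheral image: blur it, then amplify the surviving detail to push local contrast back toward the original. This is only an illustrative toy, not the paper's method; the window size and gain are made-up values:

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur with a (2k+1)-wide window, edge-padded."""
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    blur_rows = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, k, mode="edge"), kernel, mode="valid"),
        1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, k, mode="edge"), kernel, mode="valid"),
        0, blur_rows)

def contrast_preserving_blur(img, k=4, gain=2.0):
    """Blur, then boost the blurred image's local contrast.

    Subtracting an even lower-frequency baseline isolates the detail that
    survived the blur; re-adding it with gain > 1 restores a sense of
    contrast in the periphery without the flicker of raw low-resolution
    sampling. Both k and gain here are illustrative, not tuned values.
    """
    blurred = box_blur(img, k)
    base = box_blur(blurred, k)            # even lower-frequency baseline
    return base + gain * (blurred - base)  # contrast-boosted blur
```

The key property is that the output keeps roughly the original's local contrast while still being cheap to produce from a low-detail rendering.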
“That’s a big deal,” explains Patney, “because, with VR, the required framerates and resolutions are increasing faster than ever.”
The demonstration wouldn’t be possible without our partners at SMI, the Germany-based maker of computer vision applications. Its technology works by surrounding the edge of each of a head-mounted display’s lenses with infrared lights. Combined with SMI’s software, this lets a computer detect, quickly and precisely, where your eyes are looking.
Foveated rendering is just one application for SMI’s eye-tracking VR technology. It also promises to enable personal display calibration, barrier-free natural interaction, social presence and new analytical insights.
For all the details, visit our project page. To see our demonstration for yourself, visit our booth in the Emerging Tech area at SIGGRAPH. And follow the latest at the show at #SIGGRAPH2016.