NVIDIA’s Lens Matched Shading Technology Takes Everest VR to New Heights

by Brian Burke

When we launched our Pascal architecture, we introduced a bevy of technological advancements created to deliver a new level of presence and immersion for virtual reality. We packaged them up for the creators of VR games and experiences as our VRWorks software development kit. Today, one of those technologies, known as Lens Matched Shading, appears for the first time in Solfar’s stunning VR experience Everest VR.

As a VR experience, Everest VR is compelling. The sense of presence, that fundamental element of VR that pulls you into the virtual world and convinces your brain the experience is real, is so strong that I have seen people rip off their headset because their fear of heights overcame them. When I tried it the first time, I had butterflies in my stomach as I crossed one of Everest’s great crevasses on a ladder. My fear was real.

Delivering immersive VR like that is a complex challenge, especially since immersive VR requires roughly seven times the graphics processing power of traditional 3D applications and games. To make games look their best on today’s GPUs, you need some smart technologies to maximize performance.

With our Maxwell architecture, we broke new ground when we invented Multi-Res Shading — an innovative rendering technique for VR in which each part of an image is rendered at a resolution that better matches the pixel density of the warped image. Multi-Res Shading uses Maxwell’s multi-projection architecture to render multiple scaled viewports in a single pass, delivering substantial performance improvements. It is used by Everest VR, VR Funhouse, Pool Nation VR and Raw Data to take their graphics to new heights.
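The pixel savings from rendering scaled viewports can be illustrated with a short back-of-the-envelope sketch. The 3x3 grid split, render-target size and scale factors below are illustrative assumptions for exposition only, not Solfar’s or NVIDIA’s actual settings:

```python
# Illustrative sketch of Multi-Res Shading's pixel savings.
# The eye buffer is split into a 3x3 grid; the center region keeps
# full resolution while the peripheral regions are rendered at a
# reduced scale, since the lens warp compresses them anyway.
# All numbers here are assumptions chosen for illustration.

EYE_W, EYE_H = 1512, 1680     # per-eye render target (example size)
CENTER_FRAC = 0.6             # fraction of each axis kept at full res
OUTER_SCALE = 0.5             # per-axis resolution scale at the periphery

def shaded_pixels(eye_w, eye_h, center_frac, outer_scale):
    """Pixels actually shaded with a 3x3 multi-resolution grid."""
    full = eye_w * eye_h
    center = (eye_w * center_frac) * (eye_h * center_frac)
    # Everything outside the center is shaded at outer_scale per axis,
    # i.e. outer_scale**2 of its native pixel count.
    outer = (full - center) * outer_scale**2
    return center + outer

full = EYE_W * EYE_H
multi_res = shaded_pixels(EYE_W, EYE_H, CENTER_FRAC, OUTER_SCALE)
print(f"full-res pixels:  {full}")
print(f"multi-res pixels: {multi_res:.0f}")
print(f"savings: {100 * (1 - multi_res / full):.0f}%")
```

With these example numbers, roughly half of the pixel-shading work disappears; the real savings depend on the headset’s lens profile and the scale factors a developer picks.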

Lens Matched Shading uses the new Simultaneous Multi-Projection architecture of NVIDIA Pascal-based GPUs to improve upon Multi-Res Shading by rendering to a surface that more closely approximates the lens-corrected image that is output to the headset display. This avoids rendering many pixels that would otherwise be discarded before the image reaches the VR headset. Because the lens warp post-process discards a large number of pixels, Lens Matched Shading can provide substantial performance improvements while still maintaining full pixel coverage of the final warped image. Developers can then spend that reclaimed horsepower elsewhere — to increase graphics settings, add more eye candy or lower GPU requirements.
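Conceptually, Lens Matched Shading works by modifying the clip-space w coordinate per screen quadrant, so that the perspective divide squeezes the periphery of the image toward the center, much like the lens warp itself. The sketch below illustrates that idea in plain Python; on real hardware this happens in the GPU’s projection pipeline, and the coefficient value here is an assumption for illustration:

```python
# Illustrative sketch of the Lens Matched Shading warp, in Python for
# exposition only. The idea: modify the clip-space w coordinate as
#     w' = w + A*x + B*y
# with the signs of A and B chosen per screen quadrant so that w grows
# toward the edges. After the perspective divide, peripheral points are
# pulled inward, approximating the lens-corrected image and leaving
# fewer pixels to shade at the edges. coeff is an assumed value.

def lens_matched_warp(x, y, w, coeff=0.3):
    """Apply a per-quadrant w modification, then the perspective divide."""
    # Pick signs so the w adjustment is positive in every quadrant.
    a = coeff if x >= 0 else -coeff
    b = coeff if y >= 0 else -coeff
    w_mod = w + a * x + b * y
    return x / w_mod, y / w_mod   # post-divide (warped) coordinates

# A point near the edge of the image is compressed noticeably, while a
# point near the center is almost unchanged:
print(lens_matched_warp(0.9, 0.0, 1.0))  # edge point pulled inward
print(lens_matched_warp(0.1, 0.0, 1.0))  # center nearly unchanged
```

Because the warp only scales w, straight edges stay straight within each quadrant, which is what lets the hardware rasterize the warped surface directly instead of discarding pixels after the fact.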

Solfar’s use of Lens Matched Shading in their Everest VR experience provides up to a 15% performance improvement over Multi-Res Shading, allowing users to increase quality settings such as supersampling and weather effects, significantly improving the overall experience in VR.

Solfar’s use of Lens Matched Shading in Everest VR to avoid the cost of rendering unnecessary pixels is the latest example of how smart rendering can be used to increase presence in VR.

Developers can learn more about these kinds of techniques on our developer website.