Virtual reality is going to get a lot more real.
With a vision of VR serving as the future of computer interfaces, NVIDIA has set its sights on refining the rendering process — to increase throughput, reduce latency and create a more mind-blowing visual experience for users.
That effort was the subject of a presentation Tuesday at the GPU Technology Conference, where Morgan McGuire, a professor of computer science at Williams College who will soon join NVIDIA as a distinguished research scientist, told attendees that there are significant challenges to overcome.
For instance, McGuire said that to match the capabilities of human vision, future graphics systems need to be able to process 100,000 megapixels per second, up from the 450 megapixels per second they’re capable of today.
Doing so will enable the vastly higher display resolutions required, while also pushing rendering latency down from current thresholds of about 20 milliseconds toward a goal of under one millisecond, approaching the limits of human perception.
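A quick back-of-the-envelope calculation using the figures above shows the size of the gap (the arithmetic here is ours, not McGuire's):

```python
import math

# Figures cited in the talk
current_mpix_per_s = 450       # today's VR pixel throughput, megapixels/s
target_mpix_per_s = 100_000    # throughput needed to match human vision
current_latency_ms = 20.0      # typical rendering latency today
target_latency_ms = 1.0        # perceptual goal

throughput_gap = target_mpix_per_s / current_mpix_per_s   # ~222x
latency_gap = current_latency_ms / target_latency_ms      # 20x

print(f"Throughput must grow ~{throughput_gap:.0f}x "
      f"(~{math.log10(throughput_gap):.1f} orders of magnitude)")
print(f"Latency must shrink ~{latency_gap:.0f}x")
```

And both have to happen at once: every extra pixel rendered is a pixel that must also fit inside a far tighter latency budget.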
“We’re about five or six orders of magnitude [between] modern VR systems and what we want,” McGuire said during a well-attended talk. “That’s not an incremental increase.”
What makes latency an even more pressing problem is the fact that as VR systems seek to increase resolution by increasing pixel throughput, they need to avoid adding extra stages that fuel latency.
“You can’t process the first pixel of the next stage until you’ve completed the final pixel in the previous stage,” McGuire said.
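McGuire's point can be sketched numerically: because each stage must finish a frame before the next stage starts it, end-to-end latency is the sum of all stage times, even though frame rate is limited only by the slowest stage. The stage names and timings below are illustrative, not figures from the talk:

```python
# Hypothetical per-frame stage times for a VR pipeline, in milliseconds.
stages_ms = {
    "simulation": 2.0,
    "rasterization": 5.0,
    "deferred_shading": 4.0,
    "post_processing": 3.0,
    "scanout": 4.0,
}

# Latency is the SUM of stages; pipelined frame time is only the MAX.
latency_ms = sum(stages_ms.values())           # 18.0 ms end to end
frame_time_ms = max(stages_ms.values())        # 5.0 ms per frame, pipelined
print(f"Latency: {latency_ms} ms; pipelined frame time: {frame_time_ms} ms")

# Dropping post-rasterization stages cuts latency directly:
trimmed = {k: v for k, v in stages_ms.items()
           if k not in ("deferred_shading", "post_processing")}
print(f"Trimmed latency: {sum(trimmed.values())} ms")
```

This is why simply adding pipeline stages to boost throughput works against the latency goal.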
To bring latency down enough, McGuire said the NVIDIA Research team is, and will be, experimenting in many areas:
- It starts with the renderer, which drives most of the latency. McGuire said the VR industry has largely moved to eliminating post-rasterization stages common in desktop games, such as deferred shading and post-processing effects. This reduces latency, but it also reduces image quality. NVIDIA Research is investigating renderers that achieve high image quality with fewer stages.
- Foveated rendering uses eye-tracking hardware, enabling VR systems to deliver the sharpest resolution to whatever parts of an image the user is looking at, and allowing the rendering process to produce lower-resolution imagery for the rest of the display.
- Rendering and displaying a light field, which extends a two-dimensional image into four dimensions by capturing many viewpoints and angles of the scene (think of a bug’s-eye view), can also bring down latency by allowing the display to react instantly as the viewpoint changes. But it requires enormous throughput, since the system is effectively processing many images at once.
- NVIDIA researchers also have been testing novel designs for HMDs, such as a design McGuire showed that replaces the bulky lens in traditional designs with a thin sheet of holographic glass, enabling the display to change focus as the user’s eyes move to different parts of an image.
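The foveated-rendering idea above can be sketched as a simple shading-rate policy: pixels near the tracked gaze point render at full resolution, and resolution falls off with angular distance. The thresholds, rates and pixels-per-degree figure below are hypothetical:

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, ppd=20.0):
    """Pick a resolution scale for a pixel based on its angular distance
    (eccentricity, in degrees) from the tracked gaze point.
    `ppd` (pixels per degree) and the thresholds are illustrative."""
    eccentricity = math.hypot(px - gaze_x, py - gaze_y) / ppd
    if eccentricity < 5.0:       # fovea: full resolution
        return 1.0
    elif eccentricity < 20.0:    # near periphery: half resolution
        return 0.5
    else:                        # far periphery: quarter resolution
        return 0.25

# A pixel the user is looking at renders at full rate...
print(shading_rate(960, 540, gaze_x=960, gaze_y=540))   # 1.0
# ...while one far in the periphery renders at a fraction of it.
print(shading_rate(0, 0, gaze_x=960, gaze_y=540))       # 0.25
```

Because the fovea covers only a few degrees of the visual field, most of the display can be shaded cheaply without the user noticing.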
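The throughput cost of the light-field approach is easy to see: each of the many captured viewpoints is a full image, so pixel counts multiply. The grid size, resolution and refresh rate below are illustrative, not from the talk:

```python
# A light field extends a 2D image into 4D: for each of U x V viewpoints
# there is a full W x H image, so pixel throughput multiplies accordingly.
views_u, views_v = 9, 9        # grid of captured viewpoints (illustrative)
width, height = 1920, 1080     # resolution of each view
fps = 90                       # VR-class refresh rate

pixels_per_frame = views_u * views_v * width * height
mpix_per_s = pixels_per_frame * fps / 1e6

print(f"{mpix_per_s:,.0f} megapixels/s")  # ~15,000 Mpix/s for this setup
```

Even this modest 9x9 grid lands in the tens of thousands of megapixels per second, which is why light fields push directly against the throughput wall McGuire described.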
Will VR Kill, or Embrace, the Keyboard?
Perhaps the most surprising part of McGuire’s talk was the subject of text. When he brought it up, audience members were momentarily confused, until he explained: if VR becomes the gateway to augmented reality and the interface for everyday computer use, it will one day replace your smartphone, laptop display and PC monitor.
And that means how text is entered and displayed becomes a major consideration: extremely high-quality text rendering will be an essential requirement for future VR systems.
In this scenario, McGuire said, “text is actually the killer app” for VR, hardly what anyone in the room expected to hear about the future of VR graphics.
Naturally, some may worry these improvements will drive up the price tag of a desktop VR system. McGuire declined to speculate on future pricing, but he made it clear the increase won’t be as dramatic as some might fear.