A Whole New Game: NVIDIA Research Brings AI to Computer Graphics

The same GPUs that put games on your screen could soon harness AI to help game and film makers move faster, spend less and create richer experiences.

At SIGGRAPH 2017 this week, NVIDIA is showcasing research that makes it far easier to animate realistic human faces, simulate how light interacts with surfaces in a scene and render realistic images more quickly.

NVIDIA is combining our expertise in AI with our long history in computer graphics to advance 3D graphics for games, virtual reality, movies and product design.

Forward Facing

Game studios create animated faces by recording video of actors performing every line of dialogue for every character in a game. They use software to turn that video into a digital double of the actor, which later becomes the animated face.

Existing software requires artists to spend hundreds of hours revising these digital faces to more closely match the real actors. It’s tedious work for artists and costly for studios, and it’s hard to change once it’s done.

Reducing the amount of labor involved in creating facial animation would let game artists add more character dialogue and additional supporting characters, as well as give them the flexibility to quickly iterate on script changes.

Remedy Entertainment — best known for games like Quantum Break, Max Payne and Alan Wake — approached NVIDIA Research with an idea to help them produce realistic facial animation for digital doubles with less effort and at lower cost.

Using AI, researchers automated the task of converting live actor performances (left) to computer game virtual characters (right).

Artificially Intelligent Game Faces

Using Remedy’s vast store of animation data, NVIDIA GPUs, and deep learning, NVIDIA researchers Samuli Laine, Tero Karras, Timo Aila, and Jaakko Lehtinen trained a neural network to produce facial animations directly from actor videos.

Instead of having to perform labor-intensive data conversion and touch-up for hours of actor videos, NVIDIA’s solution requires only five minutes of training data. The trained network automatically generates all the facial animation needed for an entire game from a simple video stream. NVIDIA’s AI solution produces animation that is more consistent than existing methods while retaining the same fidelity.
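The blog doesn’t describe the network itself, but the shape of the problem — regressing facial rig parameters from video frames — can be sketched. The toy Python example below is purely illustrative (the feature extractor, layer sizes and blendshape count are assumptions, not Remedy’s or NVIDIA’s pipeline): it maps one grayscale frame to a vector of blendshape weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(frame):
    # Toy stand-in for a convolutional encoder: average-pool 8x8 patches.
    h, w = frame.shape
    return frame.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3)).ravel()

class BlendshapeRegressor:
    """Hypothetical regressor from a video frame to facial rig weights."""

    def __init__(self, n_features, n_blendshapes):
        # In a real system these weights would be learned from actor
        # footage; here they are random placeholders.
        self.W = rng.normal(0.0, 0.01, (n_blendshapes, n_features))

    def predict(self, frame):
        z = self.W @ extract_features(frame)
        return 1.0 / (1.0 + np.exp(-z))  # blendshape weights in [0, 1]

frame = rng.random((64, 64))  # one grayscale video frame
model = BlendshapeRegressor(n_features=64, n_blendshapes=50)
weights = model.predict(frame)
print(weights.shape)  # one weight per blendshape: (50,)
```

The appeal of the learned approach is that, once trained on a few minutes of footage, the same mapping runs over every remaining frame automatically.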

The research team then pushed further, training a system to generate realistic facial animation using only audio. With this tool, game studios will be able to add more supporting game characters, create live animated avatars, and more easily produce games in multiple languages.

Toward a New Era in Gaming 

Antti Herva, lead character technical artist at Remedy, said that over time, the new methods will let the studio build larger, richer game worlds with more characters than are now possible. Already, the studio is creating high-quality facial animation in much less time than in the past.

“Based on the NVIDIA Research work we’ve seen in AI-driven facial animation, we’re convinced AI will revolutionize content creation,” said Herva. “Complex facial animation for digital doubles like that in Quantum Break can take several man-years to create. After working with NVIDIA to build video- and audio-driven deep neural networks for facial animation, we can reduce that time by 80 percent in large scale projects and free our artists to focus on other tasks.”

Creating Images with AI

AI also holds promise for rendering 3D graphics, the process that turns digital worlds into the lifelike images you see on screen. Film makers and designers use a technique called ray tracing to simulate light reflecting from surfaces in a virtual scene. NVIDIA is using AI to improve both ray tracing and rasterization, a less costly rendering technique used in computer games.
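At its core, ray tracing intersects rays with scene geometry and follows the light that bounces there. A minimal sketch of that geometric step — intersecting a ray with a sphere — looks like this (illustrative only, not NVIDIA’s renderer):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along a unit-length ray to the nearest sphere hit, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a = 1 for unit rays)
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray from the origin straight down +z hits a unit sphere centered at z = 5.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A production renderer runs billions of such tests, then spawns new rays at each hit point — which is where the computational cost comes from.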

Although ray tracing generates highly realistic images, simulating millions of virtual light rays for each image carries a large computational cost. Partially computed images appear noisy, like a photograph taken in extremely low light.
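Why partially computed images look noisy can be seen in a toy Monte Carlo estimator: each pixel averages many random ray contributions, and the error shrinks only with the square root of the ray count. The setup below is illustrative, not a real renderer:

```python
import numpy as np

rng = np.random.default_rng(42)

def render_pixel(n_rays):
    # Toy estimator: a pixel's value is the average of n random ray contributions.
    return rng.random(n_rays).mean()

# The fully converged value is the mean of the contribution distribution: 0.5.
errors = {}
for n in (16, 256, 4096):
    estimates = np.array([render_pixel(n) for _ in range(1000)])
    errors[n] = np.abs(estimates - 0.5).mean()
    print(f"{n:5d} rays per pixel -> mean error {errors[n]:.4f}")
```

Cutting the error in half costs four times the rays, which is why predicting the converged image directly from a partial result is so attractive.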

To denoise these images, researchers used deep learning on GPUs to predict final, rendered images from partly finished results. Led by Chakravarty R. Alla Chaitanya, an NVIDIA research intern from McGill University, the research team created an AI solution that generates high-quality images from noisier, more approximate input images in a fraction of the time required by existing methods.
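NVIDIA’s denoiser is a deep convolutional network trained on pairs of rendered images. As a much smaller illustration of the same idea — learning a mapping from noisy renders to clean ones — the sketch below fits a single 3x3 linear filter by least squares on synthetic data. Everything here is an illustrative assumption, not the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def box3(img):
    # 3x3 box blur with edge padding, used to make smooth "converged renders".
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9

# Synthetic data: smooth target images plus Monte Carlo-style noise.
clean = np.stack([box3(rng.random((16, 16))) for _ in range(64)])
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

# Collect every 3x3 noisy patch together with its clean center pixel.
X, y = [], []
for c, n in zip(clean, noisy):
    p = np.pad(n, 1, mode="edge")
    for r in range(16):
        for s in range(16):
            X.append(p[r:r + 3, s:s + 3].ravel())
            y.append(c[r, s])
X, y = np.array(X), np.array(y)

# Least squares finds the 3x3 filter that best maps noisy input to clean output.
k, *_ = np.linalg.lstsq(X, y, rcond=None)

denoised = (X @ k).reshape(clean.shape)
mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
print(f"MSE before: {mse_before:.4f}  after: {mse_after:.4f}")
```

A deep network replaces the single learned filter with many nonlinear layers and extra inputs such as normals and depth, but the training objective — minimize the difference from the converged image — is the same in spirit.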

This work is more than a research project. It’ll soon be a product. Today we announced the NVIDIA OptiX 5.0 software development kit, the latest version of our ray tracing engine. OptiX 5.0, which incorporates the NVIDIA Research AI denoising technology, will be available at no cost to registered developers in November.

AI Smooths out Rough Edges

NVIDIA researchers used AI to tackle a problem in computer game rendering known as aliasing. Anti-aliasing is another way to reduce noise — in this case, the jagged edges in partially rendered images. Called “jaggies,” these are staircase-like lines that appear where smooth lines should be. (See left inset in image below.)

NVIDIA researchers Marco Salvi and Anjul Patney trained a neural network to recognize these artifacts and replace those pixels with smooth anti-aliased pixels. The AI-based technology produces sharper images than existing algorithms.

The left inset shows an aliased image that is jaggy and pixelated. NVIDIA’s AI anti-aliasing algorithm produced the larger image and inset on the right by learning the mapping from aliased to anti-aliased images. Image courtesy of Epic Games.
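The conventional fix the network learns to imitate is supersampling: averaging many sub-pixel samples so edge pixels take fractional coverage values instead of hard 0/1 steps. A toy Python illustration of that effect (not the researchers’ network):

```python
import numpy as np

def render_edge(width, height, spp):
    """Rasterize the half-plane y < 0.37*x + 2 with spp*spp samples per pixel."""
    img = np.zeros((height, width))
    offsets = (np.arange(spp) + 0.5) / spp  # sub-pixel sample positions
    for py in range(height):
        for px in range(width):
            sx = px + offsets[:, None]
            sy = py + offsets[None, :]
            inside = sy < 0.37 * sx + 2
            img[py, px] = inside.mean()  # fractional edge coverage
    return img

aliased = render_edge(32, 16, spp=1)  # one sample per pixel: hard 0/1 jaggies
smooth = render_edge(32, 16, spp=8)   # 64 samples per pixel: graded edges
print("aliased pixel values:", np.unique(aliased))
```

Supersampling multiplies the rendering cost by the sample count; a network that maps the one-sample image directly to the smooth result gets the quality without that cost.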

How AI Traces the Right Rays

NVIDIA is developing more efficient methods to trace virtual light rays. Computers sample the paths of many light rays to generate a photo-realistic image. The problem is that not all of those light paths contribute to the final image.

Researchers Ken Dahm and Alex Keller used machine learning to guide the choice of light paths. They accomplished this by connecting the mathematics of tracing light rays to the AI concept of reinforcement learning.

Their solution learns to distinguish the “useful” paths — those most likely to connect lights with virtual cameras — from paths that don’t contribute to the image.

Simulating light reflections in this virtual scene — shown without denoising — is challenging because the only light comes through the narrowly opened door. NVIDIA’s AI-guided light simulation delivers up to 10x faster image synthesis by reducing the required number of virtual light rays.
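The researchers’ method learns a value function over scene positions and directions that is tied to the rendering equation itself. The bandit-style toy below is a drastic simplification, not their algorithm, but it shows the core reinforcement-learning loop: sample directions in proportion to their learned value, then update that value toward the radiance the traced ray actually observed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scene: from one shading point there are 8 candidate directions, and only
# direction 5 reaches the light (say, through the narrow door opening).
def trace(direction):
    return 1.0 if direction == 5 else 0.0

Q = np.ones(8)   # optimistic initial value estimate for every direction
alpha = 0.3      # learning rate

for _ in range(200):
    # Importance-sample a direction in proportion to its current value.
    pdf = Q / Q.sum()
    d = rng.choice(8, p=pdf)
    # Update the value estimate toward the radiance the traced ray observed.
    Q[d] = (1 - alpha) * Q[d] + alpha * trace(d)

print("learned sampling pdf:", np.round(Q / Q.sum(), 2))
```

After a few hundred samples the probability mass concentrates on the direction that reaches the light, so far fewer rays are wasted on paths that contribute nothing — the source of the up-to-10x speedup claimed above.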

NVIDIA AI Research at SIGGRAPH

At SIGGRAPH, you can learn more about how AI is changing computer graphics by visiting us at Booth #403 starting Tuesday, and by attending NVIDIA’s SIGGRAPH AI research talks:

Tuesday, Aug. 1

Wednesday, Aug. 2

Thursday, Aug. 3

 

The image at the top of this blog appears in a SIGGRAPH paper by NVIDIA researchers who used artificial intelligence to accelerate image synthesis by converting the partially rendered image (left) to the final image (right).
