Grab the steering wheel. Step on the accelerator. Take a joyride through a 3D urban neighborhood that looks like Tokyo, or New York, or maybe Rio de Janeiro — all imagined by AI.
At this week’s NeurIPS conference, we introduced AI research that allows developers to render fully synthetic, interactive 3D worlds. While still at an early stage, this work shows promise for a variety of applications, including VR, autonomous vehicle development and architecture.
The tech is among several NVIDIA projects on display here in Montreal. Attendees huddled around a green and black racing chair in our booth have been wowed by the demo, which lets drivers navigate around an eight-block world rendered by the neural network.
Visitors to the booth hopped into the driver’s seat to tour the virtual environment. Azin Nazari, a University of Waterloo grad student, was impressed with the AI-painted scene, which can switch between the streets of Boston or Germany, or even the Grand Theft Auto game environment at sunset.
The demo uses Unreal Engine 4 to generate semantic layouts of scenes. A deep neural network trained on real-world videos fills in the features — depicting an urban scene filled with buildings, cars, streets and other objects.
This is the first time neural networks have been used with a computer graphics engine to render new, fully synthetic worlds, say NVIDIA researchers Ting-Chun Wang and Ming-Yu Liu.
“With this ability, developers will be able to rapidly create interactive graphics at a much lower cost than traditional virtual modeling,” Wang said.
Called vid2vid, the AI model behind this demo uses a deep learning method known as GANs to render photorealistic videos from high-level representations like semantic layouts, edge maps and poses. As the deep learning network trains, it becomes better at making videos that are smooth and visually coherent, with minimal flickering between frames.
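A generator conditioned on a semantic layout typically receives that layout as a one-hot tensor, one channel per object class. The sketch below shows that conversion on a toy label map; the class IDs, map size and shapes are illustrative assumptions, not the actual vid2vid interface.

```python
import numpy as np

# Toy semantic label map: each pixel holds an integer class ID.
# The classes here (0=road, 1=building, 2=car, 3=sky) are made up
# for illustration, not the actual vid2vid label set.
NUM_CLASSES = 4
label_map = np.array([
    [3, 3, 3, 3],
    [1, 1, 3, 3],
    [1, 2, 0, 0],
    [0, 0, 0, 0],
])

def one_hot(labels, num_classes):
    """Expand an (H, W) integer label map into an (H, W, C) one-hot
    tensor -- the conditioning input a layout-to-image generator
    typically consumes, one channel per semantic class."""
    h, w = labels.shape
    out = np.zeros((h, w, num_classes), dtype=np.float32)
    out[np.arange(h)[:, None], np.arange(w)[None, :], labels] = 1.0
    return out

cond = one_hot(label_map, NUM_CLASSES)
print(cond.shape)           # (4, 4, 4)
print(cond[2, 1].tolist())  # the "car" pixel -> [0.0, 0.0, 1.0, 0.0]
```

In the demo pipeline, Unreal Engine 4 produces such layouts frame by frame, and the trained generator paints each one into a photorealistic image.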
The researchers’ model sets a new state of the art, synthesizing 30-second street scene videos at 2K resolution. Trained on different video sequences, it can paint scenes that look like different cities around the world.
For those in Montreal this week, stop by our booth — No. 209 — to sit behind the wheel and try it out for yourself.
A TITAN Tangles with DOPE
Across the booth, attendees are flocking to a table stacked with an odd assortment of items: cans of tomato soup and Spam, a box of crackers, a mustard bottle. It may not sound like much, but this demo is DOPE. Literally.
DOPE, or Deep Object Pose Estimation, is an algorithm that detects the pose of known objects using a single RGB camera. It’s an ability that’s essential for robots to grasp these objects.
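The geometry behind this is the pinhole camera model: for an object whose dimensions are known, a pose (rotation and translation) determines exactly where its 3D keypoints land in the image, and a pose solver inverts that mapping from the network’s 2D detections. A minimal forward-projection sketch, with made-up object dimensions and camera intrinsics:

```python
import numpy as np

# Known object: the eight corners of a cracker-box-sized cuboid in
# the object's own frame. Dimensions (meters) are illustrative.
w, h, d = 0.16, 0.21, 0.07
corners = np.array([[sx * w / 2, sy * h / 2, sz * d / 2]
                    for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])

# Hypothetical camera intrinsics (focal lengths and principal
# point are invented for this sketch).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points, R, t, K):
    """Project 3D points to pixel coordinates with a pinhole
    camera: transform to the camera frame, apply intrinsics,
    then divide by depth."""
    cam = points @ R.T + t           # object frame -> camera frame
    pix = cam @ K.T                  # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]  # perspective divide

# Object half a meter in front of the camera, unrotated.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])
uv = project(corners, R, t, K)
print(uv.shape)  # (8, 2) pixel coordinates
```

Pose estimation runs this map in reverse: given the predicted 2D corner locations and the known 3D corners, a perspective-n-point (PnP) solver recovers R and t.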
Giving new meaning to “hands-on demos,” booth visitors can pick up the cracker box and cans, moving them across the table and changing their orientation. A screen above displays the neural network’s inferences, tracking the objects’ edges as they shift around the scene.
“It’s a $30 camera, very cheap, very accessible for anyone interested in robotics,” said NVIDIA researcher Jonathan Tremblay. The tool, trained entirely on computer-generated image data, is publicly available on GitHub.
Booth visitors can also feast their eyes on stunning demos of real-time ray tracing. Running on a single Quadro RTX 6000 GPU, our Star Wars demo features beautiful, cinema-quality reflections enabled by NVIDIA RTX technology.
And while a few conspiracy theorists still question whether the Apollo 11 mission actually landed on the moon, a ray-traced recreation of one iconic lunar landing image shows that the photo looks just as it should if it were taken on the moon.
Data scientists exploring the booth will see the new TITAN RTX in action with the RAPIDS data analytics software, quickly manipulating a dataset of all movie ratings by IMDb users. Other demos showcase the computing power NVIDIA TensorRT software provides for inference, both in data centers and at the edge.
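RAPIDS’ cuDF library mirrors the pandas DataFrame API on the GPU, so the shape of the demo’s workload can be sketched in plain pandas; the tiny ratings table below is invented, not the IMDb dataset from the demo.

```python
import pandas as pd

# Made-up ratings data standing in for the demo's IMDb dataset.
ratings = pd.DataFrame({
    "movie": ["A", "A", "B", "B", "B", "C"],
    "rating": [8, 6, 7, 9, 8, 5],
})

# Mean rating and vote count per movie -- the kind of groupby
# aggregation RAPIDS accelerates across millions of rows on a GPU.
summary = ratings.groupby("movie")["rating"].agg(["mean", "count"])
print(summary)
```

With RAPIDS installed, swapping `import pandas as pd` for `import cudf as pd` runs largely the same code on the GPU.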
The NVIDIA booth at NeurIPS is open all week from 10am to 7pm. For more, see our full schedule of activities.