Checking the Rearview Mirror: NVIDIA DRIVE Labs Looks Back at Year of Self-Driving Software Development

Video series highlights the challenges and innovations in autonomous driving.
by Neda Cvijetic

The NVIDIA DRIVE Labs video series provides an inside look at how self-driving software is developed. One year and 20 episodes later, it’s clear there’s nearly endless ground to cover.

The series dives into topics ranging from 360-degree perception to panoptic segmentation, and even predicting the future. Autonomous vehicles are one of the great computing challenges of our time, and we’re approaching software development one building block at a time.

DRIVE Labs is meant to inform and educate. Whether you’re just beginning to learn about this transformative technology or have been working on it for a decade, the series is a window into what we at NVIDIA view as the most important development challenges and how we’re approaching them for safer, more efficient transportation.

Here’s a brief look at what we’ve covered this past year, and how we’re planning for the road ahead.

A Cross-Section of Perception Networks

Before a vehicle plans a path and executes a driving decision, it must be able to see and understand the entire environment around it.

DRIVE Labs has detailed a variety of the deep neural networks responsible for vehicle perception. Our approach relies on redundant and diverse DNNs: our models cover a range of capabilities, like detecting intersections, traffic lights and traffic signs, and understanding intersection structure. Individual networks can also handle multiple tasks at once, like spotting parking spaces or detecting whether sensors are obstructed.
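To make the multi-task idea concrete, here's a minimal PyTorch sketch of one shared backbone feeding two task heads. The layer sizes, head names and tasks are illustrative assumptions, not the architecture of any DRIVE network.

```python
import torch
import torch.nn as nn

class MultiTaskPerceptionNet(nn.Module):
    """Toy multi-task network: one shared backbone, two task heads.
    A sketch of the multi-task idea only, not an NVIDIA DRIVE model."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Shared convolutional backbone extracts features once per image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Head 1: classify the scene (e.g., parking space present or not).
        self.parking_head = nn.Linear(32, num_classes)
        # Head 2: predict whether the sensor is obstructed (single logit).
        self.blockage_head = nn.Linear(32, 1)

    def forward(self, image: torch.Tensor):
        features = self.backbone(image)
        return self.parking_head(features), self.blockage_head(features)

# One forward pass through the shared backbone serves both tasks.
net = MultiTaskPerceptionNet()
parking_logits, blockage_logit = net(torch.randn(1, 3, 64, 64))
```

Sharing a backbone this way is what makes it practical to run many tasks in parallel within the vehicle's compute budget.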

These DNNs do more than draw bounding boxes around pedestrians and traffic signals. They can classify images pixel by pixel for finer-grained accuracy, and even track those pixels through time for precise position information.
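As a rough illustration of pixel-level perception, the sketch below decodes dense per-pixel class scores into a label map, then compares labels across two frames as a crude stand-in for temporal tracking. The tensor shapes and class list are assumptions for illustration, not DRIVE network outputs.

```python
import numpy as np

# A segmentation DNN emits one score per class per pixel; the argmax over
# the class axis yields a dense label map.
CLASSES = ["background", "road", "pedestrian", "traffic_sign"]

logits = np.random.randn(2, 4, 120, 160)  # (frames, classes, height, width)
label_maps = logits.argmax(axis=1)        # (frames, height, width)

# Pixels labeled the same class in consecutive frames can be associated to
# estimate motion over time (real systems use far richer temporal models).
stable = label_maps[0] == label_maps[1]
print(f"{stable.mean():.0%} of pixels kept the same label across frames")
```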

For nighttime driving, AutoHighBeamNet enables automated vehicle headlight control, while our active learning approach improves pedestrian detection in the dark.
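Active learning, in broad strokes, means spending the labeling budget on the frames the model is least sure about. Below is a minimal sketch of that selection step; the entropy-based uncertainty score and the batch size are assumptions, not NVIDIA's published selection criterion.

```python
import numpy as np

# Rank unlabeled night frames by model uncertainty and send the most
# uncertain ones for human labeling.
rng = np.random.default_rng(0)
confidences = rng.uniform(0.01, 0.99, size=1000)  # pedestrian score per frame

# Binary entropy peaks at confidence 0.5, where the model is least sure.
entropy = -(confidences * np.log(confidences)
            + (1 - confidences) * np.log(1 - confidences))

label_budget = 32
to_label = np.argsort(entropy)[-label_budget:]  # most uncertain frames
print("frames selected for labeling:", to_label[:5], "...")
```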

DNNs also make it possible to extract 3D distances from 2D camera images for accurate motion planning.
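One classical way to see how a 3D distance can fall out of a single 2D image is the pinhole camera model: if an object's real-world size and the camera's focal length are known, distance follows from similar triangles. The numbers below are illustrative, and production systems like ours learn distance with a DNN rather than relying on this geometry alone.

```python
# Pinhole model: an object of real height H (meters), imaged at h pixels
# tall by a camera with focal length f (pixels), sits at distance f * H / h.

def distance_from_height(focal_px: float, real_height_m: float,
                         pixel_height: float) -> float:
    """Distance from similar triangles in the pinhole camera model."""
    return focal_px * real_height_m / pixel_height

# A pedestrian ~1.7 m tall spanning 85 px under a 1000 px focal length:
print(f"{distance_from_height(1000.0, 1.7, 85.0):.1f} m")  # -> 20.0 m
```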

And our perception capabilities operate all around the vehicle. With surround camera object tracking and surround camera-radar fusion, we ensure there are no perception blind spots.
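A toy picture of late camera-radar fusion: match each camera detection to the radar return closest in bearing, then take the range from radar, which measures distance more directly than a camera. The gating threshold and data layout here are assumptions for illustration, not the DRIVE fusion pipeline.

```python
# Associate camera detections with radar returns by azimuth, then fuse.
camera_dets = [{"id": "ped_1", "azimuth_deg": 12.4},
               {"id": "car_7", "azimuth_deg": -30.1}]
radar_returns = [{"azimuth_deg": 12.9, "range_m": 18.2},
                 {"azimuth_deg": -29.5, "range_m": 42.7}]

GATE_DEG = 2.0  # maximum angular separation to accept a match

for det in camera_dets:
    best = min(radar_returns,
               key=lambda r: abs(r["azimuth_deg"] - det["azimuth_deg"]))
    if abs(best["azimuth_deg"] - det["azimuth_deg"]) <= GATE_DEG:
        print(f'{det["id"]}: fused range {best["range_m"]} m')
```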

Predicting the Road Ahead

In addition to perceiving their environment, autonomous vehicles must be able to anticipate how other road actors will behave in order to plan a safe path forward.

With recurrent neural networks, DRIVE Labs has shown how a self-driving car can use past observations of an object's motion to predict where that object will move next.
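Here's a minimal sketch of that recurrent idea, assuming a GRU over a short history of 2D positions and a one-step prediction horizon. The real DRIVE networks and their inputs are considerably richer; everything here is illustrative.

```python
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    """Toy recurrent predictor: read a track of past (x, y) positions and
    regress the next position from the final hidden state."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # predict the next (x, y)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        _, h_n = self.gru(history)        # h_n: (1, batch, hidden)
        return self.head(h_n.squeeze(0))  # (batch, 2)

model = MotionPredictor()
past_track = torch.randn(1, 10, 2)  # one object, 10 past positions
next_position = model(past_track)   # predicted (x, y) one step ahead
```

Rolling the prediction back into the input lets the same model forecast several steps ahead, which is what path planning actually consumes.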

Our Safety Force Field collision avoidance software adds diversity and redundancy to planning and control. It runs constantly in the background, double-checking the primary system's controls and vetoing any action it deems unsafe.
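The veto logic can be illustrated with a simplified one-dimensional following scenario: accept the primary planner's command only if the ego vehicle could still brake to a stop behind the lead vehicle's worst-case stopping point. The dynamics, braking limits and safety margin below are assumptions for the sketch, not the published Safety Force Field formulation.

```python
# Toy 1-D safety monitor: check a proposed acceleration against worst-case
# stopping distances, and override it with full braking if unsafe.

def stopping_distance(speed: float, max_brake: float) -> float:
    return speed * speed / (2.0 * max_brake)

def veto_unsafe(ego_speed: float, lead_speed: float, gap: float,
                proposed_accel: float, dt: float = 0.1,
                max_brake: float = 6.0, margin: float = 2.0) -> float:
    """Return the proposed acceleration, or full braking if unsafe."""
    next_speed = max(0.0, ego_speed + proposed_accel * dt)
    travel = 0.5 * (ego_speed + next_speed) * dt
    ego_stop = travel + stopping_distance(next_speed, max_brake)
    lead_stop = stopping_distance(lead_speed, max_brake)
    if ego_stop + margin > gap + lead_stop:
        return -max_brake  # veto: override with the safety action
    return proposed_accel

# The primary planner wants to keep accelerating toward a slow lead vehicle;
# the monitor vetoes and commands full braking (-6.0 m/s^2).
print(veto_unsafe(ego_speed=20.0, lead_speed=5.0, gap=30.0,
                  proposed_accel=1.0))
```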

These DNNs and software components are just a sampling of the development work that goes into an autonomous vehicle. This monumental challenge requires rigorous training and testing, both in the data center and in the vehicle. And as transportation continues to change, the vehicle's software must be able to adapt.

We’ll explore these topics and more in upcoming DRIVE Labs episodes. As we continue to advance self-driving car software development, we’ll share those insights with you.