A new approach to autonomous driving is pursuing a solo career.
Researchers at MIT are developing a single deep neural network (DNN) to power autonomous vehicles, rather than a system of multiple networks. The research, presented this week at COMPUTEX, used NVIDIA DRIVE AGX Pegasus to run the network in the vehicle, processing mountains of lidar data efficiently and in real time.
AV sensors generate an enormous amount of data — a fleet of just 50 vehicles driving six hours a day generates about 1.6 petabytes of sensor data a day. If all that data were stored on 1GB flash drives, they’d cover more than 100 football fields.
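To put those figures in perspective, the fleet numbers quoted above imply a per-vehicle data rate of well over a gigabyte per second. A quick back-of-envelope check (using only the article's stated figures; the decimal PB-to-GB conversion is an assumption):

```python
# Back-of-envelope check of the fleet data volume quoted above.
# Figures from the article: 50 vehicles, 6 hours/day, 1.6 PB/day total.
FLEET_SIZE = 50
HOURS_PER_DAY = 6
TOTAL_PB_PER_DAY = 1.6

total_gb = TOTAL_PB_PER_DAY * 1_000_000          # 1 PB = 1,000,000 GB (decimal units)
gb_per_vehicle_hour = total_gb / (FLEET_SIZE * HOURS_PER_DAY)
gb_per_vehicle_second = gb_per_vehicle_hour / 3600

print(f"{gb_per_vehicle_hour:,.0f} GB per vehicle-hour")     # ~5,333 GB
print(f"{gb_per_vehicle_second:.2f} GB per vehicle-second")  # ~1.48 GB/s
```

Roughly 1.5 GB of raw sensor data every second, per car, is what the onboard compute has to ingest without falling behind.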
Self-driving cars must process this data instantaneously to perceive and safely navigate their surrounding environment. However, due to the volume of data, it’s extremely difficult for a single DNN to perform this processing, which is why most approaches use multiple networks and high-definition maps.
In their paper, the MIT team detailed how it’s attempting a new self-driving strategy with a single DNN, beginning with the task of real-time lidar sensor data processing.
By leveraging the high-performance, energy-efficient NVIDIA DRIVE AGX Pegasus, the team engineered new accelerations for lidar computation that meet, and even exceed, real-time requirements, running 15 times faster than current state-of-the-art systems.
Many AV systems in development today leverage a high-definition map in addition to an array of DNNs to process sensor data. The combination enables an AV to quickly locate itself in space and detect other road users, traffic signs and other objects.
While this method provides the redundancy and diversity necessary for safe autonomous driving, it’s difficult to implement in areas that haven’t been mapped.
Furthermore, AV systems that leverage lidar sensing need to process more than 2 million points in their surroundings every second. Unlike dense 2D image data, lidar points are extremely sparse in 3D space, presenting a huge challenge for modern compute hardware, whose architectures are not tailored to this form of data.
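The sparsity problem is easy to see with a toy experiment: voxelize a lidar-sized batch of points and count how many cells of the 3D grid are actually occupied. The scene dimensions, point count, and 20 cm voxel size below are illustrative assumptions, not figures from the MIT paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scan": 120,000 points scattered in a 100 m x 100 m x 10 m volume
# (illustrative numbers, not from the paper).
points = rng.uniform([-50, -50, -2], [50, 50, 8], size=(120_000, 3))

VOXEL = 0.2  # 20 cm voxels, a common resolution for lidar grids

# Map each point to an integer voxel index, then count unique occupied voxels.
voxel_idx = np.floor(points / VOXEL).astype(np.int64)
occupied = np.unique(voxel_idx, axis=0).shape[0]

total_voxels = int((100 / VOXEL) * (100 / VOXEL) * (10 / VOXEL))
print(f"occupied: {occupied:,} of {total_voxels:,} voxels "
      f"({100 * occupied / total_voxels:.3f}%)")
```

Well under 1 percent of the grid is occupied, which is why dense convolutions waste most of their work on empty cells and why purpose-built sparse kernels can yield such large speedups.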
The MIT team developed optimizations that deliver substantial speed and energy-efficiency gains over baseline architectures.
MIT’s DNN is designed to perform the same functions as an entire self-driving system. This complete functionality is achieved by training the network on enormous amounts of human driving data, teaching it to approach driving holistically as a human driver would, rather than breaking it out into specific tasks.
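The core idea, a single network mapping raw sensor input directly to driving commands, can be sketched in a few lines. This is a hypothetical toy policy, not MIT's architecture: the layer sizes, the 512-dim sensor embedding, and the two-output (steering, throttle) head are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy end-to-end policy: one network maps sensor features directly to control
# commands, with no separate perception/planning modules. Layer sizes are
# arbitrary placeholders, not MIT's architecture.
def init_layer(n_in, n_out):
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(512, 64)   # 512-dim sensor embedding -> hidden layer
W2, b2 = init_layer(64, 2)     # hidden layer -> [steering, throttle]

def policy(sensor_features):
    h = np.tanh(sensor_features @ W1 + b1)
    steering, throttle = np.tanh(h @ W2 + b2)  # both bounded to [-1, 1]
    return steering, throttle

# A stand-in for one time step of processed lidar features.
steer, accel = policy(rng.normal(size=512))
print(f"steering={steer:+.3f}, throttle={accel:+.3f}")
```

Training replaces the random weights with ones fit to logged human driving, so the single network learns perception and control jointly instead of as hand-partitioned tasks.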
While this method is still in development, it has significant potential benefits.
Running a single DNN in the vehicle is markedly more efficient than running multiple dedicated networks, opening up compute headroom for other features. It’s also more flexible, since the DNN relies on its training, rather than a map, to navigate unseen roads. The efficiency improvements also allowed much larger amounts of rich perception data to be processed in real time.
Supercharging Performance with NVIDIA DRIVE
MIT found that its DNN is even more adept when paired with high-performance, centralized compute.
NVIDIA DRIVE AGX Pegasus is an AI supercomputer designed for level 4 and level 5 autonomous systems. It uses the power of two NVIDIA Xavier SoCs and two NVIDIA Turing GPUs to achieve an unprecedented 320 trillion operations per second of performance.
MIT researchers set out to develop the DNN on a compute system that wasn’t just powerful, but also common among AV systems currently in development.
“We wanted to have a very flexible and modular AV system, and NVIDIA is the leader in this field,” said Alexander Amini, a Ph.D. student at MIT co-leading the project. “Pegasus is able to handle the input streams from various sensors, making it easy for developers to implement their DNNs.”
The DNN’s lidar perception capabilities are just the beginning of the MIT researchers’ self-driving development goals. Amini said the team is looking to tackle combined sensor streams, more complex interaction with other vehicles, as well as adverse weather conditions — all with NVIDIA DRIVE on board.