Why Simulated Roads Make Self-Driving Cars Safer

Safety is paramount when developing any vehicle, whether driven by a person or a computer.

During the development of autonomous vehicles, self-driving technologies must be evaluated repeatedly across a wide variety of driving conditions to ensure they are even safer than human-driven cars. Sometimes this means testing in the real world on actual roads. But it is equally important to use simulation to augment actual driven miles.

In particular, simulation is effective for testing dangerous or uncommon driving conditions; its flexibility and versatility make it especially valuable.

Without simulation, it would be far too dangerous to determine how self-driving vehicles react to certain real scenarios — like a child darting out into the street from behind a parked car, or another vehicle running a red light.

Advanced graphics techniques can also replicate existing scenarios and modify them. For example, simulation can create a snowstorm on demand, even if you live in a desert. Or it can position the sun to “blind” a vehicle as it might during sunrise or sunset. And simulation offers a controlled way to model situations that might put test drivers at risk — like putting a patch of black ice on a highway.

Simulation also supports testing a multitude of scenarios in a short amount of time. During the opening keynote at GTC Europe in Munich, NVIDIA CEO Jensen Huang told the crowd that — using super-real-time simulation with NVIDIA DGX and the new TensorRT 3 — engineers could simulate driving 300,000 miles in 5 hours. That’s essentially every paved road in the United States simulated in just two days.
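A quick back-of-the-envelope check shows those figures hang together. The paved-road mileage below is a rough public estimate, not a number from the talk:

```python
# Sanity check of the simulation throughput quoted above.
SIM_MILES = 300_000         # miles simulated in the demo
SIM_HOURS = 5               # wall-clock hours
US_PAVED_MILES = 2_700_000  # rough estimate of US paved-road mileage

rate = SIM_MILES / SIM_HOURS          # simulated miles per hour
hours_needed = US_PAVED_MILES / rate  # hours to cover every paved road
print(f"{rate:,.0f} mi/h -> {hours_needed / 24:.1f} days")
```

At 60,000 simulated miles per hour, covering roughly 2.7 million paved-road miles takes just under two days, consistent with the claim on stage.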

NVIDIA CEO Jensen Huang on stage at GTC Europe describing the awesome simulation power of GPU technologies.

For simulation to be effective for training and testing self-driving vehicles, the digital world must behave like the real world. Detailed graphics and robust physics engines, powered by GPUs, provide the necessary realism.

Once the simulation is created, it must connect with a self-driving system. NVIDIA’s unified GPU architecture makes it easy to move self-driving technologies between simulated environments in the research lab or the data center, and NVIDIA DRIVE PX in the vehicle.

Simulation on Display at GTC

DRIVE PX is an AI car computer that fuses data from various automotive sensors, runs the complex software algorithms required to drive autonomously, and then sends self-driving instructions to the vehicle.

DRIVE PX can also be configured to take in simulated sensor data, and output simulated driving commands. At GTC Europe, several companies gave talks and demonstrations outlining how they leverage DRIVE PX in this way.

IPG uses DRIVE PX in a simulated environment to test pedestrian detection capabilities.

IPG Automotive’s Dominik Dörr spoke about virtual prototypes and sensor models. His company’s automated driving solutions offer self-driving engineers a way to integrate various development efforts in order to test them as a whole. According to Dörr, this allows early testing on individual functions or networks before a complete prototype is finished.

These virtual prototypes run on DRIVE PX, which is configured to navigate simulated environments. DRIVE PX evaluates a simulated environment the same way it would the real world, and issues driving commands accordingly. Through this process, engineers can evaluate whether their new self-driving solutions work properly.
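The closed loop described above can be sketched in miniature. DRIVE PX's actual interfaces are proprietary, so the simulator, driving stack, and command format below are all hypothetical stand-ins; the point is only the shape of the loop: render a frame, compute a command, apply it, repeat.

```python
# Toy closed-loop simulation: simulator renders sensor frames, a driving
# stack (standing in for the software on DRIVE PX) returns commands, and
# the simulator applies them to the vehicle.
from dataclasses import dataclass

@dataclass
class Command:
    steering: float  # positive steers back toward lane center
    throttle: float  # 0..1

class ToySimulator:
    """Minimal stand-in for a physics-based simulator."""
    def __init__(self):
        self.offset = 0.5  # lateral offset from lane center, meters

    def render_frame(self):
        # A real simulator would return camera/lidar/radar data;
        # here the "frame" is just the lateral offset.
        return {"lateral_offset": self.offset}

    def apply(self, cmd: Command, dt: float = 0.1):
        self.offset -= cmd.steering * dt  # crude lateral dynamics

def driving_stack(frame) -> Command:
    # Stand-in for the self-driving software: a proportional
    # lane-keeping controller.
    return Command(steering=2.0 * frame["lateral_offset"], throttle=0.3)

sim = ToySimulator()
for _ in range(100):
    sim.apply(driving_stack(sim.render_frame()))
print(f"final offset: {sim.offset:.4f} m")  # converges toward zero
```

Swapping the toy simulator for a real one, and the toy controller for software running on DRIVE PX, gives the hardware-in-the-loop setup the companies at GTC Europe described.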

Roberto De Vecchi from VI-grade drives a physical simulator that can be used to test lower levels of automation, where the human driver still needs to take the wheel under certain conditions.

Roberto De Vecchi from VI-grade, together with Enrico Busto from partner AddFor, discussed testing both the accuracy of driving software and the software’s impact on the human in the vehicle. To do this, they use driving simulators that combine a human driver’s inputs — for example, the need to take over driving when directed to do so by the vehicle — with self-driving software running on DRIVE PX.

As a result of this testing, companies can determine if the software works properly and evaluate how the vehicle experience feels to the human inside.

Driving with Data

Rodolphe Tchalekian talked about how ESI Group’s simulation software, Pro-SiVIC, creates a real-time, physically realistic, 3D virtual environment for testing and training machine learning algorithms.

When companies want to create new machine learning algorithms for autonomous driving, they need large training datasets. This data, if collected from the real world, must then be painstakingly labeled before the self-driving algorithm can ingest and learn from it. Simulated data, on the other hand, is automatically labeled as it is created, saving massive amounts of time.
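The labeling advantage is easy to illustrate: the simulator's scene description already contains every object's class and position, so labels fall out of frame generation for free. The scene representation and function names below are invented for illustration.

```python
# Why simulated data is "automatically labeled": ground truth comes from
# the same scene description used to render the frame.
import random

def place_objects(n, seed=0):
    """Build a toy scene: n objects with known classes and positions."""
    rng = random.Random(seed)
    classes = ["pedestrian", "car", "cyclist"]
    return [{"cls": rng.choice(classes),
             "x": rng.uniform(0, 100),
             "y": rng.uniform(0, 100)} for _ in range(n)]

def render_labeled_frame(objects):
    # A real simulator would render pixels; we return a placeholder image
    # plus labels derived directly from the scene -- no human annotation.
    image = f"<frame with {len(objects)} objects>"
    labels = [{"class": o["cls"], "center": (o["x"], o["y"])}
              for o in objects]
    return image, labels

image, labels = render_labeled_frame(place_objects(5))
print(len(labels), "labels generated alongside the frame")
```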

Once a new algorithm has trained on the synthetic dataset, ESI uses DRIVE PX to validate that it works properly. In ESI's demo video, you can see this process in action, as a trained machine learning algorithm drives in the Pro-SiVIC simulated environment.

TASS uses DRIVE PX to test lane keeping in a simulated driving environment.

Martijn Tideman from TASS International highlighted his company’s PreScan simulation platform in a session at GTC Europe. PreScan is a physics-based simulation platform for evaluating autonomous driving and other vehicle applications.

PreScan has been used in the past to test driver assistance features and vehicle-to-vehicle communications. More recently, TASS has used PreScan data to train and validate deep learning algorithms required for autonomous driving.

Tideman shared findings from a joint project with the German Research Center for Artificial Intelligence and Siemens, which demonstrated the value of synthetic data on deep learning. The project determined that, when training deep learning driving algorithms, adding synthetic data to real-world data was more effective than using real-world data alone.
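A minimal sketch of that kind of dataset augmentation is below, with invented names and an arbitrary mixing ratio; the study's actual setup and proportions are not described here.

```python
# Augmenting a real-world training set with synthetic samples before
# training, as in the synthetic-data study described above.
def build_training_set(real, synthetic, synthetic_fraction=0.5):
    """Combine real and synthetic samples; synthetic_fraction controls
    how much synthetic data is added relative to the real set's size."""
    n_syn = int(len(real) * synthetic_fraction)
    return real + synthetic[:n_syn]

real = [(f"real_frame_{i}", "label") for i in range(1000)]
synthetic = [(f"sim_frame_{i}", "label") for i in range(5000)]
train = build_training_set(real, synthetic)
print(len(train))  # 1500 samples
```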

Following the event in Munich, NVIDIA hosted the inaugural GPU Technology Conference in Israel. Simulation startup Cognata presented its business strategy to a panel of five judges, ultimately winning the Inception Award competition for promising AI startups.

Cognata uses patented algorithms to create simulated cities with realistic vehicle and pedestrian behavior. The company also reproduces sensor input in the simulated environment, applying deep learning to ensure the simulated sensors behave exactly as they would in the real world.

From training to testing, simulation improves autonomous driving outcomes. It saves time and increases performance during the training process, and facilitates testing scenarios that would be unsafe or impractical in the real world.

It’s necessary to evaluate the performance of new self-driving technologies on actual roads with human safety drivers. But simulation allows us to supplement those real-world driving hours, making roads safer for everyone.

To learn more about the role of GPUs in autonomous driving, and how AI is transforming virtually every industry, join us in Washington for our GTC DC event.

Comments

  • realjjj

    Isn’t it a huge waste to test and validate decision making with raw sensor data? Of course that might not be possible, depending on the system, but feeding it processed sensor data would be more efficient. It would also make it feasible to use the existing fleet for simulation: privately owned cars sit unused about 90% of the time, so why let all that available compute go to waste?
    Training is one thing, but enabling very frequent software updates is an issue even if the focus is on testing corner cases.
