As much of the world continues to conduct business from home, NVIDIA’s autonomous test vehicles are hard at work in the cloud.
During the GTC 2020 keynote, NVIDIA CEO Jensen Huang demonstrated how NVIDIA DRIVE technology is being developed and tested in simulation. While physical testing is temporarily paused, the cloud-based NVIDIA DRIVE Constellation platform makes it possible to dispatch virtual vehicles in virtual environments and keep self-driving development moving forward.
In the video demonstration, a virtual NVIDIA BB8 test vehicle drives near NVIDIA headquarters in Silicon Valley, traveling through highways and urban streets — all in simulation. The 17-mile loop shows the NVIDIA DRIVE AV software navigating roadways and responding to pedestrians and traffic in a highly accurate replica environment.
Data Center Proving Ground
NVIDIA DRIVE Constellation is a cloud-based simulation platform, designed from the ground up to support the development and validation of autonomous vehicles. The data center-based platform consists of two side-by-side servers.
The first server uses NVIDIA GPUs running DRIVE Sim software to generate the sensor output from the virtual car driving in a virtual world. The second server contains the actual vehicle computer, which processes the simulated sensor data while running the exact same DRIVE AV and DRIVE IX software that's deployed in the real car.
The driving decisions from the second server are fed back into the first, enabling real-time, bit-accurate, hardware-in-the-loop development and testing.
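To make that data flow concrete, here is a minimal sketch of the closed loop in Python. The `sim_server` and `vehicle_computer` objects, their methods and the 30 Hz tick rate are hypothetical placeholders standing in for the two servers, not the actual DRIVE Constellation interfaces.

```python
# A minimal sketch of the closed hardware-in-the-loop cycle, assuming a 30 Hz
# sensor tick. All names here are hypothetical placeholders, not the actual
# DRIVE Constellation interfaces.

SIM_RATE_HZ = 30


def run_hil_loop(sim_server, vehicle_computer, duration_s=60.0):
    """One virtual drive: the two servers exchange data every simulation tick."""
    step = 1.0 / SIM_RATE_HZ
    t = 0.0
    controls = vehicle_computer.initial_controls()
    while t < duration_s:
        # Server 1: DRIVE Sim advances the virtual world and renders the
        # sensor output seen by the virtual car.
        sensor_frames = sim_server.step(controls, dt=step)
        # Server 2: the real vehicle computer runs the same DRIVE AV and
        # DRIVE IX software on the simulated frames and returns commands.
        controls = vehicle_computer.process(sensor_frames)
        t += step  # commands feed back into server 1, closing the loop
```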
The system is designed to be deployed in a data center as a scalable virtual fleet. This provides development engineers with a vehicle on demand, and gives them the ability to conduct testing at scale. It also makes it possible to consistently test rare and dangerous scenarios that are difficult or impossible to encounter in the real world.
Development and Testing from End to End
Building an autonomous vehicle requires testing at every level — starting at subsystems and continuing all the way to full vehicle integration tests. DRIVE Constellation enables this kind of end-to-end development and testing in simulation, just as with a physical car.
End-to-end tests ensure timing and performance accuracy as well as accurate modeling of the complex interdependency of different systems in autonomous vehicle software.
Achieving this level of fidelity at scale is a major undertaking. The environment, traffic behavior, sensor inputs and vehicle dynamics must appear, act and feed into the car computer just as they would in the real world.
This requires multiple GPUs to generate synthetic data in sync with precise timing. The vehicle software and hardware signals and interfaces must be replicated in simulation — and everything has to run in real time.
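As an illustration of the real-time constraint, the sketch below paces a fixed 30 Hz tick and flags any frame that misses its deadline. Here `render_frame` is a hypothetical stand-in for the synchronized multi-GPU sensor render.

```python
# Illustrative real-time pacing with a fixed timestep; render_frame is a
# hypothetical stand-in for the synchronized multi-GPU sensor render.

import time

TICK_S = 1.0 / 30  # 30 Hz sensor tick assumed for this sketch


def run_realtime(render_frame, num_ticks=300):
    deadline = time.perf_counter()
    for tick in range(num_ticks):
        render_frame(tick)  # all sensor outputs for this tick, in lockstep
        deadline += TICK_S
        slack = deadline - time.perf_counter()
        if slack > 0:
            time.sleep(slack)  # wait out the remainder of the tick
        else:
            # A miss here means the simulation fell behind the real clock.
            print(f"tick {tick} overran its deadline by {-slack * 1000:.2f} ms")
```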
Simulating Silicon Valley
Comprehensive simulation starts with the environment. To accurately recreate the Silicon Valley driving loop, 3D Mapping, a member of the NVIDIA DRIVE ecosystem, scanned the roadways to an accuracy of 5 centimeters. The raw scan data was then processed into OpenDRIVE, an open format for describing road networks.
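OpenDRIVE files are plain XML, so the resulting map can be inspected with standard tools. A small sketch, assuming a hypothetical file name for the scanned loop:

```python
# Listing the roads in an OpenDRIVE (.xodr) file with the standard library;
# the file name is a hypothetical placeholder for the scanned loop.

import xml.etree.ElementTree as ET

root = ET.parse("silicon_valley_loop.xodr").getroot()

total_m = 0.0
for road in root.findall("road"):
    length_m = float(road.get("length", 0.0))
    total_m += length_m
    print(f"road id={road.get('id')} length={length_m:.1f} m")

print(f"total mapped length: {total_m / 1609.34:.1f} miles")
```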
From there, NVIDIA developed a content creation pipeline to generate a highly accurate 3D environment using the NVIDIA Omniverse collaboration platform. The environment includes accurate road networks and road markings. Material properties are also applied so that surfaces interact with light rays, radio waves and lidar beams just as they do in the physical world.
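The sketch below illustrates the idea of per-surface material response across the three sensing modalities. The field names and values are assumptions chosen for the example, not the Omniverse material schema.

```python
# Illustrative per-surface material record covering the three sensing
# modalities; field names and values are assumptions, not the Omniverse
# material schema.

from dataclasses import dataclass


@dataclass
class SurfaceMaterial:
    name: str
    visible_reflectance: float  # response to light rays (camera)
    radar_reflectivity: float   # response to radio waves (radar)
    lidar_reflectivity: float   # response to lidar beams
    roughness: float            # specular vs. diffuse scattering


ASPHALT = SurfaceMaterial("asphalt", 0.10, 0.30, 0.15, 0.90)
LANE_PAINT = SurfaceMaterial("lane_paint", 0.60, 0.25, 0.70, 0.40)
```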
Recreating Sensor Data
With an accurate environment in place, high-fidelity development and testing next requires accurately generated sensor data. The sensor models include those typically found on an autonomous test vehicle, such as camera, lidar, radar and inertial measurement unit. DRIVE Sim provides a flexible sensor pipeline and APIs that allow configuring sensors to match real-world vehicle architectures.
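A rig description along these lines shows how a simulated sensor set might be declared to mirror a real test vehicle. The structure and fields are purely illustrative, not the DRIVE Sim API:

```python
# Hypothetical sensor-rig description mirroring a real test vehicle; the
# class, fields and values are illustrative, not the DRIVE Sim API.

from dataclasses import dataclass, field


@dataclass
class Sensor:
    kind: str          # "camera" | "lidar" | "radar" | "imu"
    name: str
    position_m: tuple  # (x, y, z) in the vehicle frame
    rotation_deg: tuple = (0.0, 0.0, 0.0)  # roll, pitch, yaw
    params: dict = field(default_factory=dict)


RIG = [
    Sensor("camera", "front_wide", (1.8, 0.0, 1.4),
           params={"hfov_deg": 120, "resolution": (1920, 1208)}),
    Sensor("lidar", "roof_spinning", (1.2, 0.0, 1.9),
           params={"channels": 64, "rate_hz": 10}),
    Sensor("radar", "front_long_range", (2.3, 0.0, 0.5),
           params={"max_range_m": 250}),
    Sensor("imu", "body_center", (0.0, 0.0, 0.6)),
]
```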
For camera data, the image pipeline starts by rendering an HDR image that is warped according to the lens properties of the camera used on the vehicle. Exposure control, black-level and white-balance correction, and color grading are then applied to match the sensor profile. Finally, the pixel data is converted to its native output format using a sensor-specific encoder.
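A toy numpy version of those stages is sketched below. The distortion model, tone parameters and 12-bit packing are simplified stand-ins for the real, sensor-specific pipeline.

```python
# Toy versions of the camera stages: lens warp, exposure/black level/white
# balance, and RAW encoding. All parameters are simplified assumptions.

import numpy as np


def lens_warp(hdr, k1=-0.1):
    """Barrel-distortion warp approximating the physical lens (simplified)."""
    h, w, _ = hdr.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    u, v = (xs - w / 2) / (w / 2), (ys - h / 2) / (h / 2)
    r2 = u * u + v * v
    xs_d = np.clip(u * (1 + k1 * r2) * (w / 2) + w / 2, 0, w - 1).astype(int)
    ys_d = np.clip(v * (1 + k1 * r2) * (h / 2) + h / 2, 0, h - 1).astype(int)
    return hdr[ys_d, xs_d]


def tone_and_grade(img, exposure=1.5, black_level=0.02, gains=(1.0, 0.95, 1.1)):
    img = np.maximum(img * exposure - black_level, 0.0)  # exposure + black level
    return np.clip(img * np.asarray(gains), 0.0, 1.0)    # per-channel grade


def encode_raw12(img):
    """Pack to 12-bit values, standing in for a sensor-specific RAW encoder."""
    return (img * 4095).astype(np.uint16)


hdr_frame = np.random.rand(1208, 1920, 3).astype(np.float32)  # stand-in render
raw = encode_raw12(tone_and_grade(lens_warp(hdr_frame)))
```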
In addition to camera models, DRIVE Sim provides physically based lidar and radar sensors using ray tracing. NVIDIA RTX GPUs enable DRIVE Sim to run highly computationally intensive radar and lidar models in real time.
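The idea behind a ray-traced lidar can be shown with a single-plane toy model: cast one ray per beam and intersect it with the ground. The real sensor models trace against the full 3D scene, which is why RTX hardware is needed to keep them real time.

```python
# Toy ray-cast lidar against a single ground plane (z = 0); beam layout and
# mounting height are assumed values for the sketch.

import numpy as np


def ground_plane_lidar(sensor_height_m=1.9, channels=16, rays_per_rev=360,
                       max_range_m=100.0):
    """Return an (N, 3) point cloud of ray hits on the ground plane."""
    elev = np.radians(np.linspace(-15, -1, channels))  # downward-looking beams
    azim = np.radians(np.linspace(0, 360, rays_per_rev, endpoint=False))
    el, az = np.meshgrid(elev, azim)
    # Unit ray directions in the sensor frame.
    dirs = np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1).reshape(-1, 3)
    # Ray/plane intersection: origin + t * dir reaches z = 0 at t = h / -dir_z.
    t = sensor_height_m / -dirs[:, 2]
    hits = dirs * t[:, None] + np.array([0.0, 0.0, sensor_height_m])
    return hits[t <= max_range_m]


cloud = ground_plane_lidar()
print(cloud.shape)  # rays that hit within max_range_m
```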
Modeling Vehicle Behavior
Finally, vehicle models are critical for accurate simulation. As control signals — steering, acceleration and braking — are sent to the in-vehicle computer, the car must respond just as it would in the physical world.
To do so, the simulation platform must recreate motion properly, down to details such as interaction with the road surface. Vehicle models in DRIVE Sim are handled through a plugin system, which supports both the included PhysX vehicle models and third-party vehicle dynamics models from NVIDIA DRIVE ecosystem partners such as Mechanical Simulation or IPG.
Vehicle dynamics also play a key role in accurate sensor data generation. As the vehicle operates, its position and pose change significantly, affecting each sensor's viewpoint. For example, the forward-facing cameras pitch downward when the car brakes. Modeling vehicle dynamics correctly is therefore essential to generating sensor data properly.
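A toy steady-state brake-dive model makes that coupling concrete. The mass, center-of-gravity height and pitch stiffness below are assumed values, and the linear spring model is far simpler than a real vehicle dynamics plugin:

```python
# Toy brake-dive model: how hard braking tilts the forward cameras downward.
# All constants are assumptions for illustration only.

import math

MASS_KG = 1800.0             # assumed vehicle mass
CG_HEIGHT_M = 0.55           # assumed center-of-gravity height
PITCH_STIFFNESS = 300_000.0  # assumed effective pitch stiffness, N*m/rad


def brake_dive_pitch_rad(decel_mps2):
    """Nose-down pitch from the braking moment, as a linear spring response."""
    moment = MASS_KG * decel_mps2 * CG_HEIGHT_M  # inertial moment, N*m
    return moment / PITCH_STIFFNESS


# Under 6 m/s^2 of braking this gives roughly a degree of nose-down camera
# pitch, shifting where every rendered pixel lands.
pitch = brake_dive_pitch_rad(6.0)
print(f"camera pitch offset: {math.degrees(pitch):.2f} deg nose-down")
```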
By accurately simulating each of these components — environment, sensors, vehicle dynamics — on a single, end-to-end platform, NVIDIA DRIVE Constellation and DRIVE Sim are critical pieces of a comprehensive development and testing pipeline. They enable NVIDIA and its partners to work toward safer and more efficient autonomous vehicles while physical fleets remain in the garage.