NVIDIA DRIVE Constellation is bringing autonomous vehicle test fleets to the cloud.
At the GPU Technology Conference, NVIDIA founder and CEO Jensen Huang announced that the NVIDIA DRIVE Constellation simulation platform is now available.
Toyota’s research and development arm, the Toyota Research Institute-Advanced Development (TRI-AD), will leverage DRIVE Constellation as part of an expanded partnership between the companies. TRI-AD will use the platform to accelerate the development and production timeline of autonomous vehicles, simulating the equivalent of billions of miles of driving in challenging scenarios.
The cloud-based solution enables millions of miles to be driven in virtual environments across a broad range of scenarios — from routine driving to rare or even dangerous situations — with greater efficiency, cost-effectiveness and safety than what is possible in the real world.
DRIVE Constellation is a data center solution, comprising two side-by-side servers. The first server — DRIVE Constellation Simulator — generates the sensor output from the virtual car. The second server — DRIVE Constellation Vehicle — contains the DRIVE AGX Pegasus AI car computer. The DRIVE AGX Pegasus receives the sensor data, makes decisions, and then sends vehicle control commands back to the simulator. This closed loop process enables bit-accurate, timing-accurate hardware-in-the-loop testing.
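The two-server loop described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration of the closed-loop concept (toy vehicle dynamics, made-up class names), not the DRIVE Constellation API, which the article does not show.

```python
# Minimal sketch of the closed-loop, hardware-in-the-loop cycle: one
# "server" renders sensor output, the other decides and sends control
# commands back. All names and dynamics are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SensorFrame:
    timestep: int
    speed_mps: float   # simulated vehicle speed seen by the sensors

@dataclass
class ControlCommand:
    steering: float    # -1.0 (full left) .. 1.0 (full right)
    throttle: float    # 0.0 .. 1.0
    brake: float       # 0.0 .. 1.0

class SimulatorServer:
    """Stands in for DRIVE Constellation Simulator: generates sensor output."""
    def __init__(self):
        self.speed = 10.0
    def render(self, t):
        return SensorFrame(timestep=t, speed_mps=self.speed)
    def apply(self, cmd):
        # Advance vehicle state from the control command (toy dynamics).
        self.speed += cmd.throttle - 2.0 * cmd.brake

class VehicleServer:
    """Stands in for the DRIVE AGX Pegasus side: decides from sensor data."""
    TARGET_SPEED = 15.0
    def decide(self, frame):
        accelerate = frame.speed_mps < self.TARGET_SPEED
        return ControlCommand(steering=0.0,
                              throttle=0.5 if accelerate else 0.0,
                              brake=0.0 if accelerate else 0.3)

sim, vehicle = SimulatorServer(), VehicleServer()
for t in range(100):               # each iteration = one simulated timestep
    frame = sim.render(t)          # simulator generates sensor output
    cmd = vehicle.decide(frame)    # AI car computer makes a decision
    sim.apply(cmd)                 # commands feed back into the simulator
```

Because every command the vehicle computer issues feeds back into the next rendered frame, the loop is closed: the toy controller above settles near its target speed just as a real stack would be exercised against the simulated world.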
This validation process runs in real time and can be performed at scale, with multiple units running a variety of tests in parallel. With this level of efficiency, DRIVE Constellation can achieve massive amounts of driving experience — 3,000 units can drive over 1 billion miles per year. More importantly, each mile driven in DRIVE Constellation contains events of interest — including rare or hazardous scenarios.
Cloud-Based, End-to-End Workflow
On the GTC keynote stage, Huang demonstrated how the DRIVE Constellation platform performs driving tests and delivers results in a seamless workflow.
DRIVE Constellation users can remotely access the platform anywhere via the cloud. Developers can submit a specific simulation scenario — for example, an autonomous vehicle reacting to another car cutting into its lane in heavy traffic during a foggy night on wet roads.
To evaluate the AV's performance, developers can set specific evaluators, such as time to collision, following distance and passenger comfort, then view the test as it runs and visualize the results.
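Two of the evaluators mentioned above, time to collision and following distance, are simple enough to illustrate directly. This is a hedged sketch: the input values and the safety threshold are assumptions for the example, not NVIDIA specifications, and a real test would stream these quantities from the simulation log.

```python
# Illustrative computation of a time-to-collision (TTC) evaluator.
# Inputs are hypothetical; threshold is an assumption, not a standard.

def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """Seconds until the ego vehicle reaches the lead vehicle,
    assuming both hold their current speeds."""
    closing = ego_speed_mps - lead_speed_mps
    if closing <= 0:
        return float("inf")   # not closing on the lead car: no projected collision
    return gap_m / closing

# Ego at 25 m/s, lead car 30 m ahead at 20 m/s:
ttc = time_to_collision(gap_m=30.0, ego_speed_mps=25.0, lead_speed_mps=20.0)

# A simple pass/fail check against an assumed safety threshold:
SAFE_TTC_S = 4.0
result = "PASS" if ttc >= SAFE_TTC_S else "FAIL"
print(ttc, result)   # 6.0 PASS
```

An evaluator like this runs over every frame of a test, so a single simulated drive yields a pass/fail verdict plus the metric traces developers can visualize afterward.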
The same test with specified variations that highlight extreme and dangerous conditions — like dense traffic, inclement weather and low visibility — can be run in parallel. This large-scale validation capability is like operating a massive virtual fleet of test vehicles, accomplishing months or years of testing in a fraction of the time.
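Running one scenario across a grid of variations, as described above, maps naturally onto a parallel job sweep. The sketch below uses Python's standard library to fan out the combinations; the scenario fields and their values are illustrative assumptions, and a real deployment would submit each variation to a DRIVE Constellation unit in the cloud rather than run a local function.

```python
# Sketch of sweeping one test across specified variations in parallel.
# Scenario parameters are hypothetical examples.

from concurrent.futures import ThreadPoolExecutor
from itertools import product

weather = ["clear", "rain", "fog"]
traffic = ["light", "dense"]
time_of_day = ["day", "night"]

def run_variation(params):
    w, t, tod = params
    # A real deployment would submit this job to a simulation unit in the
    # cloud; here it just returns a label for the variation that ran.
    return f"cut-in test | weather={w} | traffic={t} | time={tod}"

variations = list(product(weather, traffic, time_of_day))  # 12 combinations
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_variation, variations))
```

Each added dimension multiplies the number of variations, which is why running them in parallel across many units matters: the virtual fleet covers the whole grid in the time one physical car would cover a single run.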
DRIVE Constellation is an open platform, meaning it provides a programming interface that allows DRIVE Sim ecosystem partners to integrate their environment models, vehicle models, sensor models and traffic scenarios. By incorporating a variety of partners, the platform can generate comprehensive, diverse and complex testing environments.
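One common way an open platform exposes such a programming interface is through a plugin contract: partners implement a shared base class so their models slot into the simulation loop. The sketch below is purely illustrative of that pattern; the class and method names are assumptions, not NVIDIA's actual interface.

```python
# Hypothetical illustration of a plugin-style interface for partner
# models. Names are illustrative assumptions, not the DRIVE Sim API.

from abc import ABC, abstractmethod

class SensorModel(ABC):
    """Base class a partner sensor model would implement."""
    @abstractmethod
    def simulate(self, scene_state):
        """Return synthetic sensor output for the current scene."""

class SimpleLidarModel(SensorModel):
    def simulate(self, scene_state):
        # Toy output: one range reading per object in the scene.
        return [obj["distance_m"] for obj in scene_state["objects"]]

scene = {"objects": [{"distance_m": 12.5}, {"distance_m": 40.0}]}
readings = SimpleLidarModel().simulate(scene)
print(readings)   # [12.5, 40.0]
```

Because every partner model answers the same call, the platform can swap in a different sensor, vehicle or traffic model without changing the surrounding simulation loop.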
The DRIVE Constellation platform supports detailed traffic and scenario models developed by Cognata, an Israel-based simulation company. Cognata uses real-world data captured by traffic cameras around the world to create an accurate large-scale traffic model.
With Cognata’s traffic model, developers can define the number of other vehicles and road users, as well as their behavior, based on real-world traffic data.
Automotive simulation company IPG Automotive is also working with DRIVE Constellation to provide a high-fidelity vehicle model. It enables developers to accurately simulate how the vehicle reacts to various DRIVE Sim commands, such as steering, brake and throttle, and to various road conditions.
DRIVE Constellation also supports camera, radar and lidar sensor models, so developers can accurately simulate how the data each sensor collects is fed into the vehicle. ON Semiconductor, a semiconductor and sensor supplier, is working with DRIVE Constellation to provide a highly accurate camera model.
The open platform is also a key component for third-party and regulatory autonomous vehicle standards. Safety agencies such as TÜV SÜD are already using the platform to formulate their self-driving validation standards.
Covering Known Risks, Discovering Unknown Unknowns
For comprehensive validation, autonomous vehicles must be tested in conditions that are risky for humans as well as those that are difficult for driverless cars.
As humans, we can easily identify the situations where we’re accident-prone: an unprotected left turn, a heavy rainstorm, sudden highway stops. These scenarios and more can be defined and tested over and over again in simulation until the vehicle handles them seamlessly.
However, we don’t yet know the full range of conditions that are risky for autonomous vehicles. That’s why an open and scalable platform like DRIVE Constellation is necessary for validation.
With accurate traffic models, tests can play out unscripted. Observing how other road users behave, and how the test vehicle reacts to those behaviors, makes it possible to find the unknown unknowns: edge cases a self-driving car could encounter that may not be common for human drivers.
Once these situations are mined in simulation, they can be rigorously tested, just like those already identified by humans.
DRIVE Constellation users can have this wide range of testing capability right at their fingertips. Apply for access to DRIVE Constellation here.
GTC attendees can see DRIVE Constellation in action on the show floor as part of the NVIDIA AutoPilot demo in NVIDIA’s booth.