Testing, Testing: How VR Can Spin the Odometer Forward for Simulated Self-Driving Cars
March 13, 2018
If time equals money, then the development of self-driving cars through road testing is in trouble.
Researchers at RAND estimate that demonstrating a self-driving car is as reliable as a human driver would require some 11 billion test miles. To put that in perspective, in 2016 all the autonomous driving companies in California clocked just 700,000 miles combined.
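A quick back-of-the-envelope calculation, using only the two figures above, shows just how stark that gap is:

```python
# Figures from the article: 11 billion required test miles vs.
# ~700,000 autonomous miles logged in California in all of 2016.
required_miles = 11_000_000_000
miles_per_year = 700_000  # all companies in California, combined, 2016

years_needed = required_miles / miles_per_year
print(f"At the 2016 pace: roughly {years_needed:,.0f} years of road testing")
# → At the 2016 pace: roughly 15,714 years of road testing
```

In other words, at the 2016 pace it would take more than fifteen thousand years to log the required miles on real roads.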
Israeli startup Cognata aims to change that. Using a combination of AI, deep learning and computer vision, the company has created what might be thought of as a time machine for developing vehicles that will take us from point A to point B without us having to touch the steering wheel.
Driving in a Virtual World
Self-driving cars need to be able to make informed, rational decisions on how to act in ever-changing environments. To develop this understanding, the autonomous system needs to have experienced driving on real roads, with real driver behavior and real weather conditions.
This is why many companies and researchers are driving test cars on our roads to gather training data. Cognata's virtual environment enables companies to save time and money when testing autonomous vehicles, while avoiding the safety concerns of public-road testing.
Its technology is based on three main layers. The first is a "static" layer, where computer vision and deep learning algorithms use data from maps and satellite imagery to automatically generate 3D digital models of real cities. Cognata's patented TrueLife 3D Mesh technology simulates cities in detail, including buildings, roads, lane markings, traffic signs and even foliage.
On top of this true-to-life but simulated static layer, Cognata adds a "dynamic" layer of traffic models. This includes other vehicles of all shapes and sizes as well as pedestrians. Historic local weather conditions and changes in lighting are also added, allowing a huge number of variables to be tried and tested by the autonomous system.
The third "sensing" layer combines the static and dynamic layers to simulate the interaction of sensors such as radar and lidar with the simulated environment and the various stimuli within it. This provides a comprehensive autonomous driving simulation feedback loop for each drive.
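The three-layer loop described above can be sketched in code. To be clear, the class and method names below are hypothetical, not Cognata's actual API; this is just one minimal way to model a static scene, a dynamic traffic layer and a sensing layer feeding an AI driving stack:

```python
# Illustrative sketch of the three-layer simulation described above.
# All names here are hypothetical placeholders, not Cognata's API.

class StaticLayer:
    """3D city model: roads, buildings, lane markings, signs."""
    def scene_geometry(self):
        return {"roads": [...], "buildings": [...], "signs": [...]}

class DynamicLayer:
    """Traffic, pedestrians, weather and lighting at one timestep."""
    def step(self, t):
        return {"vehicles": [...], "pedestrians": [...], "weather": "rain"}

class SensorLayer:
    """Combines static + dynamic state into simulated sensor readings."""
    def render(self, static_scene, dynamic_state):
        return {"camera": ..., "lidar": ..., "radar": ...}

def simulation_step(static, dynamic, sensors, driving_stack, t):
    """One tick of the feedback loop: world -> sensors -> AI -> world."""
    readings = sensors.render(static.scene_geometry(), dynamic.step(t))
    controls = driving_stack.decide(readings)  # the system under test
    return controls                            # fed back into the next tick

# Example: one tick with a trivial placeholder driving stack.
class _NoOpStack:
    def decide(self, readings):
        return {"steer": 0.0, "throttle": 0.0}

controls = simulation_step(StaticLayer(), DynamicLayer(), SensorLayer(),
                           _NoOpStack(), t=0)
```

The key design point is the closed loop: the driving stack's decisions at each tick feed back into the dynamic layer, so the simulation can evaluate how the system behaves over an entire drive, not just on isolated frames.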
An Experience Like No Other
The upshot is that Cognata's technology lets AI driving systems encounter, and learn from, a huge variety of scenarios they might never get the chance to face in real-world testing. Want to focus on snowy conditions with icy roads? Cognata can provide endless miles of them.
The company's technology is powered by the NVIDIA DGX Station, an AI supercomputing workstation that packs the deep learning power of a data center but fits neatly under a desk. Cognata used DGX Station to train its deep neural networks 10x faster than before. It can also now run 10 virtual vehicles in its simulated environment at the same time.
Equipped with an array of cameras, lidar and radar sensors, the virtual vehicles can generate data for thousands of miles driven per hour. This means Cognata can train autonomous vehicles 1,000x faster than traditional methods and can reach the validation targets that the automotive industry needs to bring fully autonomous vehicles to market in just a few years.
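The article gives two concrete figures, 10 parallel virtual vehicles and "thousands of miles driven per hour" per vehicle, without exact numbers. Purely for illustration, under assumed values for those rates, the claimed scale of speedup works out like this:

```python
# Rough illustration of where a ~1,000x speedup could come from.
# The per-vehicle and real-world rates are assumptions for
# illustration only; the article gives no exact figures.
virtual_vehicles = 10        # parallel simulated cars (from the article)
sim_miles_per_hour = 3_000   # assumed per-vehicle simulated throughput
real_miles_per_hour = 30     # assumed average speed of one real test car

speedup = (virtual_vehicles * sim_miles_per_hour) / real_miles_per_hour
print(f"~{speedup:,.0f}x more miles per hour than one real test car")
# → ~1,000x more miles per hour than one real test car
```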
Cognata at GTC
Cognata is one of more than 2,200 startups in our Inception program. The virtual accelerator program provides startups with access to technology, expertise and marketing support.
Cognata won the title of "Israel's Hottest Startup" at GTC Israel last year. Register for our flagship GTC, in San Jose, March 26-29, to hear the company's CEO, Danny Atsmon, give a talk on "Deep Learning Autonomous Driving Simulation."