The future, as the saying goes, is already here. It’s just unevenly distributed. Maybe that’s because it comes loaded with a trunk full of computers.
So today we’re announcing pricing and availability for our NVIDIA DRIVE PX development platform.
We’re making our DRIVE PX development platform available in May to automakers, tier 1 automotive suppliers and research institutions, to get cars on the road toward driving themselves.
NVIDIA DRIVE PX is built for the growing number of automakers that have already put — or soon will put — self-driving cars on the roads.
The common denominator: all of these projects rely on NVIDIA GPU technology to help process and analyze, in real time, the data streaming in from sensors and cameras mounted all over the car.
One of our partners has even announced plans to send its self-piloted cars across the United States.
DRIVE PX’s twin NVIDIA Tegra X1 processors deliver 2.3 teraflops of performance. Yet each individual superchip is no bigger than a thumbnail.
That’s enough to weave together data streaming in from 12 camera inputs and enable a wide range of advanced driver assistance features to run simultaneously — including surround view, collision avoidance, pedestrian detection, mirror-less operation, cross-traffic monitoring and driver-state monitoring.
More Is More
Yet DRIVE PX is built to tap into a new technology called “deep learning” to give cars capabilities far beyond what you can stuff into any of today’s passenger vehicles.
That’s because today’s advanced driver assistance systems have evolved around the principle of classifying the objects a car’s sensors detect.
It works, but it’s not enough. Imagine hand-coding such a system to be ready for every possible eventuality. It’s just not possible.
Our DRIVE PX development platform is built to crack that problem. It includes a new deep neural network software development kit we call DIGITS, as well as video capture and video processing libraries.
DIGITS is a deep learning training system that can be run on systems powered by our GPUs — including our new DIGITS DevBox development platform — and that lets computers train themselves to understand objects in the world around them (see “DIGITS: Deep Learning Training System” on our Parallel Forall blog for more details).
Much as a human learns through experience, so does a deep neural network. Now we can do more than just train systems to recognize objects; we can also train behavior. (See “Here’s How Deep Learning Will Accelerate Self-Driving Cars” for an overview of how this works.)
The model trained on the DIGITS DevBox can then be loaded into a vehicle and run in real time on DRIVE PX.
It’s a system that can be trained, and retrained, with more data. Every time your self-driving car gets an over-the-air update, it can get smarter.
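The train-offline, deploy-in-vehicle, retrain-with-more-data loop described above can be sketched in a few lines of Python. This is purely illustrative: the nearest-centroid classifier stands in for a deep neural network, and the class names and feature vectors are invented, not part of DIGITS or DRIVE PX.

```python
# Hypothetical sketch of the workflow: train a model offline, load its
# frozen parameters into the car, run per-frame inference, then ship an
# improved model later (the over-the-air update idea). A toy
# nearest-centroid classifier stands in for the deep neural network.

def train(samples):
    """Offline training step: average the feature vectors per label."""
    sums, counts = {}, {}
    for label, features in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def infer(model, features):
    """In-vehicle step: classify one sensor frame with the frozen model."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], features))
    return min(model, key=dist)

# Offline training on a small labeled set (invented 2-D "features").
model = train([
    ("pedestrian", [0.9, 0.1]),
    ("pedestrian", [0.8, 0.2]),
    ("vehicle",    [0.1, 0.9]),
])
print(infer(model, [0.85, 0.15]))   # -> pedestrian

# "Over-the-air update": retrain with more data, then deploy the new model.
model = train([
    ("pedestrian", [0.9, 0.1]),
    ("vehicle",    [0.1, 0.9]),
    ("cyclist",    [0.5, 0.5]),
])
print(infer(model, [0.5, 0.5]))     # -> cyclist
```

The point of the sketch is the division of labor: `train` is the expensive step that runs on GPU workstations, while `infer` is the cheap, per-frame step that runs in the vehicle, and swapping in a retrained model changes behavior without touching the in-car code.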
The result: a self-driving system that extends well beyond the hardware found in any car, without all that extra electronic junk in the trunk.