Live: NVIDIA’s Las Vegas CES Press Event

Thanks for tuning in! To recap, we announced three things tonight: NVIDIA DRIVE PX 2, NVIDIA DriveWorks software and NVIDIA DRIVENet, our deep neural network. For more, read our wrap-up from today’s event, watch our CES news summary below, or review the entire press conference on our video replay further down this page.

7:10 PM – It’s a fast finish – under an hour and 10 minutes. Some folks linger to take pics. Others schmooze. Others head off to the next pre-CES event.

Volvo to Deploy DRIVE PX in Self-Driving SUVs

7:08 PM – NVIDIA DRIVE PX 2 will be liquid-cooled, though there’s a fan option, as well.

It’s a self-driving supercomputer that fits in the trunk of your car.

JHH announces that the first car company to deploy DRIVE PX 2 will be Volvo. In a public test next year, 100 autonomous cars will hit the road in the Swedish carmaker’s hometown of Gothenburg.

7:03 PM – JHH now compares DRIVE PX 2, built on a 16nm process, to TITAN X, built on a 28nm process. DRIVE PX 2 is roughly six times more powerful. DRIVE PX 2 has 12 CPU cores, capable of 8 teraflops of processing power and 24 trillion deep learning operations a second. It’s equivalent to 150 MacBook Pros in the trunk of your car.

JHH holds DRIVE PX 2, not much bigger than a tablet. It has two next-generation Tegra processors, and two next-generation Pascal-based discrete GPUs.

Cameras flash. Shutters click. The audience chuckles quietly.


A Big (Literally) Demo

7:01 PM – All of this works together on one software stack. One aspect of self-driving cars is letting the driver see what the car “sees.”

Justin comes on stage and reveals a massive car infotainment system.

There are two screens. One shows the car moving virtually down the road – the combined result of millions of points of data, analyzed and shaped into other cars and lanes placed in 3D space.

The other screen provides a map that depicts the car’s progress.

For contrast, Justin also shows, on a bigger screen, a video of the view out the front of the car and a fused image of the car in 3D from, say, 20 feet out. The images match, though one is what the eye sees; the other is what the car sees.


6:57 PM – The next step is to figure out where we are on a highly accurate map, so we can do path planning and figure out how to safely move forward. While GPS would be accurate within several meters, we need accuracy within several centimeters.

The car is shown estimating various ways forward, and how to pass other cars safely.

At its essence, the demo shows how a collection of sensors, combined with a high-def map, can show where the car is with reference to objects around it and what the best way forward is.
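
To make that concrete, here’s a toy sketch of map-based localization in Python – not NVIDIA’s algorithm, just an illustration of how matching live sensor readings against a high-definition map can refine a meters-accurate GPS fix toward centimeters. Every name in it is ours, invented for illustration:

```python
# Toy illustration of map-based localization (not NVIDIA's actual
# algorithm): refine a coarse GPS pose by scoring candidate poses
# against landmarks stored in a high-definition map.
import numpy as np

def localize(scan_points, map_points, gps_pose, search=0.5, step=0.05):
    """Grid-search small offsets around the GPS estimate and keep the
    pose whose transformed scan best matches the map."""
    best_pose, best_err = gps_pose, np.inf
    for dx in np.arange(-search, search + step, step):
        for dy in np.arange(-search, search + step, step):
            candidate = gps_pose + np.array([dx, dy])
            shifted = scan_points + candidate  # scan in map frame
            # Error = mean distance from each scan point to its
            # nearest map landmark.
            dists = np.linalg.norm(
                shifted[:, None, :] - map_points[None, :, :], axis=2)
            err = dists.min(axis=1).mean()
            if err < best_err:
                best_pose, best_err = candidate, err
    return best_pose  # refined to the grid's 5 cm resolution

# Example: GPS says we're at (10.0, 5.0), but the scan lines up
# better a few tens of centimeters away.
rng = np.random.default_rng(0)
map_points = rng.uniform(0, 20, size=(200, 2))
true_pose = np.array([10.3, 4.8])
scan = map_points[:50] - true_pose  # what the car's lidar "sees"
print(localize(scan, map_points, np.array([10.0, 5.0])))
```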

6:53 PM – JHH says that what Miguel is showing is structure from motion – that is, reconstructing the 3D environment from the series of 2D camera images.

By adding in lidar, a fused map emerges that shows how the car is moving through space and how it relates to objects around it.
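
For the curious, here’s what a minimal two-view version of structure from motion looks like in code, sketched with OpenCV. It’s a deliberately simplified stand-in for the multi-camera, lidar-fused pipeline on stage – standard feature-matching and pose-recovery calls, nothing NVIDIA-specific:

```python
# Minimal two-view structure-from-motion sketch using OpenCV.
import cv2
import numpy as np

def two_view_sfm(img1, img2, K):
    """Recover relative camera motion and sparse 3D points from
    two frames taken by a calibrated camera (intrinsics K)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # Match features between the two frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Estimate the essential matrix and decompose it into the
    # rotation R and translation t between the two viewpoints.
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)

    # Triangulate matched points into 3D (up to scale).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return R, t, (pts4d[:3] / pts4d[3]).T  # Nx3 point cloud
```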

The cameras, with the benefit of deep learning, can detect cars, buses, trucks and other moving objects.

It all gets projected onto the same map, which shows our car with other objects whizzing by.

Introducing NVIDIA DriveWorks

6:50 PM – JHH notes just how many things there are for a car to learn. Every bus isn’t the same – school buses and passenger buses play by different rules; some trucks are ambulances. Each requires a different behavior by the car.

The first part was a platform to train the network. But deploying it in the car creates the ultimate systems-software challenge. NVIDIA DriveWorks provides the pipeline for running perception, localization, planning and visualization.

JHH introduces Miguel, another engineer. He shows an image of a car with a series of standard auto-grade cameras, pointing front, rear, right and left, plus others that provide a narrow view in the same directions. There are lidars that scan four times a second. We chose this configuration of sensors to test our platform for self-driving cars, though our partners may use other configurations.

Miguel then shows all the sensors running at the same time, taking in six different views that are streaming into the computer, plus front and rear lidar measurements that depict the car as if from 50 feet up in the air, off at an angle.

The next challenge is fusing it all together, tracking 8,000 points for each of four cameras at a rate of 30 frames a second.
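
Here’s a sketch of what per-camera point tracking like that can look like, using OpenCV’s pyramidal Lucas-Kanade optical flow – an illustrative stand-in, not the actual DriveWorks implementation:

```python
# Per-camera point tracking sketch. The figures ("8,000 points per
# camera at 30 fps") come from the talk; this code is illustrative.
import cv2
import numpy as np

def track_points(prev_gray, curr_gray, max_points=8000):
    """Detect up to max_points corners in the previous frame and
    follow them into the current frame."""
    pts = cv2.goodFeaturesToTrack(
        prev_gray, maxCorners=max_points, qualityLevel=0.01, minDistance=5)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    return pts[ok].reshape(-1, 2), new_pts[ok].reshape(-1, 2)

# At 30 fps, each camera gets ~33 ms per frame; four cameras would run
# four trackers like this in parallel, then hand the tracks to fusion.
```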

Innovators Flocking to DRIVE PX

6:41 PM – Running on a TITAN X – one of the world’s highest-performance GPUs – our NVIDIA DRIVENet network can achieve 50 frames a second. We use this to develop our reference neural net.

There are amazing engineers using our technology now.

Audi, for example, in just a few hours trained a network that can recognize German road signs at a level that beats every hand-coded computer vision approach. It achieves better perception of road signs than a human could.

At Daimler, they’ve achieved perfect pixel recognition.

Also using it are ZMP, a Japanese company creating a self-driving taxi; BMW; Preferred Networks, a Toyota partner; and Ford, which has seen a 30x speedup in network training.

6:37 PM – Mike continues: because we built such a robust network, we can now do something harder than just identifying cars. It can now identify pedestrians, motorcyclists and more – five different classes in all. None of this was coded by hand – it was all done through deep learning. That means it’s dramatically faster and far cheaper than paying sophisticated engineers to code it by hand.

Now, Mike shows a Technicolor video that colors humans in green, the road in purple, cars in red. Looks like a children’s drawing come to life.
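
That coloring step is straightforward to sketch: map each pixel’s predicted class ID to a palette color and blend it over the frame. The class list below mirrors the demo’s five classes; the exact colors and ordering are our assumptions:

```python
# Turn a per-pixel class map into the colored overlay Mike showed:
# humans green, road purple, cars red. Palette values illustrative.
import numpy as np

PALETTE = np.array([
    [0,   0,   0],    # 0: background
    [0,   255, 0],    # 1: pedestrian -> green
    [128, 0,   128],  # 2: road       -> purple
    [255, 0,   0],    # 3: car        -> red
    [255, 255, 0],    # 4: motorcyclist (illustrative color)
], dtype=np.uint8)

def colorize(class_map, frame, alpha=0.5):
    """Blend a (H, W) array of class IDs over the original frame."""
    overlay = PALETTE[class_map]          # (H, W) -> (H, W, 3)
    return (alpha * overlay + (1 - alpha) * frame).astype(np.uint8)
```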

Audi wanted to see what we could do. They gave us a data set of cars proceeding through driving snow. Overnight, deep learning let us identify objects in the scene that the human eye couldn’t detect.

Hitting the Road with Deep Learning

6:30 PM – We’ve used this very platform to create NVIDIA DRIVENet, our very own deep neural network, with 9 inception layers, 3 convolutional layers, and 37 million neurons. It takes 40 billion operations to run information through the network once.
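
NVIDIA hasn’t published DRIVENet’s exact layout beyond those figures, but an “inception layer” is a GoogLeNet-style module that runs several filter sizes in parallel and concatenates the results. A minimal sketch in PyTorch:

```python
# GoogLeNet-style inception module -- a sketch of the kind of layer
# quoted on stage, not DRIVENet's actual (unpublished) architecture.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, ch1x1, ch3x3, ch5x5, ch_pool):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, ch1x1, kernel_size=1)
        self.branch3 = nn.Conv2d(in_ch, ch3x3, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, ch5x5, kernel_size=5, padding=2)
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, ch_pool, kernel_size=1),
        )

    def forward(self, x):
        # All branches preserve spatial size, so their outputs can be
        # concatenated along the channel dimension.
        return torch.cat([
            self.branch1(x), self.branch3(x),
            self.branch5(x), self.branch_pool(x),
        ], dim=1)

# Stacking nine such modules on a three-conv stem would match the
# layer counts quoted on stage.
x = torch.randn(1, 64, 56, 56)
print(InceptionModule(64, 32, 64, 16, 16)(x).shape)  # 128 channels out
```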


This is one of the most important parts of deep learning: training requires iterating over time. In just a few months, we’ve achieved nearly world-class levels. The top mark on the most widely followed industry benchmark, the KITTI dataset, was posted by Baidu. We’re close – and we’re running in real time.

JHH now introduces an NVIDIA researcher, Mike Houston, who heads up development of our neural network.

Mike shows a video that depicts a car driving and detecting other cars on a city road. It looks like a high-def video of your commute, but with yellow boxes quickly appearing over every car that comes up. We’ve trained the network on 120 million objects – a month of training work. Without GPU acceleration, this would have taken a couple of years.
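
The overlay itself is the easy part – draw each detection the network emits as a yellow box on the frame. The detection format below is our assumption for illustration:

```python
# Draw the network's detections as yellow boxes, as in the demo video.
import cv2

def draw_detections(frame, detections, min_score=0.5):
    """detections: list of (x, y, w, h, score) in pixel coordinates."""
    for x, y, w, h, score in detections:
        if score < min_score:
            continue  # skip low-confidence boxes
        cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)),
                      color=(0, 255, 255), thickness=2)  # BGR yellow
    return frame
```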

6:25 PM – JHH said that there’s now deep learning everywhere with NVIDIA – on a single platform.

So, we want to enable an end-to-end deep learning platform for self-driving cars. The training platform is NVIDIA DIGITS. The deployment platform is NVIDIA DRIVE PX 2.

This will enable a car to learn about the world and convey what it learns back to the cloud-based network, which then updates all cars. Every car company will own its own deep neural network. We want to create the platform on which these are deployed.
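
Sketched as pseudocode, the loop JHH describes looks something like this – every function name here is hypothetical, just to make the train/deploy/feedback cycle concrete:

```python
# Hypothetical sketch of the fleet-learning cycle described on stage.
def fleet_learning_loop(fleet, cloud_model):
    while True:
        # 1. Each car runs the current network on DRIVE PX 2 and logs
        #    hard cases it encounters on the road.
        new_examples = [car.drive_and_collect(cloud_model) for car in fleet]

        # 2. The logged data flows back to the data center, where the
        #    network is retrained (the role NVIDIA DIGITS plays).
        cloud_model = retrain(cloud_model, new_examples)

        # 3. The improved network is pushed back out, so every car
        #    benefits from what any one car learned.
        for car in fleet:
            car.update(cloud_model)
```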

So, to recap, NVIDIA has three strategies:

  1. Ensure NVIDIA GPUs accelerate all deep learning frameworks;
  2. Create platforms for deploying deep learning;
  3. Develop an end-to-end development system to train and deploy the network.

6:21 PM – There are now a great many deep learning platforms running on NVIDIA GPUs. They include Facebook’s Big Sur, Google’s TensorFlow, IBM’s Watson and Microsoft’s CNTK. Great research universities such as Berkeley and NYU, as well as startups, are doing work in this area as well.

Accelerating applications by 20-40 times will have massive implications.

Superhuman Capabilities

6:19 PM – The real breakthrough took place in 2012, when the accuracy rate in the ImageNet competition jumped dramatically through the application of deep learning and the abandonment of hand coding.

This past year, Microsoft and Google announced that their neural networks are now beating human capabilities. Baidu, the Chinese search giant, announced that it can now beat humans in voice recognition. Microsoft, working with Hong Kong University of Science and Technology, beat a college student in an IQ test.

So, the challenge is how to harness it for self-driving cars.


6:16 PM – The solution to this complexity has arrived recently – the rise of deep learning, that is, computer programs able to detect important features on their own, given exposure to many examples. Computer engineers don’t need to do the programming – the work gets done by training deep neural networks. Training a network can take trillions of operations, which once required many months. But using GPUs accelerated training by 30-40 times. What would have taken a year now takes a week. What took a month takes a day.
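The back-of-the-envelope math holds up: at a 30-40x speedup, a 365-day training run shrinks to somewhere between 365 ÷ 40 ≈ 9 days and 365 ÷ 30 ≈ 12 days – a year of work in roughly a week and a half.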

Deep learning will be as big a contribution as the Internet or mobile computing.


6:13 PM – It’s hard because driving is hard.

Consider the complexity of the world and all the permutations it could put forward. Consider the unpredictability of the world – jaywalkers, stray dogs, aggressive bikers. And consider the hazards in the world – potholes, stray cars and other bolts out of the blue.

The biggest problem is perception. This is at the core a very difficult problem.

6:10 PM – Self-driving can be understood, JHH says, as a loop.

The car needs to sense its surroundings – using sensors like radar, lidar, ultrasonics.

It also needs a previously generated map of great precision. It also needs the ability to locate itself in its environment. And it needs to be able to perceive what’s around the car. These, together, allow the car to plan a safe path forward.

This is a continuous loop that needs to run as fast as possible.

Turns out, all this is hard to do.
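
In code, the loop JHH describes has a simple shape – each stage below is a placeholder function we’ve named for illustration; the hard part is making every stage fast and reliable enough to run many times a second:

```python
# The sense -> localize -> perceive -> plan cycle, sketched as code.
# Every function here is a placeholder; the point is the structure.
def drive_loop(car, hd_map):
    while car.is_driving():
        readings = car.read_sensors()           # radar, lidar, cameras
        pose = localize(readings, hd_map)       # where are we, precisely?
        scene = perceive(readings)              # what's around us?
        path = plan_path(pose, scene, hd_map)   # safe way forward
        car.execute(path)                       # steer/brake/accelerate
```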

Revolutionary Potential

6:08 PM – Self-driving cars will revolutionize society. And NVIDIA’s vision is to enable them.

There are two visions of self-driving cars:

  • In one, the car is a virtual co-pilot that drives better than we do because it pays attention all the time. It sees all around itself and keeps you out of harm’s way. It’s as if there’s a virtual force field around it.
  • In the second, the car has no driver at all.

In both, the computational needs are far greater than anything that’s available. In both, if we can realize it, the contribution we can make to society is great.

6:06 PM – He now talks about why NVIDIA is doing this.

Self-driving cars, he said, will make a huge contribution to society. Humans are the most unreliable part of the car.

Self-driving cars will also accelerate mobility, leading to fewer cars on the road. Fewer cars will result in fewer parking lots and better urban environments.

Introducing NVIDIA DRIVE PX 2

6:04 PM – JHH recaps his main points from last year – noting that deep learning is what’s going to be needed to bring accuracy. But that’s going to take huge computational power.

Several thousand engineering years have gone into the NVIDIA DRIVE PX 2, the world’s first artificial-intelligence supercomputer for self-driving cars.

It’s got some chops. 12 CPU cores. NVIDIA’s next-gen Pascal-based GPUs. All producing 8 teraflops of power. That’s equivalent to 150 MacBook Pros. And it’s in a case the size of a school lunchbox.



Let’s Roll

6:02 PM – Okay, the so-called Voice Of God comes up, saying that the show will start shortly. Just about every seat’s filled.

And JHH enters stage left. Trademark black leather jacket with a silver zipper. Black pants and shoes.

Welcome, he says. We’re going to talk about self-driving cars – which draws applause.

6 PM – The music’s shifted gears. A sign that we’re getting close to the action starting. The crowd here’s pretty fresh, given that most are just in from the airport today. They’ll look different after a couple days of back-to-back press events and slogging down the long, hard hallways of the Las Vegas Convention Center.

5:52 PM – It’s tough being a reporter at these events. Many of the folks here came from one event and will be dashing to another auto shindig later this evening.

So, they’re looking to get their story and get out. But they’ll get some entertainment along the way, too.

5:50 PM – We’re at the Four Seasons Hotel. Not bad digs for a press event. Marble staircase. Liveried gents out front.

But inside the press hall, where 400 automakers, media, analysts and the like are expected, it’s black. Except for the post-Blade Runner-esque green motion graphics.

They show the NVIDIA logo, which turns into a semi-transparent cityscape, which in turn becomes a roadway for autonomously driven cars.

Consider it a straw in the wind of what’s to come.

5:45 PM – The world doesn’t descend on the massive CES 2016 show floor for another day and a half. But the real action starts today. Tonight, in fact.

As in recent years, the unofficial start – what Black Friday is to the holiday shopping season, what the first mint julep is to the Kentucky Derby – is the NVIDIA press conference.

NVIDIA CEO Jen-Hsun Huang – JHH for our purposes going forward – will soon take the stage and get us going.

Our roots are in gaming. But we’ve pivoted in recent years to three really big areas: virtual reality, deep learning and autonomous driving.

It’s the last two topics that we’re going to be focusing on this evening.

NVIDIA CEO Jen-Hsun Huang will get things rolling at CES 2016, in Las Vegas, Monday at 6 pm Pacific.

We won’t spoil any of the CES surprises. But stay tuned. We’ll be live blogging throughout the event. Hit refresh on your browser for updates.

