NVIDIA CEO Jen-Hsun Huang earlier this year delivered our NVIDIA DGX-1 AI supercomputer in a box to the University of California, Berkeley’s Berkeley AI Research Lab (BAIR).
BAIR’s over two dozen faculty and more than 100 graduate students are at the cutting edge of multi-modal deep learning, human-compatible AI and connecting AI with other scientific disciplines and the humanities.
“I’m delighted to deliver one of the first ones to you,” Jen-Hsun told a group of researchers at BAIR celebrating the arrival of their DGX-1.
AI’s Need for Speed
The team at BAAR is working on a dazzling array of AI problems across a wide range of fields — and they’re eager to experiment with as many different approaches as possible.
To do that, they need speed, explains Pieter Abbeel, an associate professor in UC Berkeley’s Department of Electrical Engineering and Computer Sciences.
“More compute power directly translates into more ideas being investigated, tried out, tuned to actually get them to work,” Abbeel says. “So right now, an experiment might typically take anywhere from a few hours to a couple of days, and so if we can get something like a 10-fold speed-up, that would narrow it down from that time to much shorter times — then we could right away try the next thing.”
That speed — and the ability to manage huge quantities of data — is the key to new breakthroughs in deep learning, which, in turn, is key to helping computers navigate environments that people do every day, such as public roads, explains John Canny, the Paul and Stacy Jacobs Distinguished Professor of Engineering in UC Berkeley’s Department of Electrical Engineering and Computer Sciences.
“In driving, drivers continue to improve over many years and decades because of the experience that they gain,” Canny says. “In machine learning, deep learning currently doesn’t really manage data sets of that size — so our interest is in collecting, processing and leveraging those very large data sets.”
Cars that could learn not just from their own experiences — but from those of millions of other vehicles — promise to dramatically improve safety, explains Trevor Darrell, a professor in UC Berkeley’s Department of Electrical Engineering and Computer Sciences.
“But that’s just the tip of the iceberg,” Darrell says. “There will also be revolutions in transportation and logistics, the process of just moving stuff around — if you’d like to get a small package from here to there. If we could have autonomous vehicles of all sorts of sizes moving all of our goods and services around, I can’t even speculate the degree of productivity that will give us.”
Giving machines the ability to learn from their experience is also the key to helping robots move from factory floors to less predictable environments, such as our homes, offices and hospitals, Abbeel says.
“It’s going to be important that these robots can adapt to new situations they’ve never seen before,” Abbeel says. “The big challenge here is how to build an artificial intelligence that allows these robots to understand situations they’ve never seen before and still do the right thing.”
While deep learning is already part of commonly used web services that help machines categorize information — such as speech and image recognition — Abbeel and his colleagues are exploring ways to help machines make decisions on their own.
Called “reinforcement learning,” this new approach promises to help machines understand and navigate complex environments, Abbeel explains.
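The core idea Abbeel describes — an agent improving its decisions from reward signals rather than labeled examples — can be made concrete with a toy sketch. The snippet below is a minimal tabular Q-learning loop on a hypothetical five-state corridor; the corridor task, hyperparameters and function name are invented for illustration, and this is in no way a depiction of BAIR’s actual research code.

```python
import random

def q_learning_corridor(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                        epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy corridor: the agent starts at state 0,
    actions are left (0) and right (1), and reaching the last state
    yields a reward of +1. Returns the learned Q-table."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection: mostly exploit, sometimes explore
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_corridor()
# After training, the greedy policy moves right in every non-terminal state.
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(4)]
```

No one hands the agent the answer; it discovers the “move right” policy purely from the delayed reward — the same trial-and-error principle that, scaled up with deep neural networks and far more compute, drives the work described here.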
Building machines that can not only learn from their environment but also judge the risks they’re taking is key to building smarter robots, explains Sergey Levine, an assistant professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley.
Flying robots, for example, not only have to adapt to quickly changing environments but also have to be aware of the risks they’re taking as they fly. “We use deep learning to build deep neural-network policies for flight that are aware of their own uncertainty so that they don’t take actions for which they don’t really understand the outcome,” says Levine.
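One common way to make a policy “aware of its own uncertainty” is to train an ensemble of models and treat their disagreement as an uncertainty estimate: when the models disagree too much about an action’s outcome, the robot declines to take it. The sketch below illustrates that general idea; the function names, the spread threshold and the “hover” fallback are all invented for this example, and it is not a representation of Levine’s actual method.

```python
def ensemble_predictions(models, state_action):
    """Each 'model' maps a (state, action) pair to a predicted outcome score;
    disagreement across the ensemble is a cheap proxy for uncertainty."""
    return [m(state_action) for m in models]

def cautious_action(models, state, candidates, max_spread=0.2, fallback="hover"):
    """Pick the candidate action with the best mean predicted outcome,
    but skip any action whose predictions disagree by more than
    max_spread, falling back to a safe default if nothing qualifies."""
    best, best_mean = fallback, float("-inf")
    for a in candidates:
        preds = ensemble_predictions(models, (state, a))
        mean = sum(preds) / len(preds)
        spread = max(preds) - min(preds)
        if spread > max_spread:
            continue  # outcome too uncertain: refuse this action
        if mean > best_mean:
            best, best_mean = a, mean
    return best
```

The key design choice is that an action the ensemble disagrees about is rejected even if its average predicted outcome looks attractive — caution takes priority over expected reward, which is exactly the behavior Levine describes for flight.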
Fueling the AI Revolution
New approaches such as this promise to help researchers build machines that are, ultimately, more helpful. The speed of DGX-1’s GPUs and integrated software — and the connections between them — will help BAIR explore these new ideas faster than ever.
“There’s somewhat of a linear connection between how much compute power one has and how many experiments one can run,” Darrell says. “And how many experiments one can run determines how much knowledge you can acquire or discover.”