Deep Learning Pioneers Boost Research at NVIDIA AI Labs Around the World
The world’s top researchers are pushing the boundaries of artificial intelligence at the NVIDIA AI Labs, known as NVAIL, located at 20 top universities around the globe.
University of Toronto researchers are developing affordable self-driving cars. At the Université de Montréal, researchers aim to use genetic data to predict and prevent disease. And at the University of California, Berkeley, they’re developing robots that can perform tasks they’ve never been trained on.
Our NVAIL program helps us keep these AI pioneers ahead of the curve with support for students, assistance from our researchers and engineers, and access to the industry’s most advanced GPU computing power.
Indeed, NVAIL researchers were among the first to receive our DGX-1 AI supercomputer, beginning nearly a year ago.
AI Research Around the World
In addition to Toronto, Montreal and UC Berkeley, the 20 institutions in the program include the Massachusetts Institute of Technology, Stanford University, the University of Tokyo, China’s Tsinghua University and Switzerland’s Dalle Molle Institute for Artificial Intelligence.
That geographic diversity is no accident. NVAIL partner institutions are located in regions that are the research hubs of deep learning. Their research ranges from advancing deep learning itself to improving breast cancer screening (New York University) and automated lip reading (Oxford University).
Read on for a look at a few of their most promising projects.
Self-Driving Cars for All
At the University of Toronto, Raquel Urtasun is developing affordable self-driving cars. The professor of computer science, who also heads Uber’s advanced technologies group in the city, believes autonomous cars should be available to everyone.
“So no matter what your income is, you can get the benefits of self-driving cars,” she said.
The technology in some autonomous cars — lidar, 3D sensors and hand-annotated maps — can cost more than $100,000, Urtasun said. Her team instead develops algorithms for perception, localization and mapping that rely on inexpensive sensors and satellite data.
In addition to computing power and technical support, the partnership with NVIDIA gives the University of Toronto something just as valuable, Urtasun said. “We get to have a say about the computing of the future, which will help our researchers.”
Forecasting Medical Futures
AI may one day help doctors predict a patient’s disease risk and choose treatments based on genetic data. But because genomic data is highly complex, researchers must develop more effective deep learning techniques, said Adriana Romero, a postdoctoral fellow at the Montreal Institute for Learning Algorithms at the Université de Montréal.
Modern genotyping methods target as many as 5 million variations in the human genome, some of which may point to the risk of developing a certain disease. Researchers use deep learning to try to determine how useful each variation is for predicting disease, how variations relate to each other, and how to weight the relative importance of these factors.
It’s a tall order because there are many more genetic variables to consider than there are patient samples available, Romero said. As a result, it’s hard to train a deep learning system that makes reliable predictions.
To find a better way, her research team — which includes Romero’s adviser, AI pioneer Yoshua Bengio — experimented with predicting genetic ancestry based on mutations. They came up with a deep learning architecture that makes predictions while using fewer parameters (the weights assigned to each variable). (See related paper, “Diet Networks: Thin Parameters for Fat Genomics.”)
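The parameter-reduction idea can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions, not the Diet Networks implementation: rather than learning one weight per (variant, hidden unit) pair, a small auxiliary network predicts each variant’s weight vector from a compact per-variant embedding, so the number of learned parameters scales with the embedding size instead of the variant count. All sizes, names and the single linear auxiliary map here are invented for clarity.

```python
import random

random.seed(0)

# Toy sizes; a real genotyping dataset has millions of variants.
N_FEATURES = 1000   # stands in for ~5 million genomic variants
EMBED_DIM = 8       # small per-feature embedding (hypothetical)
HIDDEN = 16         # hidden units of the main prediction network

# Per-feature embeddings (in practice these are derived from the data,
# not drawn at random as they are in this sketch).
embeddings = [[random.gauss(0, 1) for _ in range(EMBED_DIM)]
              for _ in range(N_FEATURES)]

# Auxiliary network, here reduced to a single linear map, whose output
# is a feature's weight vector in the main network.
aux = [[random.gauss(0, 0.1) for _ in range(HIDDEN)]
       for _ in range(EMBED_DIM)]

def predicted_weights():
    """Build the full (N_FEATURES x HIDDEN) weight matrix from the
    auxiliary map: W[f][h] = embeddings[f] . aux[:, h]."""
    return [[sum(e[d] * aux[d][h] for d in range(EMBED_DIM))
             for h in range(HIDDEN)]
            for e in embeddings]

W = predicted_weights()

# The full-size weight matrix is still produced for the forward pass,
# but the learned parameters live in the tiny auxiliary map.
naive_params = N_FEATURES * HIDDEN  # 16,000 weights learned directly
diet_params = EMBED_DIM * HIDDEN    # 128 weights in the auxiliary net
print(len(W), len(W[0]))            # 1000 16
print(naive_params, diet_params)    # 16000 128
```

With realistic numbers (millions of variants), the gap between the two parameter counts is what makes training feasible on limited patient data.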
“Our next step is to tackle disease prediction and work toward the possibility of having personalized medicine,” Romero said.
Robots That Learn on the Fly
Most robots today can do one thing well — delivering packages, vacuuming the floor or assisting with surgery, for example. But when they’re faced with a new task, they’re stumped.
“Right now you see robots in factories or other settings where they repeat the same thing over and over again,” said Chelsea Finn, a doctoral student working in the University of California, Berkeley’s AI lab, which was one of the first to receive an NVIDIA DGX-1. “That won’t work in places like disaster zones, where the robot doesn’t know where everything is.”
Finn wants robots to understand situations they’ve never seen before — without any help from engineers. She collaborates with her advisers, Pieter Abbeel and Sergey Levine, to create robots that are able to adapt to new environments.
To do this, Finn uses GPU-accelerated deep learning to train the robot to understand the results of its actions and then predict what it needs to do to accomplish the next task. (See related paper, “Deep Visual Foresight for Planning Robot Motion.”)
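The planning loop described above can be sketched as follows. This is a toy illustration, not the visual foresight system: the real work trains a deep network to predict camera images conditioned on candidate actions, whereas here the learned model is replaced by a hand-written stand-in over 2D positions, and all names are hypothetical.

```python
# Sketch of planning by prediction: a model forecasts the outcome of
# each candidate action, and the robot picks the action whose predicted
# outcome lands closest to the goal.

def predict_outcome(state, action):
    """Stand-in for a learned action-conditioned prediction model:
    applying `action` shifts the object's (x, y) position."""
    return (state[0] + action[0], state[1] + action[1])

def plan(state, goal, candidate_actions):
    """Score each candidate by the squared distance between its
    predicted outcome and the goal; return the best action."""
    def dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(candidate_actions,
               key=lambda a: dist(predict_outcome(state, a), goal))

state, goal = (0, 0), (1, 2)
actions = [(1, 0), (0, 1), (1, 1), (1, 2), (-1, 0)]
best = plan(state, goal, actions)
print(best)  # (1, 2): the action whose predicted outcome reaches the goal
```

In the real system the prediction model is a video-prediction network and the candidate actions are re-scored continuously, which is why the GPU speed Finn mentions matters: planning only works if predictions arrive fast enough to act on.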
“We need to process data quickly so that the robot can learn on the fly,” she said. “Without the speed of GPUs, a lot of my research wouldn’t be possible.”