NVAIL Partners Showcase Groundbreaking Work at World’s Top Machine Learning Conference

UC Berkeley, IDSIA, University of Tokyo present advances in AI research, powered by DGX-1 supercomputer, at ICML.
by Kristin Bryson

Whether it’s helping neural networks learn how to learn or getting them to work with pseudo-labeled data, many of the biggest advances in deep learning and artificial intelligence begin in research labs.

Researchers at the University of California at Berkeley, Switzerland’s IDSIA and the University of Tokyo have used the DGX-1 to take their deep learning to the next level.

Attendees of this week’s International Conference on Machine Learning in Sydney, Australia, can hear from these three NVAIL partners. They’re all presenting papers on their research at ICML.

Teaching AI How to Learn

Imagine if robots and other AI-infused devices could learn more like humans. That’s what Assistant Professor Sergey Levine and his students at NVAIL partner UC Berkeley want to make a reality.

By teaching deep neural networks how to learn, Levine’s team wants to help intelligent agents learn faster with far less training.

“Look at how people do it,” said Levine, an assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences. “We never learn things entirely from scratch. We draw on our past experience to help us learn new skills quickly. So we’re trying to get our learning algorithms to do the same.”

With current AI methods, robots have to experience things over and over again to learn how to best respond to stimuli. Levine’s thinking is that by enabling robots to learn without all that repetition, they’ll not only be more adaptive, they’ll also be able to learn much more.

“If a robot can learn one skill from a thousand times less experience, it can learn a thousand skills in the same time it would have otherwise taken it to learn one,” Levine said. “We’re unlikely to ever build machines that never make mistakes, but we can try to build machines that learn from their mistakes quickly and don’t have to make them more than a few times.”

Levine and his team have been using an NVIDIA DGX-1 system to train their algorithms to coordinate movement and visual perception. Chelsea Finn, a Ph.D. student advised by Levine and Professor Pieter Abbeel at UC Berkeley, is presenting a research paper on this work at ICML. Levine and Finn are also giving a tutorial on “Deep Reinforcement Learning, Decision Making, and Control.”

NVIDIA DGX-1 is proving to be an important research tool for many of the world’s leading AI researchers.

The Path to Deeper Learning

The powerful combination of recurrent neural networks and long short-term memory (LSTM) has been a boon to those working on handwriting and speech recognition.

Unlike feedforward networks, which push data through in a single pass, RNNs can tap internal memory to process sequential data (such as different pronunciations or variations in handwriting), combining previous decisions with current stimuli to learn on the fly.
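The difference comes down to a carried hidden state. Here’s a minimal sketch of a single recurrent step in NumPy; the sizes and weight names are illustrative, not from any of the papers discussed here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions for illustration.
input_size, hidden_size = 4, 8
W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input weights
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # recurrent weights
b = np.zeros(hidden_size)

def rnn_step(h_prev, x):
    """One recurrent step: the new state mixes the current input
    with the network's memory of everything seen so far."""
    return np.tanh(W_x @ x + W_h @ h_prev + b)

# Process a sequence of 5 inputs, carrying the state forward each step.
h = np.zeros(hidden_size)
for x in rng.normal(size=(5, input_size)):
    h = rnn_step(h, x)
```

A feedforward network would treat those five inputs independently; here, each step’s output depends on the whole history through `h`.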

That said, RNNs have had a weakness: they become much harder to train as the recurrent transition between time steps gets deeper, slowing down the deep learning process. But researchers at the Swiss AI lab and NVAIL partner IDSIA think they’ve found an answer: recurrent highway networks.

“Until now, it was extremely difficult to train recurrent networks with even two layers in the sequential transition,” said Rupesh Srivastava, an AI researcher at IDSIA and one of the co-authors of a research paper on the topic being presented at ICML. “Now, with recurrent highway networks, we can train recurrent networks with tens of layers in the recurrent transition.”

Srivastava said this advance allows for more efficient models for attacking sequential processing tasks, and enables the use of more complicated models.
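The core idea is to stack several “micro-layers” inside each time step and let a highway-style gate choose, per unit, between transforming the state and carrying it through unchanged. Below is a rough sketch of the coupled-gate variant (carry gate = 1 − transform gate); the class layout, weight shapes and names are my own simplification, not the authors’ code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RHNCell:
    """Sketch of one recurrent-highway time step with a deep
    (multi-micro-layer) recurrent transition."""

    def __init__(self, input_size, hidden_size, depth, seed=0):
        rng = np.random.default_rng(seed)
        self.depth = depth
        # The input feeds only the first micro-layer.
        self.W_h = rng.normal(scale=0.1, size=(hidden_size, input_size))
        self.W_t = rng.normal(scale=0.1, size=(hidden_size, input_size))
        # Recurrent weights: one transform/gate pair per micro-layer.
        self.R_h = rng.normal(scale=0.1, size=(depth, hidden_size, hidden_size))
        self.R_t = rng.normal(scale=0.1, size=(depth, hidden_size, hidden_size))
        self.b_h = np.zeros((depth, hidden_size))
        # A negative gate bias favors carrying state early in training.
        self.b_t = np.full((depth, hidden_size), -2.0)

    def step(self, s, x):
        for l in range(self.depth):
            x_h = self.W_h @ x if l == 0 else 0.0
            x_t = self.W_t @ x if l == 0 else 0.0
            h = np.tanh(x_h + self.R_h[l] @ s + self.b_h[l])
            t = sigmoid(x_t + self.R_t[l] @ s + self.b_t[l])
            s = h * t + s * (1.0 - t)   # highway: transform vs. carry
        return s

# Carry a state through a short sequence with a depth-5 transition.
cell = RHNCell(input_size=4, hidden_size=8, depth=5)
s = np.zeros(8)
for x in np.random.default_rng(1).normal(size=(3, 4)):
    s = cell.step(s, x)
```

Because each micro-layer can default to carrying the state through unchanged, gradients have a short path across the deep transition, which is what makes transitions tens of layers deep trainable.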

“These early experiments indicate that we may be able to tackle much more complex tasks without requiring the training of gigantic models in the future,” he said.

Srivastava’s team has been using NVIDIA Tesla K40, K80, TITAN X and GeForce GTX 1080 GPUs to speed up training, along with CUDA and cuDNN for deep learning. But the arrival of the DGX-1 AI supercomputer, he said, “significantly accelerated the experimental cycle, allowing all lab projects to progress faster.”

He also said he’s excited by the prospect of using the DGX-1 to speed up the parallel training of recurrent network models. Eventually, he hopes that recurrent highway networks will lead to better reinforcement learning.

At the very least, the research will help to make deep learning models, uh, deeper.

“It is an important development,” said Srivastava, “because the ability to utilize the efficiency brought by deep models in different ways is a cornerstone of deep learning.”

Deep Learning Trickery

Deep learning isn’t always a tidy process. When training a model to perform, say, large-scale speech recognition, it’s important that it be able to account for variations such as background noise or accents.

This challenge, known as domain adaptation, is where much of the intelligence in artificial intelligence comes from. It’s easy to be intelligent in the simpler setting of a training lab. It’s another thing to be intelligent in the unsupervised and unpredictable real world.

Researchers at the University of Tokyo believe they’ve developed a method for getting around many of the challenges of unsupervised domain adaptation. They’ve tapped the power of the DGX-1 to assign “pseudo-labels” to unlabeled data in target domains.

This enables deep learning models to apply what they’ve learned about a source domain — say, the ability to categorize book reviews — to a different target domain, such as movie reviews, without having to train a new model.

To do this, a team at the University of Tokyo proposed a concept they call “asymmetric tri-training,” which assigns different roles to three classifiers built on three separate neural networks. Two networks assign labels to unlabeled target samples, and the third is trained on those pseudo-labeled samples. So far, the results have been encouraging.
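The labeling step can be sketched as a simple agreement-and-confidence filter: a target sample earns a pseudo-label only when the two labeling networks agree and are confident enough. The threshold value and function names here are illustrative, not the authors’ code:

```python
import numpy as np

def pseudo_label(probs1, probs2, threshold=0.9):
    """Select target samples where the two labeling networks agree on
    the class and at least one is confident. Those samples, with their
    agreed labels, become training data for the third network.

    probs1, probs2: (n_samples, n_classes) softmax outputs.
    Returns a boolean selection mask and the pseudo-labels for it.
    """
    pred1, pred2 = probs1.argmax(axis=1), probs2.argmax(axis=1)
    conf = np.maximum(probs1.max(axis=1), probs2.max(axis=1))
    mask = (pred1 == pred2) & (conf > threshold)
    return mask, pred1[mask]

# Toy outputs from the two labeling networks on three target samples.
probs_net1 = np.array([[0.95, 0.05], [0.40, 0.60], [0.20, 0.80]])
probs_net2 = np.array([[0.90, 0.10], [0.70, 0.30], [0.10, 0.90]])
mask, labels = pseudo_label(probs_net1, probs_net2)
```

In this toy case only the first sample qualifies: the networks disagree on the second, and neither is confident enough on the third. Iterating this filter grows a pseudo-labeled target set without ever needing true target labels.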

“Transferring knowledge from a simple or synthesized domain to a diverse or realistic domain is a practical and challenging problem,” says Tatsuya Harada, a professor in the University of Tokyo Graduate School of Information Science and Technology. “We believe that our method is a significant step toward realizing adaptation from the simple to the diverse domain.”

Harada is one of the authors of a research paper on this work that’s being presented this week at ICML. It’s a complicated undertaking, and Harada acknowledges it will likely take parallel efforts to achieve its potential. He’s hopeful that sharing his team’s research will speed up that process.

“The research on fusing deep learning and pseudo-labels is ongoing,” he said. “We expect our research to stimulate more such research.”

ICML continues in Sydney through Friday.