ImageNet Competitors, AI Researchers Talk Up Benefits of GPUs for Deep Learning

by Stephen Jones

When the number of users of your product flips from zero to nearly 100%, you don’t need a Ph.D. to realize a trend has formed.

And that’s exactly what’s happened at the “World Cup” for computer vision and machine learning – the ImageNet Large Scale Visual Recognition Challenge.

At last week’s event, over 95% of the teams tapped GPUs for their ground-breaking submissions. This compares with just 10% two years ago (and 0% three years ago), underscoring how accelerated computing has become fundamental for this fast-growing field.

As the chart below shows, the use of GPUs has also helped slash ImageNet contestants’ error rates – the instances when their deep learning algorithms failed to accurately identify a given object:

[Chart: ImageNet contestants' error rates, by year]

At the European Conference on Computer Vision (ECCV), held last week in Zurich, teams from Adobe, U.C. Berkeley, the National University of Singapore, Oxford University and many others from around the world shared details on how GPUs helped them in the ImageNet competition.

Here’s Shuicheng Yan, associate professor from the National University of Singapore, talking about how GPUs helped his team secure one of four winning spots:

Video: U.C. Berkeley Integrates New NVIDIA Deep Learning Software (2:19)

Also at ECCV, we took the wraps off a new software offering that makes it easier for deep learning pros to advance their work.

cuDNN is a comprehensive CUDA-based programming library for deep neural networks (DNNs) that helps developers quickly and easily harness the power of GPU acceleration.

U.C. Berkeley researchers have integrated cuDNN into Caffe, one of the world’s most popular and actively developed deep learning frameworks – one that many of the ImageNet contestants used for their work.
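As a rough illustration of what that integration looks like from a Caffe user's perspective (exact syntax varies by Caffe version; treat this as a sketch rather than canonical usage), a layer in a model's prototxt can request the cuDNN engine, and the solver can be pointed at the GPU:

```
# model.prototxt -- ask Caffe to use its cuDNN-backed convolution
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
    engine: CUDNN   # use the cuDNN implementation instead of the default
  }
}

# solver.prototxt -- run training on the GPU
solver_mode: GPU
```

The appeal of this design is that switching between CPU, plain CUDA, and cuDNN code paths is a configuration change, not a code change, so existing networks can pick up the acceleration without being rewritten.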

Here, Evan Shelhamer, a Ph.D. student researcher at U.C. Berkeley, talks about how Caffe works and what GPU acceleration with cuDNN brings to the table:

Deep learning is one of the fastest growing areas in the machine learning and data analytics fields.

Just one example: within days of our releasing cuDNN, the Facebook AI Research team led by deep learning pioneer Yann LeCun integrated the library into its Torch7 framework.

GPU acceleration can fundamentally speed up both deep learning training and classification, so expect to hear more from us on the topic in the future.


Related reading:

More on Deep Learning on the NVIDIA Developer Zone Blog

Developer Zone Deep Learning Portal