NVIDIA GPUs Power Deep-Learning Winners in World Cup of Image Recognition

The brightest minds in the field of deep learning will converge next week in Zurich at the European Conference on Computer Vision.

And they’ll be buzzing about the results from the recent ImageNet Large Scale Visual Recognition Challenge.

Known as the World Cup for computer vision and machine learning, the challenge pits teams from academia and industry against one another to tackle fiendishly difficult deep learning-based object recognition tasks.

The winners are well known. What’s news: 90% of the ImageNet teams used GPUs. Now it’s time for some of those teams to talk about how they used their not-so-secret weapons.

Teams from Adobe, the National University of Singapore and Oxford University will share how GPU accelerators helped them to break new ground at the contest by improving the object recognition accuracy of their deep learning algorithms.

It’s just one example of how GPUs are taking the deep learning world by storm.

Adoption of GPUs for Deep Learning Explodes

Around the world, deep learning researchers and enterprises are flocking to GPU acceleration. They’re tackling tasks ranging from face and speech recognition to supercharged web search, image auto-tagging and personalized product recommendations.

Pioneers in this area include Adobe, Baidu, Microsoft, Nuance, NYU, Oxford University, Stanford University, U.C. Berkeley and Yandex. They’re not alone. The reason: GPU accelerators are perfectly suited for training deep learning workloads.

NVIDIA GPUs are helping scientists train computers how to recognize a wide array of objects.

Deep learning is one of the fastest-growing segments of the machine learning field. It involves training computers to teach themselves by sifting through massive amounts of data: learning to identify a dog, for example, by analyzing lots of images of dogs, ferrets, jackals, raccoons and other animals.

But deep learning algorithms also demand massive amounts of computing power to process mountains of data. Meeting that demand can require thousands of CPU-based servers, which is expensive and impractical.

Not so with GPUs. These high-performance parallel processors crunch through a broad variety of visual computing problems quickly and efficiently.

With GPUs, deep learning training processes run much faster on fewer servers. This helps users to rapidly develop and optimize new training models and, ultimately, to build new, highly accurate deep learning applications.

New NVIDIA Software Makes Deep Learning Acceleration Quicker and Easier

To make it easier for deep learning pioneers to advance their work, NVIDIA and the University of California at Berkeley are putting the power of GPU acceleration in the hands of many more individuals around the world.

NVIDIA has developed cuDNN, a robust CUDA-based library for deep neural networks that helps developers quickly and easily harness the power of GPU acceleration (for more on cuDNN, see “Accelerate Machine Learning with the cuDNN Deep Neural Network Library” on Parallel Forall).
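
To give a flavor of how developers call into the library, the snippet below is a minimal sketch, not official sample code, assuming a recent cuDNN release. It describes a batch of class scores with a tensor descriptor and runs a GPU-accelerated softmax over them; the tensor sizes are arbitrary and error checking is left out for brevity.

    // Minimal cuDNN sketch (not NVIDIA sample code): softmax over a batch of
    // raw class scores on the GPU. Assumes a recent cuDNN release.
    #include <cuda_runtime.h>
    #include <cudnn.h>

    int main() {
        const int n = 32, c = 1000, h = 1, w = 1;   // batch of 32, 1000 classes
        float *scores = nullptr, *probs = nullptr;
        cudaMalloc(reinterpret_cast<void**>(&scores), n * c * sizeof(float));
        cudaMalloc(reinterpret_cast<void**>(&probs),  n * c * sizeof(float));
        // ... copy the network's raw outputs into `scores` ...

        cudnnHandle_t handle;
        cudnnCreate(&handle);

        // Describe the data layout once; cuDNN routines then operate on plain
        // device pointers tagged with this descriptor.
        cudnnTensorDescriptor_t desc;
        cudnnCreateTensorDescriptor(&desc);
        cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                   n, c, h, w);

        const float alpha = 1.0f, beta = 0.0f;      // y = alpha*softmax(x) + beta*y
        cudnnSoftmaxForward(handle, CUDNN_SOFTMAX_ACCURATE,
                            CUDNN_SOFTMAX_MODE_CHANNEL,
                            &alpha, desc, scores,
                            &beta,  desc, probs);

        cudnnDestroyTensorDescriptor(desc);
        cudnnDestroy(handle);
        cudaFree(scores);
        cudaFree(probs);
        return 0;
    }

The same handle-plus-descriptor pattern extends to cuDNN’s convolution, pooling and activation routines, which do the heavy lifting during network training.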

U.C. Berkeley researchers have integrated cuDNN into Caffe, one of the world’s most popular and actively developed deep learning frameworks, and one that many of the ImageNet contestants used for their work.

With cuDNN and GPU acceleration, Caffe users can now rapidly iterate on new training models to develop more powerful, more accurate algorithms.
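
Because Caffe hides the math behind its layer implementations, tapping the GPU takes very little user code. The sketch below shows the idea through Caffe’s C++ interface; it assumes a recent cuDNN-enabled Caffe build, and the prototxt and weight file names are placeholders.

    // A minimal C++ sketch of GPU inference with Caffe (file names are
    // placeholders; a recent cuDNN-enabled Caffe build is assumed).
    #include <caffe/caffe.hpp>

    int main() {
        caffe::Caffe::set_mode(caffe::Caffe::GPU);   // run layers on the GPU
        caffe::Caffe::SetDevice(0);                  // first CUDA device

        // Load the network definition and its trained weights.
        caffe::Net<float> net("deploy.prototxt", caffe::TEST);
        net.CopyTrainedLayersFrom("model.caffemodel");

        // Fill net.input_blobs()[0] with preprocessed image data here, then:
        net.Forward();                               // one GPU forward pass
        return 0;
    }

Training follows the same pattern: set GPU mode before running a solver, or pass a device id to the stand-alone caffe tool via its -gpu flag.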

More on cuDNN-enabled Caffe

Image Recognition blog posts at NVIDIA Developer Zone.
