AI Will Blaze a Trail to Exascale Computing, NVIDIA CEO Says at SC16

by Brian Caulfield

The AI boom will create a path to exascale computing, one of the supercomputing world’s loftiest goals, NVIDIA CEO Jen-Hsun Huang told a packed house Monday at the SC16 annual supercomputing show in Salt Lake City, Utah.

“Several years ago deep learning came along, like Thor’s hammer falling from the sky, and gave us an incredibly powerful tool to solve some of the most difficult problems in the world,” Jen-Hsun said. “Every industry has awoken to AI.”

2016 has been a great year for deep learning and GPU computing, he explained. There are now more than 400 GPU-optimized high-performance computing applications, and all of the top 10 applications are now GPU optimized. The number of deep learning developers has tripled in two years to 400,000. And the launch of our new Pascal GPU architecture means all these applications will run more quickly and efficiently than ever.

NVIDIA CEO Jen-Hsun Huang speaking to a crowd of hundreds at Supercomputing 16.

Speaking to some 300 developers, scientists and tech execs who stood shoulder to shoulder in the NVIDIA booth in the opening hours of the show, Jen-Hsun made a series of news announcements that demonstrate NVIDIA’s leadership in AI:

  • Microsoft and NVIDIA announced the Microsoft Cognitive Toolkit, the first purpose-built enterprise AI framework optimized to run on NVIDIA Tesla GPUs in Microsoft Azure or on-premises. “We’re partnering with the company with the largest reach with companies around the world, and we now have the ability to bring AI to companies all around the world,” Jen-Hsun said.
  • NVIDIA announced that it’s teaming up with the National Cancer Institute, the U.S. Department of Energy and several national laboratories to help build an AI framework dubbed CANDLE — for Cancer Distributed Learning Environment — to support the U.S. government’s Cancer Moonshot research effort. “It’s going to make it possible for scientists and researchers to use deep learning, as well as computational sciences, to address some of the urgent challenges of cancer,” Jen-Hsun said.
  • Jen-Hsun also spoke about the new NVIDIA DGX SATURNV supercomputer, and how it’s speeding our work on CANDLE. Unveiled earlier today, it’s ranked the world’s most efficient — and 28th fastest overall — on the Top500 list of supercomputers. “If you’re going to shoot for the Moon, you’re going to need a big rocket, so we decided to build one of the world’s greatest supercomputers,” Jen-Hsun said.

The story behind these stories: the parallel processing power of GPUs, which gave researchers the ability to design deep neural networks that loosely mimic the structure of the human mind. These deep neural networks give machines the ability to perceive — and understand — the world in ways that match or exceed our own (see “Accelerating AI with GPUs: A New Computing Model”).

At the same time, GPUs — driven forward by the vast economies of scale afforded by the market for PC gaming — give supercomputer scientists the ability to design machines that wring more power out of each unit of energy, key to creating machines with the ability to reach ever faster speeds on a realistic power budget.

Now, AI will give every business a reason to join the race toward ever more powerful machines, whether those machines live in the cloud or in their own data centers. “Deep learning is both an opportunity, as well as a challenge, that requires supercomputing,” Jen-Hsun said (see “The Intelligent Industrial Revolution”).

The next generation of supercomputers, Jen-Hsun said, will need to do two kinds of work. The first: performing 64-bit floating point math to tackle computational science challenges, such as predicting physical and biological behavior.

At the same time, they’ll need to be able to tackle tasks where the information is incomplete — where there are no first principles to work with, Jen-Hsun explained. These are classic deep learning problems — such as beating the world’s greatest human Go masters — where 16-bit floating point math is enough.
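The trade-off behind those two precisions can be seen directly: 16-bit floats carry roughly three decimal digits of precision, so small increments that 64-bit math preserves simply vanish. A minimal sketch in pure Python, using the standard library’s half-precision `struct` format to mimic FP16 storage (this is an illustrative example, not NVIDIA’s benchmark code):

```python
import struct

def to_fp16(x):
    # Round a Python float (64-bit) to IEEE half precision and back,
    # mimicking the storage precision of 16-bit floating point math.
    return struct.unpack('<e', struct.pack('<e', x))[0]

fp64_sum = 1.0 + 0.0001                            # 64-bit keeps the increment
fp16_sum = to_fp16(to_fp16(1.0) + to_fp16(0.0001)) # 16-bit rounds it away

print(fp64_sum)  # 1.0001
print(fp16_sum)  # 1.0 -- the increment is below FP16's spacing near 1.0
```

For computational science, that lost increment could be a meaningful physical quantity; for deep learning, where the inputs are noisy and the network is trained to tolerate imprecision, the halved storage and doubled throughput are usually the better bargain.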

And at that precision, the fastest computers can already work at exascale speeds — and produce remarkable results.

Come check it out for yourself. We’ll be showing these and other applications in our booth at SC16 through Thursday.