Shapely Bell Curve: NVIDIA Volta Tensor Core GPUs Power 5 of 6 Gordon Bell Finalists

by Geetika Gupta
Oak Ridge Summit supercomputer

Movies have the Oscars. Television has the Emmys. Broadway has the Tonys.

But for those known for harnessing computing power more than star power, what counts is the Gordon Bell Prize.

Established more than three decades ago by the Association for Computing Machinery, the award recognizes outstanding achievement in the field of computing for applications in science, engineering and large-scale data science.

This year, five of the six prize finalists, who were just announced by ACM, did their work on the new NVIDIA GPU-accelerated Summit system at Oak Ridge National Laboratory and Sierra system at Lawrence Livermore National Laboratory. Summit is currently the world’s fastest supercomputer and Sierra the third fastest, according to the most recent Top500 list.

The Gordon Bell Prize winner will be announced Nov. 15 at the Supercomputing 2018 conference in Dallas.

Summit, an open system for researchers worldwide, is designed to deliver 200 petaflops of high-precision computing performance and over 3 exaops of AI performance, powered by 27,648 NVIDIA Volta Tensor Core GPUs.

These accelerators enable multi-precision computing, fusing the highly precise calculations needed to tackle the challenges of high performance computing with the efficient, lower-precision processing required for deep learning.
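The idea behind this multi-precision approach can be sketched on the CPU with NumPy. This is an illustrative emulation only, not NVIDIA's implementation: Tensor Cores multiply FP16 inputs but accumulate the products in FP32, trading input precision for throughput while keeping the accumulation accurate. The function name below is an assumption for the sketch.

```python
import numpy as np

def mixed_precision_matmul(a, b):
    """Emulate Tensor Core-style mixed precision: FP16 inputs, FP32 accumulation."""
    # Round the inputs down to half precision, as Tensor Cores consume them.
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    # Promote to float32 before the reduction so the accumulated sum keeps
    # single-precision range, mirroring the FP32 accumulator in the hardware.
    return a16.astype(np.float32) @ b16.astype(np.float32)

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
approx = mixed_precision_matmul(a, b)
exact = a @ b  # full float64 reference
# The FP16 inputs introduce a small, bounded error relative to FP64.
print("max abs error:", float(np.max(np.abs(approx - exact))))
```

The useful property for deep learning is that the error comes only from rounding the inputs; the sum itself does not degrade as matrices grow, which is why training can tolerate the lower input precision.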

Additionally, three of the six projects included NVIDIA researchers heavily involved in code development and performance tuning. NVIDIA employees listed as authors on nominated projects include: Kate Clark, Massimiliano Fatica, Michael Houston, Nathan Luehr, Akira Naruse, Everett Phillips, Joshua Romero and Sean Treichler.

Here’s an overview of the work by each of the five finalists who used NVIDIA Tensor Core GPUs:

  • Identification of extreme weather patterns from high-resolution climate simulations: A team led by Prabhat, a data scientist at Lawrence Berkeley National Laboratory, and NVIDIA engineer Michael Houston used AI software to analyze how extreme weather is likely to change in the future. Using the specialized Tensor Cores built into Summit’s NVIDIA GPUs, they achieved a performance of 1.13 exaflops, the fastest deep learning performance reported to date.
  • Use of AI and transprecision computing to accelerate earthquake simulation: A team led by Tsuyoshi Ichimura, of the University of Tokyo, used Summit to expand on an existing algorithm. The result was a 4x speedup, enabling the coupling of shaking ground and urban structures within an earthquake simulation. The team started its GPU work with OpenACC directives, which delivered significant performance improvements in a short period of time; it later introduced CUDA and AI algorithms to accelerate the code further.
  • Development of genomics algorithm to attain exascale speeds: A team from Oak Ridge National Laboratory led by Dan Jacobson achieved a peak throughput of 2.31 exaops, the fastest science application ever reported. Their work compares genetic variations within a population to uncover hidden networks of genes that contribute to complex traits. One condition the team is studying is opioid addiction, which was linked to nearly 50,000 U.S. deaths in 2017.
  • Identification of materials’ atomic-level information from electron microscopy data: Another Oak Ridge team led by Robert Patton used Summit to develop AI-powered software to fabricate materials at the atomic level. The team achieved a speed of 152.5 petaflops across 3,000 nodes using the MENNDL algorithm.
  • Development of an algorithm to help scientists quantify the lifetime of neutrons: A multi-institutional team led by Lawrence Berkeley National Laboratory computational nuclear physicist André Walker-Loud and Lawrence Livermore National Laboratory computational theoretical physicist Pavlos Vranas used the Volta GPU-based nodes of Summit and Sierra to calculate the physics of the subatomic particles making up protons and neutrons. They demonstrated improved workflow management capabilities that contributed to sustained performance of nearly 20 petaflops, a tenfold speedup from previous-generation systems. The team included Kate Clark of NVIDIA.

In the coming weeks, Oak Ridge National Laboratory will publish deep dives into all of the finalists’ work. To ensure you don’t miss these, or the winner of the Gordon Bell Prize, follow our Twitter handle at @NVIDIADC.