The new wave of supercomputers is largely GPU-accelerated, the latest TOP500 list of the world’s fastest systems shows.
Of the 102 new supercomputers to join the closely watched ranking, 42 use NVIDIA GPU accelerators, among them AiMOS, the most powerful new system on the list, which debuted this week. Coming in at No. 24, AiMOS achieves 8 petaflops on the High-Performance Linpack benchmark, a yardstick of supercomputing performance.
Installed at Rensselaer Polytechnic Institute in New York, the system is powered by NVIDIA V100 Tensor Core GPUs, just like Oak Ridge National Laboratory’s Summit, the world’s fastest supercomputer. NVIDIA GPUs power a record 136 systems on the latest TOP500 list, including half of the top 10.
Europe’s and Japan’s fastest supercomputers, as well as the world’s fastest industrial supercomputer, are all accelerated by NVIDIA GPUs.
Nearly 40 percent of the total compute power on the TOP500 list — 626 petaflops — comes from GPU-accelerated systems. Just over a decade ago, no supercomputers on the list were accelerated.
Three of the TOP500 supercomputers are in-house NVIDIA systems, including our DGX SuperPOD, which ranks 20th on the latest list. These systems are used around the clock for compute-intensive AI workloads like autonomous vehicle development.
NVIDIA GPUs power 90 percent of the top 30 supercomputers on the Green500 list, also released this week at SC19.
Supercomputers accelerated by NVIDIA GPUs are used in universities and laboratories worldwide for groundbreaking research. NVIDIA’s full-stack optimization approach ensures developers and researchers benefit from this computing horsepower in their applications to advance science and do their life’s work.
The Summit supercomputer, featuring more than 27,000 NVIDIA V100 Tensor Core GPUs, is enabling the world’s first exascale science applications, including:
- Genomics: Opioid addiction was linked to more than 50,000 deaths in the U.S. in 2017. To better understand and address the opioid epidemic, researchers at Oak Ridge National Laboratory are investigating genetic variations that contribute to complex traits like chronic pain and addiction. Using Summit and mixed-precision techniques, the team processed around 300 quadrillion element comparisons per second, achieving a peak throughput of 2.31 exaops — the fastest science application ever reported.
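The speedup behind those exaops numbers comes from mixed precision: inputs are stored in half precision (float16) so Tensor Cores can chew through them, while results accumulate in single precision (float32) to preserve accuracy. A minimal NumPy sketch of the idea (purely illustrative, not the Oak Ridge code):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((64, 128)).astype(np.float32)
b = rng.random((128, 64)).astype(np.float32)

# Reference result computed entirely in float32.
ref = a @ b

# Mixed precision: float16 inputs, with the multiply-accumulate
# carried out in float32 (the same pattern Tensor Cores use in hardware).
mixed = np.matmul(a.astype(np.float16), b.astype(np.float16),
                  dtype=np.float32)

# Accuracy loss is limited to the float16 rounding of the inputs.
rel_err = np.abs(mixed - ref).max() / np.abs(ref).max()
print(f"max relative error: {rel_err:.2e}")
```

Halving the input width doubles the data that fits through memory and the arithmetic units per cycle, which is why mixed precision is central to reaching exaops-scale throughput.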
- Meteorology: Extreme weather events are on the rise, in part due to human-caused climate change. Scientists at Lawrence Berkeley National Laboratory are working to more accurately predict the path of extreme weather patterns with AI. The Gordon Bell prize-winning team trained their neural network using Summit, setting a performance record for the fastest deep learning algorithm, at 1.13 exaflops.
- Pathology: By 2025, the annual number of new cancer cases worldwide is projected to hit 21.5 million — creating a massive demand for doctors to analyze biopsy scans. Stony Brook University developed a software stack, MENNDL, to generate an AI model that analyzes pathology data with comparably high accuracy and 16x faster inference than a fine-tuned version of the InceptionNet model. This will enable real-time processing of the 10-gigapixel images generated by biopsy scans. Using Summit, the researchers achieved 1.3 exaflops of performance while generating their neural network.
- Nuclear waste remediation: Located in Washington state, the 580-square-mile Hanford Site was used to produce plutonium for nuclear weapons and nuclear reactors from 1943 to 1989. After it shut down, more than 100 square miles of contaminated groundwater was left behind. To aid in the cleanup effort, researchers from Lawrence Berkeley National Laboratory, Pacific Northwest National Laboratory, Brown University and NVIDIA developed physics-informed generative adversarial networks to quantify subsurface flow. The application achieved 1.2 exaflops of peak and sustained performance on Summit.
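"Physics-informed" means the network is penalized not only for mismatching observed data but also for violating the governing equations of the flow it models. A hypothetical sketch of that loss term (not the authors' code), using a finite-difference residual of a 1-D steady diffusion equation, d²h/dx² = 0, as a stand-in for the subsurface-flow physics:

```python
import numpy as np

def physics_residual(h, dx):
    """Finite-difference residual of d2h/dx2 = 0 at interior grid points."""
    return (h[2:] - 2.0 * h[1:-1] + h[:-2]) / dx**2

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]

linear = 2.0 - 1.5 * x                          # satisfies the PDE exactly
wiggly = linear + 0.1 * np.sin(8 * np.pi * x)   # physically inconsistent

# Mean-squared PDE residual, the kind of term added to a generator's loss.
loss_linear = np.mean(physics_residual(linear, dx) ** 2)
loss_wiggly = np.mean(physics_residual(wiggly, dx) ** 2)
print(loss_linear, loss_wiggly)
```

The residual is near zero for the physically consistent field and large for the oscillatory one, so minimizing it steers generated solutions toward fields that obey the physics even where no measurements exist.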