Everything’s bigger in Texas — supercomputers included.
The Texas Advanced Computing Center today launched Frontera, the most powerful academic supercomputer in the world, now featuring two subsystems powered by some 800 NVIDIA GPUs.
Frontera will leverage the AI, high performance computing and data analytics capabilities of NVIDIA Tensor Core GPUs to enable powerful scientific simulation and accelerate research areas including drug discovery, astrophysics and natural hazards modeling.
Housed at The University of Texas at Austin, Frontera ranked fifth on the most recent TOP500 list of fastest supercomputers, achieving 23.5 petaflops on the High-Performance Linpack benchmark and 38.75 petaflops of peak double-precision performance. The new GPU subsystems add a further 11 petaflops of peak single-precision performance for researchers.
NVIDIA GPUs power more than 100 systems on the TOP500 list, including half the top 10 and Summit, the world’s fastest supercomputer.
“With Frontera, the key is time to solution. That’s what we’re here for — to solve the biggest problems in science and engineering,” said Niall Gaffney, the center’s director of data-intensive computing.
One of the new subsystems features a cluster of 360 NVIDIA Tensor Core GPUs, liquid-cooled in racks developed by GRC, which specializes in immersion cooling for data centers. Another, built by IBM and named Longhorn, consists of 448 NVIDIA Tensor Core GPUs. Purpose-built with mixed-precision capabilities, these powerful GPUs provide scientists the flexibility to accelerate a variety of AI, simulation and data analysis workloads.
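The "mixed-precision capabilities" mentioned above refer to hardware that multiplies low-precision inputs while carrying intermediate results at higher precision, which keeps rounding error in check across long dot products. The NumPy comparison below is a minimal illustrative sketch (ours, not TACC's or NVIDIA's software) contrasting a matrix product carried out and stored entirely in FP16 with one computed in FP32:

```python
# Illustrative sketch of why mixed precision matters (not TACC/NVIDIA code).
# Both products start from the same FP16 inputs; only the precision of the
# arithmetic and the stored result differs.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float16)
b = rng.standard_normal((256, 256)).astype(np.float16)

# High-precision reference using the same quantized FP16 inputs.
ref = a.astype(np.float64) @ b.astype(np.float64)

# Product computed and stored at half precision.
fp16_only = (a @ b).astype(np.float64)

# "Mixed" flavor: FP16 inputs promoted to FP32 before multiply/accumulate.
mixed = (a.astype(np.float32) @ b.astype(np.float32)).astype(np.float64)

err_fp16 = np.abs(fp16_only - ref).max()
err_mixed = np.abs(mixed - ref).max()
print(err_mixed < err_fp16)  # higher-precision accumulation tracks the reference more closely
```

The gap between the two error figures is the practical payoff: workloads that tolerate reduced-precision inputs can run faster without letting rounding error accumulate unchecked.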
More than three dozen research teams have been using Frontera since the system began supporting science applications in June. The supercomputer was funded by a $60 million award from the National Science Foundation.
Over its lifetime, Frontera and its GPU subsystems will be used for hundreds of applications by thousands of researchers from academic institutions around the world.
From Molecular to Supermassive, Accelerating Science Research
High performance computing systems help researchers rapidly analyze data and run experiments and simulations. GPU acceleration enables faster iteration, cutting down the time it takes for scientists to achieve breakthroughs that can improve human health, broaden our understanding of the universe, and inform how we use materials and energy resources.
“Techniques like machine learning and AI are becoming more and more important for researchers doing large-scale compute,” Gaffney said. “GPU environments allow scientists to take advantage of acceleration for a wide array of applications.”
Initial projects benefiting from the powerful NVIDIA GPU-accelerated Frontera subsystems include:
- Astronomy insights: In the field of astrophysics, researchers often work with datasets 100 terabytes in size or more. GPU acceleration and AI enable them to separate signal from noise in these massive datasets, run large-scale simulations of the universe and better understand phenomena like neutron star collisions.
- Medical breakthroughs: Deep learning tools are used in the field of medical imaging to help doctors more quickly identify diseases and abnormalities, like spotting glioblastoma tumors from brain scans. With supercomputing resources, developers can create more complex models to improve the accuracy of cancer diagnosis.
- Drug discovery: Identifying promising molecular compounds for drug candidates is computationally demanding, time-consuming and expensive. Researchers can leverage GPU-accelerated systems for faster simulations of protein folding, helping narrow down candidates to test in a wet laboratory.
- Smart city planning: Cities collect vast quantities of data that can be analyzed for smarter urban planning. With an AI model that can analyze visual data from traffic pole cameras, cities can identify congested areas and better address safety concerns like dangerous intersections.
- Understanding Earth: In weather modeling and in energy research, scientists depend on high-fidelity simulations to analyze the interaction of complex natural systems. Researchers can use AI to better predict weather events and earthquakes, inform precision agriculture projects and explore potential energy sources such as nuclear fusion.
Learn more about how NVIDIA GPUs power the world’s top supercomputers.