by Sumit Gupta

U.S. Energy Secretary Steven Chu recently described to Forbes how supercomputing is bringing change to our energy future, as well as the overall pursuit of science.

We couldn’t agree more.

The U.S. Department of Energy (DOE) is backing up this belief by investing in systems such as the GPU-accelerated Titan supercomputer at the Oak Ridge National Laboratory (ORNL) in Tennessee. Titan will be one of the fastest machines in the world when it is deployed.

Scientists at DOE laboratories are using supercomputers to simulate everything from how viruses attack biological cells to how combustion engines can be made more fuel efficient – and even to develop cleaner, more sustainable alternative energy sources.

Powerful GPU-based supercomputers, like Titan, are key to performing computer simulations, which have become the third pillar of science – joining theory and experimentation as a means of discovering new phenomena or testing hypotheses.

The Oak Ridge National Laboratory is home to the Titan supercomputer

Simulating physical and biological systems is often much easier than performing experiments in a laboratory. For example, determining the exact temperature required to make a chemical reaction happen in a lab is not only labor intensive, but also time consuming. A software program running on a supercomputer can easily model the same chemical reaction at precise temperatures and other environmental conditions.
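As a toy illustration of that kind of temperature sweep, the well-known Arrhenius equation relates a reaction's rate constant to temperature. The sketch below is purely illustrative – the pre-exponential factor, activation energy, and target threshold are hypothetical placeholders, not parameters from any real DOE simulation code:

```python
import math

# Arrhenius equation: k = A * exp(-Ea / (R * T))
# A toy model of sweeping reaction temperatures in software,
# something that would take many repeated runs in a physical lab.
R = 8.314      # gas constant, J/(mol*K)
A = 1.0e13     # pre-exponential factor, 1/s (hypothetical)
Ea = 80_000.0  # activation energy, J/mol (hypothetical)

def rate_constant(temp_kelvin):
    """Reaction rate constant at a given temperature (Arrhenius)."""
    return A * math.exp(-Ea / (R * temp_kelvin))

target = 1.0  # desired rate constant, 1/s (hypothetical threshold)
for t in range(250, 501, 50):
    k = rate_constant(t)
    marker = "  <- exceeds target" if k >= target else ""
    print(f"T = {t} K: k = {k:.3e} 1/s{marker}")
```

A real production code would of course model far more physics than one closed-form equation, but the pattern – scanning conditions numerically instead of experimentally – is the same.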

The challenge with this kind of research is that accurate computer simulations demand enormous computing performance from systems consisting of tens of thousands of servers – and such systems simply consume too much power to be practical. The current Jaguar supercomputer at ORNL, for example, delivers about 1 petaflop of computing performance while consuming 7 megawatts of power – roughly the electricity needs of a small town!

This is why ORNL is building Titan using traditional x86 CPUs accelerated by NVIDIA Tesla GPUs. GPU-accelerated computing requires much less power than CPU-only supercomputers.

The Tokyo Institute of Technology (popularly called Tokyo Tech) made a similar choice. It recently deployed the Tsubame 2.0 GPU-accelerated supercomputer, the greenest petaflop system in the world. Tsubame 2.0 consists of about 1,500 computer servers (roughly 40 racks) accelerated by NVIDIA's Fermi-based Tesla M2050 GPUs. It delivers 1 petaflop of sustained performance while consuming 1.3 megawatts. In contrast, it would take about 4,000 non-accelerated, CPU-only servers (roughly 100 racks) to build a similar petaflop supercomputer, and it would consume between 2 and 3 megawatts.
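The gap is easy to quantify as flops per watt, using only the figures quoted above. This back-of-envelope sketch assumes 1 petaflop for each system and takes the midpoint of the 2–3 megawatt estimate for the hypothetical CPU-only equivalent:

```python
# Back-of-envelope energy efficiency from the figures in this article.
# 1 petaflop = 1e9 megaflops; 1 megawatt = 1e6 watts.
def mflops_per_watt(petaflops, megawatts):
    """Sustained megaflops delivered per watt consumed."""
    return (petaflops * 1.0e9) / (megawatts * 1.0e6)

systems = {
    "Jaguar (CPU-only)":       (1.0, 7.0),  # ~1 PF at ~7 MW
    "Tsubame 2.0 (GPU)":       (1.0, 1.3),  # 1 PF sustained at 1.3 MW
    "CPU-only equivalent":     (1.0, 2.5),  # midpoint of the 2-3 MW estimate
}

for name, (pf, mw) in systems.items():
    print(f"{name}: {mflops_per_watt(pf, mw):.0f} MFLOPS/W")
```

By this rough measure, the GPU-accelerated Tsubame 2.0 delivers roughly 769 MFLOPS per watt, versus about 143 for Jaguar – a better-than-5x improvement in energy efficiency.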

Scientists and engineers will always desire greater computing capability in their quest to advance the frontiers of science. The DOE’s commitment to providing them the best, most energy-efficient supercomputing tools will help the U.S. continue to lead the world in technology and engineering.