Every six months, the gurus of the supercomputing community publish a list of the 500 most energy-efficient supercomputers in the world. It’s called the Green500 list.
In the last few years, the most energy-efficient systems have been built with our GPU accelerators. In fact, the top 15 systems on the latest Green500 list all have GPU accelerators at their heart.
The latest list marks a new milestone: the use of GPU accelerators has now spread beyond supercomputing and research users to mainstream enterprises. The top 15 on the list include the Italian oil and gas exploration giant ENI, as well as four financial institutions.
That’s because for most data centers today, the energy and cooling costs of a high-performance computing system over three to four years exceed the cost of purchasing the system itself.
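A rough back-of-the-envelope calculation shows how quickly energy spend can overtake the purchase price. The figures below (cluster price, average power draw, electricity rate, and PUE) are illustrative assumptions, not data from the list:

```python
# Sketch: years until cumulative energy + cooling cost equals purchase price.
# All inputs are illustrative assumptions.

def years_to_match_purchase(purchase_usd, avg_power_kw,
                            usd_per_kwh=0.10, pue=1.8):
    """PUE scales IT power up to total facility power (IT + cooling)."""
    annual_energy_cost = avg_power_kw * pue * 24 * 365 * usd_per_kwh
    return purchase_usd / annual_energy_cost

# e.g. a $2M cluster drawing 350 kW on average
print(round(years_to_match_purchase(2_000_000, 350), 1))  # -> 3.6
```

With these assumed numbers, the energy bill matches the purchase price in roughly three and a half years, consistent with the three-to-four-year window above.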
The enormous speed-ups of GPU accelerators over CPU-only systems give research and enterprise data centers the ability not just to perform tasks that weren’t possible before, but to do so at an energy efficiency that dramatically lowers operational costs.
Driving the trend is NVIDIA’s ongoing effort to push the envelope in both performance and energy efficiency. The Kepler compute architecture introduced last year provided a big boost in this area, delivering three times better energy efficiency than its predecessor. We expect future NVIDIA architectures will extend this lead.
Tsubame-KFC Still No. 1
Still sitting atop the list is the Tsubame-KFC system at the Tokyo Institute of Technology.
The world’s greenest supercomputer combines NVIDIA Tesla K20X GPUs with a specialized cooling system that immerses servers in a special oil-based liquid bath.
Designed for research in a range of areas, from drug discovery to earthquake simulation, Tsubame-KFC delivers a record 4.4 gigaflops per watt.
The Wilkes system at Cambridge University took second place, clocking in at 3.6 gigaflops per watt. Japan’s GPU-accelerated system at the Center for Computational Sciences, at the University of Tsukuba, took the third spot at 3.5 gigaflops per watt.
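The Green500 metric behind these rankings is simply measured Linpack performance divided by the power drawn during the run. A minimal sketch of that calculation, using illustrative performance and power figures chosen to land near the 4.4 gigaflops-per-watt mark (not the systems’ actual reported numbers):

```python
# Green500 efficiency: Linpack Rmax divided by power consumed during the run.
# The input values here are illustrative, not official measurements.

def gflops_per_watt(rmax_tflops, power_kw):
    """Convert TFLOPS and kW to the Green500's gigaflops-per-watt metric."""
    return (rmax_tflops * 1_000) / (power_kw * 1_000)

# e.g. ~125 TFLOPS sustained on a ~28.4 kW power budget
print(round(gflops_per_watt(125.0, 28.4), 1))  # -> 4.4
```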
Moving Down the Exascale Path
Improving energy efficiency while increasing performance is central to achieving exascale computing — that is, running at a speed of 1 exaflops, or a million trillion flops.
That’s because if we were to build an exascale supercomputer today, it would require about 10 times more power than the city of San Francisco.
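One way to see the scale of the problem: even at the list-topping 4.4 gigaflops per watt, a machine sustaining one exaflops would draw hundreds of megawatts. A quick back-of-the-envelope check:

```python
# Power needed for 1 exaflops at the current best Green500 efficiency.
EXAFLOPS = 1e18              # 1 exaflops = 10**18 flops
BEST_GFLOPS_PER_WATT = 4.4   # Tsubame-KFC's record from the list above

watts = EXAFLOPS / (BEST_GFLOPS_PER_WATT * 1e9)
print(round(watts / 1e6))    # megawatts -> 227
```

Roughly 227 megawatts, which is why efficiency, not raw speed, is the gating factor on the road to exascale.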
As future GPU accelerators achieve new levels of energy-efficient performance, and with the introduction of new, more efficient processor architectures for HPC, like ARM64, we expect to continue the steady advance to exascale.