by Sumit Gupta

Lauren Sommer wrote a great blog post over the weekend on KQED about how supercomputers have hit the “energy wall” – a decidedly real supercomputing problem that NVIDIA’s GPU technology can help to overcome.

This is what 1,000 homes looks like.

The blog post mentions the Hopper supercomputer, located at the Lawrence Berkeley National Lab (LBNL). The system consumes 3 megawatts of electricity (enough to power 2,000-3,000 homes) and delivers 1 petaflops of performance – one quadrillion floating-point operations per second, equivalent to about 68,000 laptops.

It’s hard to imagine these numbers scaling to exascale systems – the “energy wall” here would just be too high to reasonably surmount. In fact, I just got back from the International Supercomputing Conference in Hamburg, where the running joke was that power companies would soon be giving supercomputers away for free if you sign up for a five-year power contract with them.

Here at NVIDIA, we’ve been working on a solution to the supercomputing power crisis for several years. Supercomputers can use NVIDIA Tesla GPUs to dramatically accelerate supercomputing applications. Like a turbocharger on your car, GPUs kick in to boost your standard Intel or AMD CPUs when you need the extra oomph.

Using GPUs is a much more energy-efficient way of supercomputing. You choose the right processor to do the right job. When I edit pictures of my kids, for example, my computer’s sequential Intel or AMD x86 CPU is used to access the hard disk, retrieve the file, and open it. Once the picture is open and I want to do red-eye reduction or remove blur, the GPU kicks into gear to accelerate the job.

Three of the Top Five supercomputers in the world are accelerated by NVIDIA Tesla GPUs. One of these is the Tsubame 2.0 system at the Tokyo Institute of Technology. Like the Hopper system at LBNL, it delivers about 1 petaflops of performance. But thanks to its GPUs, it consumes less than half the power of the Hopper system. To be exact, Tsubame achieves 1.19 petaflops and sips a “mere” 1.4 megawatts of electricity.

Half the power for the same performance is a big leap forward. But we have a long road ahead, especially as we move towards exascale supercomputers that will be 1,000 times more powerful than the current petaflop supers. Otherwise, the power companies will indeed start giving supercomputers away for free!
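To put rough numbers on the “energy wall,” here is a back-of-the-envelope sketch using only the figures quoted above (exact Linpack and power numbers vary from list to list, so treat these as approximations):

```python
# Back-of-the-envelope efficiency comparison using the figures from the post.
# PF = petaflops (10^15 floating-point operations per second), MW = megawatts.

hopper_pf, hopper_mw = 1.0, 3.0      # Hopper at LBNL
tsubame_pf, tsubame_mw = 1.19, 1.4   # Tsubame 2.0 at Tokyo Tech

hopper_eff = hopper_pf / hopper_mw     # ~0.33 PF per MW
tsubame_eff = tsubame_pf / tsubame_mw  # ~0.85 PF per MW

# Naive linear extrapolation to exascale (1,000 PF) -- this is exactly
# why scaling today's efficiency runs into the "energy wall":
exa_mw_hopper = 1000 / hopper_eff      # ~3,000 MW (3 gigawatts)
exa_mw_tsubame = 1000 / tsubame_eff    # ~1,180 MW

print(f"Hopper:  {hopper_eff:.2f} PF/MW -> an exaflop would need ~{exa_mw_hopper:,.0f} MW")
print(f"Tsubame: {tsubame_eff:.2f} PF/MW -> an exaflop would need ~{exa_mw_tsubame:,.0f} MW")
```

Even at Tsubame’s efficiency, a naive scale-up to exascale would draw over a gigawatt – which is why efficiency, not just raw speed, has to keep improving.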


Comments

  • mark hahn (http://profiles.google.com/mark.c.hahn)

    the K-computer folks would disagree that GPUs are necessary for power-efficient supercomputing…

  • Sumit Gupta

    Hi Mark
    The K computer is very interesting, and a good comparison point is another Japanese supercomputer, Tsubame 2.0, which was built using Tesla GPUs.
    K computer requires 10 megawatts to deliver 8 petaflops = 0.8 PF/MW
    Tsubame requires 1 megawatt to deliver 1.2 petaflops = 1.2 PF/MW

    In effect, the GPU-based Tsubame supercomputer is 1.5 times more power efficient than the custom-CPU-based K computer.

    One of the biggest advantages of heterogeneous/hybrid computers is that they are far more power efficient than CPU-only computers.
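    As a quick arithmetic check of that ratio (a sketch using only the round numbers from this thread):

```python
# Performance-per-watt comparison from the round figures in the comment above.
k_eff = 8.0 / 10.0       # K computer: 8 PF / 10 MW = 0.8 PF/MW
tsubame_eff = 1.2 / 1.0  # Tsubame 2.0: 1.2 PF / 1 MW = 1.2 PF/MW

ratio = tsubame_eff / k_eff
print(round(ratio, 2))  # -> 1.5, i.e. "1.5 times more power efficient"
```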

  • Ashish Singh

    Hello Dr. Sumit,

    I believe that Nvidia provides APIs to throttle up/down GPUs based on the application requirement. Is it possible to control this throttling from user level? Also it would be nice if you could point me to some document regarding this. I am curious to know about the benefits and side-effects of doing so.


  • George Cummings (http://www.facebook.com/people/George-Cummings/100000638966153)

    Of course, this all assumes the power bill is a consideration for entities who have a need for and can afford supercomputers. Unlikely.
    Another problem that wasn’t a problem, solved.