NVIDIA DGX SATURNV Ranked World’s Most Efficient Supercomputer by Wide Margin

by Roy Kim

Already speeding our efforts to build smarter cars and more powerful GPUs, NVIDIA’s new DGX SATURNV supercomputer is ranked the world’s most efficient — and 28th fastest overall — on the Top500 list of supercomputers released Monday.

Our SATURNV supercomputer, powered by new Tesla P100 GPUs, delivers 9.46 gigaflops/watt — a 42 percent improvement over the 6.67 gigaflops/watt delivered by the most efficient machine on the Top500 list released just last June. Compared with a supercomputer of similar performance, the Camphor 2 system, which is powered by Intel Xeon Phi Knights Landing processors, SATURNV is 2.3x more energy efficient.
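The efficiency figures above are easy to check. A minimal sketch (the numbers come straight from this post):

```python
# Sanity check on the efficiency figures quoted above.
saturnv = 9.46    # gigaflops/watt (SATURNV, November 2016 list)
prev_best = 6.67  # gigaflops/watt (most efficient machine, June 2016 list)

improvement = (saturnv - prev_best) / prev_best
print(f"Improvement: {improvement:.0%}")  # → Improvement: 42%
```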

That efficiency is key to building machines capable of reaching exascale speeds — that’s 1 quintillion, or 1 billion billion, floating-point operations per second. Such a machine could help design efficient new combustion engines, model clean-burning fusion reactors, and achieve new breakthroughs in medical research.
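To see why efficiency is the gating factor, consider a back-of-the-envelope extrapolation (an illustration only, not a roadmap figure): at SATURNV's measured efficiency, a full exascale machine would still draw on the order of 100 megawatts.

```python
# Power draw of a hypothetical exascale machine at SATURNV's
# measured efficiency -- an extrapolation, not a published target.
exaflop = 1e18       # 1 quintillion floating-point operations per second
efficiency = 9.46e9  # flops per watt (9.46 gigaflops/watt)

watts = exaflop / efficiency
print(f"{watts / 1e6:.0f} MW")  # → 106 MW
```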

GPUs — with their massively parallel architecture — have long powered some of the world’s fastest supercomputers. More recently, they’ve been key to an AI boom that’s given us machines that perceive the world as we do, understand our language and learn from examples in ways that exceed our own (see “Accelerating AI with GPUs: A New Computing Model”).


Using GPUs to Design GPUs

We’re convinced AI can give every company a competitive advantage. That’s why we’ve assembled the world’s most efficient — and one of the most powerful — supercomputers to aid us in our own work.

Assembled by a team of a dozen engineers using 124 DGX-1s — the AI supercomputer in a box we unveiled in April — SATURNV helps us build the autonomous driving software that’s a key part of our NVIDIA DRIVE PX 2 self-driving vehicle platform.
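From the node count in this post, the GPU total falls out directly. The per-GPU FP64 peak used below is NVIDIA's published Tesla P100 spec, not a figure stated in the article, so treat the aggregate as a rough upper bound rather than a measured result:

```python
# SATURNV scale, from the figures in this post.
# 5.3 TFLOPS FP64 per Tesla P100 is the published spec (an assumption
# here), so the aggregate is theoretical peak, not Linpack-measured.
nodes = 124
gpus_per_node = 8
fp64_tflops_per_gpu = 5.3

total_gpus = nodes * gpus_per_node
peak_pf = total_gpus * fp64_tflops_per_gpu / 1000
print(total_gpus, f"{peak_pf:.1f} PF")  # → 992 5.3 PF
```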

We’re also training neural networks to understand chip design and very-large-scale integration, so our engineers can work more quickly and efficiently. Yes, we’re using GPUs to help us design GPUs.

Most importantly, SATURNV’s power will give us the ability to train — and design — new deep learning networks quickly.

DGX-1: Where AI and Supercomputing Intersect


We think such systems can unlock the power of AI for enterprises, research groups, and academia.

DGX-1 is an appliance that integrates deep learning software, development tools and eight of our Tesla P100 GPUs — based on our new Pascal architecture — to pack computing power equal to 250 x86 servers into a device about the size of a stove top.
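The "250 x86 servers" comparison refers to deep learning throughput. A quick sketch of the arithmetic behind it; the per-GPU FP16 peak below is NVIDIA's published P100 spec, an assumption on my part rather than a number from this post:

```python
# Aggregate half-precision throughput of one DGX-1.
# 21.2 TFLOPS FP16 per Tesla P100 is the published spec (assumed here).
gpus = 8
fp16_tflops_per_gpu = 21.2

dgx1_fp16 = gpus * fp16_tflops_per_gpu
print(f"{dgx1_fp16:.0f} TFLOPS FP16")  # → 170 TFLOPS FP16
```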

Since its launch, DGX-1 has been adopted by teams looking to harness AI in a wide variety of settings.

  • Enterprise software giant SAP is using DGX-1 AI supercomputers to build machine learning solutions for its 320,000 customers.
  • Researchers at groups such as OpenAI, Stanford and New York University are using DGX-1 for their cutting-edge work.
  • Startup BenevolentAI uses DGX-1 as part of its effort to accelerate drug discovery by using deep natural language processing, machine learning and AI to formulate new, usable knowledge from complex scientific information.

We’re confident AI — and DGX-1 — will play a key role in even more breakthroughs to come.

  • wilsonjonathan

    Now I know the supercomputer is a massive scale-out (or is it scale-up) system and probably requires a small power station to supply the electricity and costs about the same as the GDP of a small country to buy… but… how much does it cost to buy just 1 of the “boxes” (3rd photo); what’s in the “box”; what kind of power supply does the “box” need; can it run just on its own… and finally… can it run Crysis?

  • Mandar Potdar
  • wilsonjonathan

    Thank you. I had seen that page, which left me none the wiser… however, I just noticed on it there was a PDF datasheet link that gives a much better overview of what’s in the box. That is one heck of a lot of kit crammed into a box, with a 3.2 kW power requirement — ouch! Although to be fair, its target audience is obviously not going to be some guy’s “game” room.

  • AEON

    The bad thing, though — wouldn’t you have to wait in line for like 10 years, and by then there will be something better? I know they want to put it in “the cloud” and remote connect (maybe I’m wrong), but there is probably a long wait still. lol — Going to check out the blog link 🙂

  • kalqlate

    Good question. I have no real idea what actual demand will be. Being that ML is mega popular and growing already, probably pretty high.