NVIDIA Sets World Record for Quantum Computing Simulation With cuQuantum Running on DGX SuperPOD

New SDK, running on NVIDIA's Selene supercomputer, simulates 8x more qubits than prior work on a key test in quantum computing.
by Sam Stanwyck

In the emerging world of quantum computing, we just broke a record with big impact, and we’re making our software available so anyone can do this work.

Quantum computing will propel a new wave of advances in climate research, drug discovery, finance and more. By simulating tomorrow’s quantum computers on today’s classical systems, researchers can develop and test quantum algorithms more quickly and at scales not otherwise possible.

Driving toward that future, NVIDIA created the largest-ever simulation of a quantum algorithm for solving the MaxCut problem, using cuQuantum, our SDK for accelerating quantum circuit simulation on GPUs.

In the math world, MaxCut is often cited as an example of an optimization problem that no known algorithm can solve efficiently: given a graph, it asks how to split the vertices into two groups so that as many edges as possible cross between them. MaxCut algorithms are used to design large computer networks, find the optimal layout of chips with billions of silicon pathways and explore the field of statistical physics.
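
To make the difficulty concrete, here's a minimal brute-force sketch in plain Python. The graph is a toy example of our own, not data from the record run; the point is that checking every cut of an n-vertex graph takes 2^n evaluations.

```python
from itertools import product

# A toy 5-vertex graph given as a list of edges (hypothetical example data).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
n = 5

def cut_size(assignment, edges):
    """Count edges whose endpoints land on opposite sides of the cut."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Enumerate all 2^n two-colorings of the vertices -- feasible here,
# hopeless for the thousands of vertices discussed in this post.
best = max(product((0, 1), repeat=n), key=lambda a: cut_size(a, edges))
print(best, cut_size(best, edges))
```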

MaxCut is a key problem in the quantum community because it’s one of the leading candidates for demonstrating an advantage from using a quantum algorithm.

We used the cuTensorNet library in cuQuantum running on NVIDIA’s in-house supercomputer, Selene, to simulate a quantum algorithm to solve the MaxCut problem. Using 896 GPUs to simulate 1,688 qubits, we were able to solve a graph with a whopping 3,375 vertices. That’s 8x more qubits than the previous largest quantum simulation.
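
cuTensorNet's own API isn't shown here (the library ships in December; see below). As a conceptual sketch only: the tensor-network method it accelerates treats every input state and gate as a tensor and contracts the whole circuit in one pass, so cost tracks the network's structure rather than a full 2^n state vector. Here's a two-qubit Bell circuit contracted with NumPy's einsum; everything in it is our own illustration, not the cuTensorNet interface.

```python
import numpy as np

# Input states |0> and gate tensors.
ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)  # (out0, out1, in0, in1)

# Contract the circuit |0>|0> -> H on qubit 0 -> CNOT as one network.
state = np.einsum('a,ba,c,debc->de', ket0, H, ket0, CNOT)
print(state.reshape(4))  # Bell state: amplitude 1/sqrt(2) on |00> and |11>
```

For a real workload, the hard part is choosing a good order in which to contract thousands of such tensors; finding that path and executing it on GPUs is what a library like cuTensorNet is built for.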

Our solution was also highly accurate, reaching 96% of the best known answer. We set this new record with an algorithm developed by NVIDIA researchers and an open source framework. (Editor’s note: Since publishing this post, we’ve announced larger simulations up to 10,000 vertices with 5,000 qubits, using 20 NVIDIA DGX A100 nodes, and achieving 93% accuracy.)

The previous world record for a tensor network MaxCut simulation on a supercomputer (left) and the results using the cuTensorNet library running on Selene (right).

Our breakthrough opens the door for using cuQuantum on NVIDIA DGX systems to research quantum algorithms at a previously impossible scale, accelerating the path to tomorrow’s quantum computers.

Keys to the Quantum World

You can test drive the same software that set this world record.

Starting today, the first library from cuQuantum, cuStateVec, is in public beta, available to download. It uses state vectors to accelerate simulations with tens of qubits.
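
To see why state-vector simulation tops out at tens of qubits, here's a minimal NumPy sketch of its core operation, applying a one-qubit gate to a full 2^n-amplitude state. It illustrates the technique only; it is not cuStateVec's API.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    # View the 2^n amplitudes as an n-axis (2, 2, ..., 2) tensor,
    # contract the gate into the target axis, then flatten again.
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 3
state = np.zeros(2**n, dtype=np.complex128)
state[0] = 1.0  # start in |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = apply_single_qubit_gate(state, H, target=0, n_qubits=n)
print(np.abs(state) ** 2)

# The catch is memory: a 30-qubit state vector holds 2^30 complex128
# amplitudes, i.e. 16 GiB, and doubles with every added qubit.
```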

The cuTensorNet library that helped us set the world record uses tensor networks to simulate up to hundreds or even thousands of qubits for some promising near-term algorithms. It will be available in December.

Get the Latest News at GTC

We invite you to try cuQuantum, get dramatically accelerated performance on your simulations and go break some big records.

Learn more about cuQuantum's partner ecosystem.

To get the big picture, attend NVIDIA GTC, taking place online through Nov. 11, and watch NVIDIA CEO Jensen Huang's GTC keynote address.