How Dell Technologies Is Building the Engines of AI Factories With NVIDIA Blackwell

An inside look at assembling servers for the world's largest AI factories.
by Scott Martin

Over a century ago, Henry Ford pioneered the mass production of cars and engines to provide transportation at an affordable price. Today, the technology industry manufactures the engines for a new kind of factory — those that produce intelligence.

As companies and countries increasingly focus on AI and move from experimentation to implementation, demand for AI technologies continues to grow exponentially. Leading system builders are racing to ramp up production of AI servers – the engines of AI factories – to meet the world's exploding demand for intelligence.

Dell Technologies is a leader in this renaissance. Dell and NVIDIA have partnered for decades and continue to push the pace of innovation. In its most recent earnings call, Dell projected that its AI server business will grow to at least $15 billion this year.

“We’re on a mission to bring AI to millions of customers around the world,” said Michael Dell, chairman and chief executive officer, Dell Technologies, in a recent announcement at Dell Technologies World. “With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle across use cases, from training to deployment, at any scale.”

The latest Dell AI servers, powered by NVIDIA Blackwell, offer up to 50x more AI reasoning inference output and a 5x improvement in throughput compared with the NVIDIA Hopper platform. Customers use them to generate tokens for new AI applications that will help solve some of the world's biggest challenges, from disease prevention to advanced manufacturing.

Dell servers with NVIDIA GB200 are shipping at scale to a variety of customers, including CoreWeave's new NVIDIA GB200 NVL72 system. One of Dell's U.S. factories can ship thousands of NVIDIA Blackwell GPUs to customers in a week. That speed is why Dell was chosen by one of its largest customers to deploy 100,000 NVIDIA GPUs in just six weeks.

But how is an AI server made? We visited a facility to find out.

Building the Engines of Intelligence

We visited one of Dell’s U.S. facilities that builds the most compute-dense NVIDIA Blackwell generation servers ever manufactured.

Modern automobile engines have more than 200 major components and take three to seven years to roll out to market. NVIDIA GB200 NVL72 servers have 1.2 million parts and were designed just a year ago.

Amid a forest of racks, grouped by phase of assembly, Dell employees quickly slide in GB200 compute trays and NVLink Switch networking trays, then test the systems. The company said its ability to engineer the compute, network and storage assembly under one roof – and to fine-tune, deploy and integrate complete systems – is a powerful differentiator. Speed also matters: the Dell team can build, test and ship a rack, test it again on-site at a customer location and turn it over within 24 hours.

The servers are destined for state-of-the-art data centers that require a dizzying quantity of cables, pipes and hoses to operate. One data center can have 27,000 miles of network cable — enough to wrap around the Earth. It can pack about six miles of water pipes and 77 miles of rubber hoses, and can circulate 100,000 gallons of water per minute for cooling.

With new AI factories being announced each week – the European Union has plans for seven, while India, Japan, Saudi Arabia, the UAE and Norway are also developing them – the demand for these engines of intelligence will only grow in the months and years ahead.