NVIDIA and Partners Bring AI Supercomputing to Enterprises

NVIDIA DGX SuperPOD infrastructure powered by DDN, IBM, Mellanox and NetApp eases every enterprise’s pursuit of AI development.
by Tony Paikeday

Academia, hyperscalers and scientific researchers have been the big beneficiaries of high-performance computing and AI infrastructure. Yet businesses have largely been on the outside looking in.

No longer. NVIDIA DGX SuperPOD gives businesses a proven design formula for building and running enterprise-grade AI infrastructure at extreme scale. The reference architecture provides a prescription to follow, helping businesses avoid exhaustive, protracted design and deployment cycles and capital budget overruns.

Today, at SC19, we’re taking DGX SuperPOD a step further. It’s available as a consumable solution that now integrates with the leading names in data center IT — including DDN, IBM, Mellanox and NetApp — and is fulfilled through a network of qualified resellers. We’re also working with ScaleMatrix to bring self-contained data centers in a cabinet to the enterprise.

The Rise of the Supercomputing Enterprise

AI is an accelerant for gaining competitive advantage. It can open new markets and even address a business's existential threats. Models once considered untrainable, such as those for natural language processing, become practical to train with massive infrastructure scale.

But leading-edge AI demands leadership-class infrastructure — and DGX SuperPOD offers extreme-scale multi-node training of even the most complex models, like BERT for conversational AI.

It consolidates often siloed pockets of AI and machine learning development into a centralized shared infrastructure, bringing together data science talent so projects can quickly go from concept to production at scale.

And it maximizes resource efficiency, avoiding stranded, underutilized assets and increasing a business’s return on its infrastructure investments.

Data Center Leaders Support NVIDIA DGX SuperPOD 

Several of our partners have completed testing and validation of DGX SuperPOD in combination with their high-performance storage offerings and Mellanox terabit-speed InfiniBand and Ethernet network fabrics.

DGX SuperPOD with IBM Spectrum Storage

“Deploying faster with confidence is only one way our clients are realizing the benefits of the DGX SuperPOD reference architecture with IBM Storage,” said Douglas O’Flaherty, director of IBM Storage Product Marketing. “With comprehensive data pipeline support, they can start with an all-NVMe ESS 3000 flash solution and adapt quickly. With the software-defined flexibility of IBM Spectrum Scale, the DGX SuperPOD design easily scales, extends to public cloud, or integrates IBM Cloud Object Storage and IBM Spectrum Discover. Supported by the expertise of our business partners, we enhance data science productivity and organizational adoption of AI.”

DGX SuperPOD with DDN Storage

“Meeting the massive demands of emerging large-scale AI initiatives requires compute, networking and storage infrastructure that exceeds architectures historically available to most commercial organizations,” said James Coomer, senior vice president of products at DDN. “Through DDN’s extensive development work and testing with NVIDIA and their DGX SuperPOD, we have demonstrated that it is now possible to shorten supercomputing-like deployments from weeks to days and deliver infrastructure and capabilities that are also rock solid and easy to manage, monitor and support. When combined with DDN’s A3I data management solutions, NVIDIA DGX SuperPOD creates a real competitive advantage for customers looking to deploy AI at scale.”

DGX SuperPOD with NetApp

“Industries are gaining competitive advantage with high performance computing and AI infrastructure, but many are still hesitant to take the leap due to the time and cost of deployment,” said Robin Huber, vice president of E-Series at NetApp. “With the proven NVIDIA DGX SuperPOD design built on top of the award-winning NetApp EF600 all-flash array, customers can move past their hesitation and will be able to accelerate their time to value and insight while controlling their deployment costs.”

NVIDIA has built a global network of partners who’ve been qualified to sell and deploy DGX SuperPOD infrastructure:

  • In North America: World Wide Technology
  • In Europe, the Middle East and Africa: Atos
  • In Asia: LTK and Azwell
  • In Japan: GDEP

To get started, read our solution brief and then reach out to your preferred DGX SuperPOD partner.

Scaling Supercomputing Infrastructure — Without a Data Center

Many organizations that need to scale supercomputing simply don’t have access to a data center optimized for the unique demands of AI and HPC infrastructure. We’re partnering with ScaleMatrix, a DGX Ready Data Center Program partner, to bring self-contained data centers in a cabinet to the enterprise.

In addition to colocation services for DGX infrastructure, ScaleMatrix offers its Dynamic Density Control cabinet technology, which enables businesses to bypass the constraints of data center facilities. This lets enterprises deploy DGX POD and SuperPOD environments almost anywhere while delivering the power and technology of a state-of-the-art data center.

With self-contained cooling, fire suppression, various security options, shock mounting, extreme environment support and more, the DDC solution offered through our partner Microway removes the dependency on having a traditional data center for AI infrastructure.

Learn more about this offering here.