NVIDIA DGX systems make deploying AI simpler, faster and more cost-effective, letting organizations adopt the approach that’s optimal for their business.
Over the last four years, DGX systems have been deployed by thousands of customers around the globe, including nine of the top 10 government institutions and eight of the top U.S. universities. Today, we announced the new NVIDIA DGX A100 — the third generation of the world’s most advanced, purpose-built AI system.
But there’s more to AI than compute. Many enterprises need expert assistance to get up and running. That’s why we have a network of partners to ease the deployment of large-scale AI infrastructure and streamline AI development workflows.
Today, we’re expanding that network with the NVIDIA DGX-Ready Software program, which helps customers increase their data science productivity.
To access the full potential of DGX systems, NVIDIA and its technology partners offer a portfolio of certified software that enhances deployments, whether customers are working on a single system or a multi-system NVIDIA DGX POD.
These applications streamline data science workflows and help IT teams integrate model development into an enterprise DevOps environment, making it easier to put AI projects into production.
Removing Complexity with Turnkey DGX POD
NVIDIA powers its own work with DGX systems, including the SATURNV, the world’s largest proving ground for AI, with more than 2,000 DGX nodes.
This massive AI infrastructure accounts for not only the compute nodes but also the network fabric and storage, as well as the facilities engineering required to build supercomputing infrastructure at this scale.
Now powered by DGX A100, SATURNV fuels our critically important AI research and development.
We developed the DGX POD reference architecture based on what we learned from the thousands of DGX systems, including the SATURNV, deployed within NVIDIA as well as at customer sites. And we’ve partnered with trusted companies to launch their own branded DGX POD offerings.
Combining DGX systems with the highest-performance storage from NetApp, Pure Storage, DDN Storage, IBM and Dell EMC, along with state-of-the-art networking from Cisco, Arista and Mellanox, these offerings provide prescriptive, well-defined approaches to building and scaling AI infrastructure in an enterprise setting.
For organizations eager to accelerate their AI adoption, these DGX POD systems provide the stability, reliability, scalability and plug-and-play capabilities required.
Easing Barriers to Entry with Flexible Deployment Options
Many organizations are challenged by the large capital cost of AI infrastructure, or they simply don’t have the resources to build and maintain a physical data center. And not everybody wants to build a hyperscale AI data center. With increasing cloud adoption, some organizations are even moving away from having their own on-prem infrastructure.
To make it easy to add AI, the partners in our NVIDIA DGX-Ready Data Center colocation program provide access to world-class data center facilities and services in more than 122 locations across 26 countries. This DGX infrastructure as a service provides cloud-like simplicity for AI workloads.
The program allows organizations to get a DGX system or even a DGX POD as an affordable monthly option, rather than having to spend their budget to build and stand up on-prem AI infrastructure. Customers can also take advantage of flexible leasing options and access to the latest NVIDIA technology.
Learn more about NVIDIA DGX partners.