From analytics to training to inference, we built the NVIDIA DGX POD to support the end-to-end lifecycle of AI development. Based on NVIDIA DGX A100 systems, it’s a single platform engineered to solve the challenges of design, deployment and operations.
At NetApp INSIGHT 2020 this week, we announced a new eight-system DGX POD configuration for the NetApp ONTAP AI reference architectures. This new configuration gives businesses incredible performance and scale for all AI workloads — from recommender systems to natural language processing to autonomous system development and more.
And this is in addition to the four-node ONTAP AI reference architecture we announced at GTC earlier this month, giving businesses even more options for building their AI data centers from unified building blocks.
We’ve already seen organizations build their own AI private clouds with ONTAP AI. This newest DGX POD offering provides a dramatic increase in the size and complexity of AI models that they can develop and deploy in their data centers. NVIDIA DGX PODs and ONTAP AI offer the foundation on which every enterprise can build an AI center of excellence.
Solving Key Challenges
The NVIDIA DGX POD solves three critical challenges that most businesses face:
- Building a streamlined infrastructure that delivers an optimal balance of compute, storage and networking performance.
- Creating a turnkey offering that shortens deployment times from months to weeks.
- Simplifying the day-to-day operations of AI infrastructure.
Since its introduction, DGX POD — and the ecosystem of partners supporting it — has enabled organizations in every industry to build a diverse set of platforms. These range from single, company-wide AI initiatives to country-wide AI centers of excellence that power global enterprises.
Making AI More Accessible
At the GPU Technology Conference earlier this month, we announced how we’re making AI infrastructure more accessible and easier to deploy for businesses. From organizations taking their first steps on an AI project to those enabling the work of hundreds of data scientists around the globe, we’ve made investments to support the complete journey of the AI enterprise.
For IT teams faced with tight timelines and budget constraints, sometimes the cloud can seem like the only option to meet urgent project requirements. To assist those who would prefer to keep their workloads and data on-prem, NVIDIA offers the NVIDIA AI Starter Kit.
Designed to get AI workloads up and running quickly, it features the NVIDIA DGX A100 system, combined with ready-to-use AI models, data science workflow software and AI consulting expertise from SFL Scientific, all backed by three years of support.
Businesses seeking to infuse AI throughout their operations can’t wait six to eight months to stand up infrastructure. So we took our DGX SuperPOD reference architecture and wrapped it in a full-service solution that accounts for upfront planning, design, deployment and ongoing optimization, giving companies a faster way to scale their AI.
Learn More at INSIGHT 2020
At the NetApp INSIGHT 2020 Digital Event, running through Oct. 29, we’re presenting sessions focused on AI use cases across industries and demonstrating how NVIDIA and NetApp collaborate to help customers solve critical business problems using AI.
Register for INSIGHT and learn more about NetApp ONTAP AI with NVIDIA DGX A100 systems in the sessions below.
NetApp ONTAP AI with NVIDIA DGX A100 | BRK-1123-2
Join Jacci Cenci, senior technical marketing engineer at NVIDIA, and David Arnette, principal technical marketing engineer at NetApp, to learn about the updated ONTAP AI reference architecture with the NVIDIA DGX A100 and AMD EPYC processor. They’ll provide an overview of the ONTAP AI infrastructure, DGX A100 systems and the NetApp AI Control Plane for managing data science workflows. They’ll also briefly cover updates to the NetApp AI portfolio, including EF-series with DGX A100 and NetApp StorageGRID for object storage data lakes.
AI Software Partner Ecosystem: Overview of Integration | SPD-1106-2
Hear from John Barco, senior director of DGX Product Management at NVIDIA, and Rick Huang, data scientist for AI and analytics at NetApp, who collaborate through powerful ISV integrations and partnerships to help customers automate AI pipelines, label data, consolidate multiple projects, provide Hadoop alternatives and achieve high Kubernetes cluster utilization with a smart orchestration solution.
NetApp for Healthcare, with NVIDIA | SPD-1113-2
Abood Quraini, application engineering manager for Healthcare HPC and AI at NVIDIA, joins Rick Huang and Esteban Rubens, healthcare AI principal at NetApp, to offer guidelines for customers building AI infrastructure using NVIDIA DGX systems and NetApp AFF storage for healthcare use cases. They’ll discuss the high-level workflows used in the development of deep learning models for medical diagnostic imaging, validated test cases and results. They’ll also cover the latest features of NVIDIA Clara Train V3 and how NetApp collaborated with NVIDIA to assist researchers in the fight against COVID-19.
NetApp AI for Retail, with NVIDIA | SPD-1119-2
Davide Onofrio, technical marketing lead engineer at NVIDIA, and NetApp’s Rick Huang discuss their joint work on the NVIDIA Jarvis (since renamed NVIDIA Riva) conversational AI application framework, with a special emphasis on the retail industry.
NetApp AI for Financial Services, with NVIDIA | SPD-1447-1
The voracious data consumption of today’s data scientists means that systems have to be designed around the data — at rest, in flight and during transformation. Join John Ashley, general manager of Financial Services and Technology at NVIDIA, and Satish Thyagarajan, senior technical marketing engineer at NetApp, to hear examples of how to think technically about this critical business problem.
Also mark your calendars for SC20, where NVIDIA and NetApp plan to host a joint session on helping enterprises take advantage of AI infrastructure.