NVIDIA and Cisco Weave Fabric for Generative AI

The Cisco Nexus HyperFabric AI cluster solution debuts at Cisco Live, supercharged by NVIDIA accelerated computing platforms, NVIDIA AI Enterprise software and NVIDIA NIM inference microservices.
by Kevin Deierling

Building and deploying AI applications at scale requires a new class of computing infrastructure — one that can handle the massive amounts of data, compute power and networking bandwidth needed by generative AI models.

To help ensure these models perform optimally and efficiently, NVIDIA is teaming with Cisco to build generative AI infrastructure for enterprises.

Cisco’s new Nexus HyperFabric AI cluster solution, developed in collaboration with NVIDIA, gives enterprises a path to operationalize generative AI. The enterprise-ready, end-to-end infrastructure solution combines NVIDIA accelerated computing and AI software with Cisco AI-native networking and the robust VAST Data Platform to scale generative AI workloads.

“Enterprise applications are transforming into generative AI applications, significantly increasing data processing requirements and overall infrastructure complexity,” said Kevin Wollenweber, senior vice president and general manager of data center and provider connectivity at Cisco. “Together, Cisco and NVIDIA are advancing HyperFabric to advance generative AI for the world’s enterprises so they can use their data and domain expertise to transform productivity and insight.”

Powering an Enterprise-Ready AI Cluster Solution

Foundational to the solution are NVIDIA Tensor Core GPUs, which provide the accelerated computing needed to process massive datasets. The solution utilizes NVIDIA AI Enterprise, a cloud-native software platform that acts as the operating system for enterprise AI. NVIDIA AI Enterprise streamlines the development and deployment of production-grade AI copilots and other generative AI applications, ensuring optimized performance, security and application programming interface stability.

Included with NVIDIA AI Enterprise, NVIDIA NIM inference microservices accelerate the deployment of foundation models while ensuring data security. NIM microservices are designed to bridge the gap between complex AI development and enterprise operational needs. As organizations across various industries embark on their AI journeys, the combination of NVIDIA NIM and the Cisco Nexus HyperFabric AI cluster supports the entire process, from ideation to development and deployment of production-scale AI applications.
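
For developers, NIM microservices for large language models expose an OpenAI-compatible REST API, so existing client code can target a deployed NIM with little more than a base-URL change. Below is a minimal sketch, assuming a Llama 3 NIM is already running locally and listening on port 8000; the endpoint URL, API key handling and model name are illustrative placeholders to adjust for a specific deployment.

    # Minimal sketch: query a locally running NIM microservice through its
    # OpenAI-compatible chat endpoint (URL and model name are illustrative).
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
        api_key="not-used",                   # a local deployment may not require a real key
    )

    response = client.chat.completions.create(
        model="meta/llama3-8b-instruct",      # placeholder model name
        messages=[{"role": "user", "content": "Summarize this quarter's support tickets."}],
        max_tokens=256,
    )
    print(response.choices[0].message.content)

Because the interface follows the familiar OpenAI API convention, the same application code can move from a single-node pilot to a production-scale HyperFabric cluster with little more than a change of endpoint.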

The Cisco Nexus HyperFabric AI cluster solution integrates NVIDIA Tensor Core GPUs with NVIDIA BlueField-3 SuperNICs and DPUs to enhance system performance and security. The SuperNICs deliver seamless, high-speed connectivity across the infrastructure, while the BlueField-3 DPUs offload, accelerate and isolate infrastructure services, creating a more efficient AI solution.

BlueField-3 DPUs can also run security services such as Cisco Hypershield, which enables an AI-native, hyperdistributed security architecture that shifts security enforcement closer to the workloads needing protection. Cisco Hypershield is another notable area of collaboration between the companies, focused on creating AI-powered security solutions.

Join NVIDIA at Cisco Live

Learn more about how Cisco and NVIDIA power generative AI at Cisco Live — running through June 6 in Las Vegas — where the companies will showcase NVIDIA AI technologies at the Cisco AI Hub and share best practices for enterprises to get started with AI.

Attend these sessions to discover how to accelerate generative AI with NVIDIA, Cisco and other ecosystem partners:

  • Keynote Deep Dive: “Harness a Bold New Era: Transform Data Center and Service Provider Connectivity” with NVIDIA’s Kevin Deierling and Cisco’s Jonathan Davidson, Kevin Wollenweber, Jeremy Foster and Bill Gartner — Wednesday, June 5, from 1-2 p.m. PT
  • AI Hub Theater Presentation: “Accelerate, Deploy Generative AI Anywhere With NVIDIA Inference Microservices” with Marty Jain, vice president of sales and business development at NVIDIA — Tuesday, June 4, from 2:15-2:45 p.m. PT
  • WWT AI Hub Booth: Thought leadership interview with NVIDIA’s Jain and WWT Vice President of Cloud, Infrastructure and AI Solutions Neil Anderson — Wednesday, June 5, from 10-11 a.m. PT
  • NetApp Theater: “Accelerating Gen AI With NVIDIA Inference Microservices on FlexPod” with Sicong Ji, strategic platforms and solutions lead at NVIDIA — Wednesday, June 5, from 1:30-1:40 p.m. PT
  • Pure Storage Theater: “Accelerating Gen AI With NVIDIA Inference Microservices on FlashStack” with Joslyn Shakur, sales alliance manager at NVIDIA — Wednesday, June 5, from 2-2:10 p.m. PT

Sign up for generative AI news to stay up to date on the latest breakthroughs, developments and technologies.