
‘Everybody Will Have an AI Assistant,’ NVIDIA CEO Tells SIGGRAPH Audience

Jensen Huang discusses the future of AI-amplified human productivity, the energy efficiency of accelerated computing, and the intersection of graphics and AI with WIRED’s Lauren Goode at SIGGRAPH 2024.
by Brian Caulfield

Editor’s note: As of June 6, 2025, NVIDIA Edify is no longer available as an NVIDIA NIM microservice preview. To explore available visual AI models, visit build.nvidia.com.

The generative AI revolution — with deep roots in visual computing — is amplifying human creativity even as accelerated computing promises significant gains in energy efficiency, NVIDIA founder and CEO Jensen Huang said Monday.

That makes this week’s SIGGRAPH professional graphics conference, in Denver, the logical venue to discuss what’s next.

“Everybody will have an AI assistant,” Huang said. “Every single company, every single job within the company, will have AI assistance.”

Even as generative AI amplifies human productivity, Huang said, the accelerated computing technology that underpins it is making computing more energy efficient.

“Accelerated computing helps you save so much energy, 20 times, 50 times, and doing the same processing,” Huang said. “The first thing we have to do, as a society, is accelerate every application we can: this reduces the amount of energy being used all over the world.”

The conversation follows a spate of announcements from NVIDIA today.

NVIDIA introduced a new suite of NIM microservices tailored for diverse workflows, including OpenUSD, 3D modeling, physics, materials, robotics, industrial digital twins and physical AI.
These advancements aim to enhance developer capabilities, particularly with the integration of Hugging Face Inference-as-a-Service on DGX Cloud.

In addition, Shutterstock has launched a Generative 3D Service, while Getty Images has upgraded its offerings using NVIDIA Edify technology.

In the realm of AI and graphics, NVIDIA has revealed new OpenUSD NIM microservices and reference workflows designed for generative physical AI applications.

This includes a program for accelerating humanoid robotics development through new NIM microservices for robotics simulation and more.

Finally, WPP, the world’s largest advertising agency, is using Omniverse-driven generative AI for The Coca-Cola Company, helping drive brand authenticity and showcasing the practical applications of NVIDIA’s AI advancements across industries.

Huang and Goode started their conversation by exploring how visual computing gave rise to everything from computer games to digital animation to GPU-accelerated computing and, most recently, generative AI powered by industrial-scale AI factories.

All these advancements build on one another. Robotics, for example, requires advanced AI and photorealistic virtual worlds where AI can be trained before being deployed into next-generation humanoid robots.

Huang explained that robotics requires three computers: one to train the AI, one to test the AI in a physically accurate simulation, and one within the robot itself.

“Just about every industry is going to be affected by this, whether it’s scientific computing trying to do a better job predicting the weather with a lot less energy, to augmenting and collaborating with creators to generate images, or generating virtual scenes for industrial visualization,” Huang said. “Robotic self-driving cars are all going to be transformed by generative AI.”

Likewise, NVIDIA Omniverse systems — built around the OpenUSD standard — will also be key to harnessing generative AI to create assets that the world’s largest brands can use.

By pulling from brand assets that live in Omniverse, these systems can capture and replicate carefully curated brand magic.

Finally, all these systems — visual computing, simulation and large language models — will come together to create digital humans who can help people interact with digital systems of all kinds.

“One of the things that we’re announcing here this week is the concept of digital agents, digital AIs that will augment every single job in the company,” Huang said.

“And so one of the most important use cases that people are discovering is customer service,” Huang said. “In the future, my guess is that it’s going to be human still, but AI in the loop.”

All of this, like any new tool, promises to amplify human productivity and creativity. “Imagine the stories that you’re going to be able to tell with these tools,” Huang said.

NVIDIA GTC 2026: Live Updates on What’s Next in AI

Rolling coverage from San Jose, including NVIDIA CEO Jensen Huang’s keynote, news highlights, live demos and on‑the‑ground color through March 19.
by NVIDIA Writers

Advancing Open Source AI, NVIDIA Donates Dynamic Resource Allocation Driver for GPUs to Kubernetes Community

In addition, NVIDIA announced at KubeCon Europe a confidential containers solution for GPU-accelerated workloads, updates to the NVIDIA KAI Scheduler and new open source projects to enable large-scale AI workloads.
by Justin Boitano

Artificial intelligence has rapidly emerged as one of the most critical workloads in modern computing.

For the vast majority of enterprises, this workload runs on Kubernetes, an open source platform that automates the deployment, scaling and management of containerized applications.

To help the global developer community manage high-performance AI infrastructure with greater transparency and efficiency, NVIDIA is donating a critical piece of software — the NVIDIA Dynamic Resource Allocation (DRA) Driver for GPUs — to the Cloud Native Computing Foundation (CNCF), a vendor-neutral organization dedicated to fostering and sustaining the cloud-native ecosystem. 

Announced today at KubeCon Europe, CNCF’s flagship conference running this week in Amsterdam, the donation moves the driver from being vendor-governed to offering full community ownership under the Kubernetes project. This open environment encourages a wider circle of experts to contribute ideas, accelerate innovation and help ensure the technology stays aligned with the modern cloud landscape. 

“NVIDIA’s deep collaboration with the Kubernetes and CNCF community to upstream the NVIDIA DRA Driver for GPUs marks a major milestone for open source Kubernetes and AI infrastructure,” said Chris Aniszczyk, chief technology officer of CNCF. “By aligning its hardware innovations with upstream Kubernetes and AI conformance efforts, NVIDIA is making high-performance GPU orchestration seamless and accessible to all.”

In addition, in collaboration with the CNCF’s Confidential Containers community, NVIDIA has introduced GPU support for Kata Containers, lightweight virtual machines that act like containers. This extends hardware acceleration into a stronger isolation boundary, separating workloads for increased security, so organizations can more easily implement confidential computing to safeguard data.
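To give a sense of how a workload opts into this stronger isolation, here is a minimal sketch of a Kubernetes RuntimeClass and a pod that selects it. The handler and class name `kata-nvidia-gpu` and the container image are placeholders, not names confirmed by this announcement — the actual handler shipped by a Kata Containers or Confidential Containers deployment may differ.

```python
# Illustrative sketch only: "kata-nvidia-gpu" and the image below are
# assumed placeholder names, not values from the announcement.
import json

# A RuntimeClass tells the container runtime (e.g. containerd) which
# handler to use; a Kata handler launches the workload inside a
# lightweight VM rather than a plain container.
runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "kata-nvidia-gpu"},
    "handler": "kata-nvidia-gpu",
}

# A pod opts into VM-level isolation by naming that RuntimeClass.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "confidential-inference"},
    "spec": {
        "runtimeClassName": "kata-nvidia-gpu",
        "containers": [
            {
                "name": "inference",
                "image": "example.com/inference:latest",  # placeholder
            }
        ],
    },
}

# Serialize for `kubectl apply -f -`; JSON is accepted wherever YAML is.
print(json.dumps([runtime_class, pod], indent=2))
```

The design point is that isolation is selected per workload: only pods that name the Kata runtime class pay the VM overhead, while everything else runs as ordinary containers on the same cluster.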

Simplifying AI Infrastructure

Historically, managing the powerful GPUs that fuel AI within data centers required significant effort. 

This contribution is designed to make high-performance computing more accessible. Key benefits for developers include:

  • Improved Efficiency: The driver allows for smarter sharing of GPU resources, delivering effective use of computing power, with support for NVIDIA Multi-Process Service and NVIDIA Multi-Instance GPU technologies.
  • Massive Scale: It provides native support for connecting systems together, including with NVIDIA Multi-Node NVLink interconnect technology. This is essential for training massive AI models on NVIDIA Grace Blackwell systems and next-generation AI infrastructure.
  • Flexibility: Developers can dynamically reconfigure their hardware to suit their needs, changing how resources are allocated on the fly.
  • Precision: The software supports fine-tuned requests, allowing users to ask for the specific computing power, memory settings or interconnect arrangement needed for their applications.
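The fine-tuned requests described above flow through Kubernetes Dynamic Resource Allocation objects. The sketch below shows the general shape of a DRA claim and a pod that consumes it; the API version, field layout and the `gpu.nvidia.com` device class name are assumptions based on upstream Kubernetes DRA conventions, which have changed across releases, so treat this as illustrative rather than a drop-in manifest.

```python
# Hypothetical sketch of a DRA GPU request. API group/version, field
# names and "gpu.nvidia.com" are assumptions; check the driver's docs
# for the shape matching your Kubernetes release.
import json

# A ResourceClaimTemplate describes a device request that is stamped
# out per pod; DRA matches it against device classes advertised by
# the GPU driver on each node.
claim_template = {
    "apiVersion": "resource.k8s.io/v1beta1",
    "kind": "ResourceClaimTemplate",
    "metadata": {"name": "single-gpu"},
    "spec": {
        "spec": {
            "devices": {
                "requests": [
                    {"name": "gpu", "deviceClassName": "gpu.nvidia.com"}
                ]
            }
        }
    },
}

# The pod references the template, and each container names the claim
# it consumes; the scheduler allocates a matching device before
# placing the pod.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "gpu-workload"},
    "spec": {
        "resourceClaims": [
            {"name": "gpu", "resourceClaimTemplateName": "single-gpu"}
        ],
        "containers": [
            {
                "name": "main",
                "image": "example.com/cuda-app:latest",  # placeholder
                "resources": {"claims": [{"name": "gpu"}]},
            }
        ],
    },
}

print(json.dumps([claim_template, pod], indent=2))
```

Compared with the older device-plugin model, the request is a structured object rather than an opaque count, which is what makes the precision and on-the-fly reconfiguration described above possible.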

A Collaborative, Industry-Wide Effort

NVIDIA is collaborating with industry leaders — including Amazon Web Services, Broadcom, Canonical, Google Cloud, Microsoft, Nutanix, Red Hat and SUSE — to drive these features forward for the benefit of the entire cloud-native ecosystem.

“Open source will be at the core of every successful enterprise AI strategy, bringing standardization to the high-performance infrastructure components that fuel production AI workloads,” said Chris Wright, chief technology officer and senior vice president of global engineering at Red Hat. “NVIDIA’s donation of the NVIDIA DRA Driver for GPUs helps to cement the role of open source in AI’s evolution, and we look forward to collaborating with NVIDIA and the broader community within the Kubernetes ecosystem.”

“Open source software and the communities that sustain it are a cornerstone of the infrastructure used for scientific computing and research,” said Ricardo Rocha, lead of platforms infrastructure at CERN. “For organizations like CERN, where efficiently analyzing petabytes of data is essential to discovery, community-driven innovation helps accelerate the pace of science. NVIDIA’s donation of the DRA Driver strengthens the ecosystem researchers rely on to process data across both traditional scientific computing and emerging machine learning workloads.”

Expanding the Open Source Horizon

This donation is just part of NVIDIA’s broader initiatives to support the open source community. For example, NVSentinel — a system for GPU fault remediation — and AI Cluster Runtime, an agentic AI framework, were announced at GTC last week.

In addition, NVIDIA announced at GTC new open source projects including the NVIDIA NemoClaw reference stack and NVIDIA OpenShell runtime for securely running autonomous agents. OpenShell provides fine-grained programmable policy security and privacy controls, and natively integrates with Linux, eBPF and Kubernetes.

NVIDIA also today announced that its high-performance AI workload scheduler, the KAI Scheduler, has been onboarded as a CNCF Sandbox project — a key step toward fostering broader collaboration and ensuring the technology evolves alongside the needs of the wider cloud-native ecosystem. Developers and organizations can use and contribute to the KAI Scheduler today.

NVIDIA remains committed to actively maintaining and contributing to Kubernetes and CNCF projects to help meet the rigorous demands of enterprise AI customers. 

In addition, following the release of NVIDIA Dynamo 1.0, NVIDIA is expanding the Dynamo ecosystem with Grove, an open source Kubernetes application programming interface for orchestrating AI workloads on GPU clusters. Grove, which enables developers to express complex inference systems in a single declarative resource, is being integrated with the llm-d inference stack for wider adoption in the Kubernetes community. 

Developers and organizations can begin using and contributing to the NVIDIA DRA Driver today.

Visit the NVIDIA booth at KubeCon to see live demos of this technology in action.

NVIDIA, Telecom Leaders Build AI Grids to Optimize Inference on Distributed Networks

AT&T, T‑Mobile, Comcast, Spectrum and others are building AI grids using NVIDIA AI infrastructure, while Personal AI, Linker Vision, Serve Robotics and Decart are deploying real-time AI applications across the grid.
by Kanika Atri

As AI‑native applications scale to more users, agents and devices, the telecommunications network is becoming the next frontier for distributing AI. 

At NVIDIA GTC 2026, leading operators in the U.S. and Asia showed that this shift is underway, announcing AI grids — geographically distributed and interconnected AI infrastructure — using their network footprint to power and monetize new AI services across the distributed edge.  

Different operators are taking different paths. Many are starting by lighting up existing wired edge sites as AI grids they can monetize today. Others harness AI-RAN — a technology that enables the full integration of AI into the radio access network — as a workload and edge inference platform on the same grid.  

Telcos and distributed cloud providers run some of the most expansive infrastructure in the world: about 100,000 distributed network data centers worldwide, spanning regional hubs, mobile switching offices and central offices, with enough spare power to offer more than 100 gigawatts of new AI capacity over time.

AI grids turn this existing real estate, power and connectivity into a geographically distributed computing platform that runs AI inference closer to users, devices and data, where response time and cost per token align best. This is more than an infrastructure upgrade — it’s a structural change in how AI is delivered, putting telecom networks at the center of scaling AI rather than just carrying its traffic.

Global Operators Turn Distributed Networks Into AI Grids

Across six major operators, AI grids are moving from concept to reality.

AT&T, a leader in connected IoT with over 100 million connections across thousands of device types, is partnering with Cisco and NVIDIA to build an AI grid for IoT. By running AI on a dedicated IoT core and moving AI inference closer to where data is created, AT&T can support mission‑critical, real‑time applications like public‑safety use cases with Linker Vision, enabling faster detection, alerting and response while helping keep sensitive information under customer control at the network edge.

“Scaling AI services that are both highly secure and accessible for enterprises and developers is a core pillar of our IoT connectivity strategy,” said Shawn Hakl, senior vice president of product at AT&T Business. “By combining AT&T’s business‑grade connectivity, localized AI compute and zero‑trust security while working with members of the NVIDIA Inception program and harnessing Cisco’s AI Grid with NVIDIA infrastructure and Cisco Mobility Services Platform, we’re bringing real‑time AI inference closer to where data is generated — accelerating digital transformation and unlocking new business opportunities.”

Comcast is developing one of the nation’s largest low‑latency broadband footprints into an AI grid for real‑time, hyper‑personalized experiences. Working with NVIDIA, Decart, Personal AI and HPE, Comcast has validated that its AI grid keeps conversational agents, interactive media and NVIDIA GeForce NOW cloud gaming responsive and economical even during demand spikes, with significantly higher throughput and lower cost per token.

Spectrum has the network infrastructure to support an AI grid that spans more than 1,000 edge data centers and hundreds of megawatts of capacity less than 10 milliseconds away from 500 million devices. The initial deployment focuses on rendering high-resolution graphics for media production using remote GPUs embedded across Spectrum’s fiber-powered, low-latency network.

Akamai is building a globally distributed AI grid, expanding Akamai Inference Cloud across more than 4,400 edge locations with thousands of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Akamai’s AI grid orchestration platform matches each request to the right tier of compute, improving the token economics of inference while powering low-latency, real-time AI experiences for applications like gaming, media, financial services and retail.

Indosat Ooredoo Hutchison is connecting its sovereign AI factory with distributed edge and AI‑RAN sites across Indonesia to build an AI grid for local innovation. By running Sahabat-AI — a Bahasa Indonesia-based platform — on this grid within Indonesia’s borders, Indosat can bring localized AI services closer to hundreds of millions of Indonesians across thousands of islands, giving local developers and startups a sovereign platform to build AI applications that are fast, culturally relevant and compliant by design.

T‑Mobile is working with NVIDIA to explore edge AI applications using NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, demonstrating how distributed network locations could support emerging AI-RAN and edge inference use cases. Developers including Linker Vision, Levatas, Vaidio, Archetype AI and Serve Robotics are already piloting smart‑city, industrial and retail applications on the grid, connecting cameras, delivery robots and city‑scale agents to real-time intelligence on the network edge. This demonstrates how cell sites and mobile switching offices can support distributed edge AI workloads while continuing to deliver advanced 5G connectivity.

New AI‑Native Services Put Telecom AI Grids to Work

AI grids are becoming foundational to a new class of AI‑native applications — real‑time, hyper‑personalized, concurrent and token-intensive.

Personal AI is using NVIDIA Riva to power human‑grade conversational agents on the AI grid. By running small language models closer to users, it achieves sub-500 millisecond end-to-end latency and over 50% lower cost-per-token, enabling voice experiences that feel natural while remaining economically viable at scale.

Linker Vision is transforming city operations by running real‑time vision AI on the AI grid. By processing thousands of camera feeds across distributed edge sites, it delivers predictable latency for live detection and instant alerting — enabling safer, smarter cities with up to 10x faster traffic accident detection, 15x faster disaster response and sub‑minute alerts for unsafe crowd behavior. 

Decart is redefining hyper‑personalized distributed media by bringing real‑time video generation to AI grids. By running its Lucy models at the network edge, it achieves sub‑12-millisecond network latency, enabling interactive video streams and overlays that adapt instantly to each viewer, delivering smooth, immersive live video experiences even when viewership peaks.

AI Grid Reference Design and Ecosystem

The NVIDIA AI Grid Reference Design defines the building blocks — including NVIDIA accelerated computing, networking and software platforms — for deploying and orchestrating AI across distributed sites.

A growing ecosystem of full‑stack partners including Cisco and infrastructure partners like HPE are bringing AI grid solutions to market on systems built with the NVIDIA RTX PRO 6000 Blackwell Server Edition. Armada, Rafay and Spectro Cloud are among the partners building an AI grid control plane to seamlessly orchestrate workloads across distributed AI infrastructure.

“Physical AI is accelerating the shift from centralized intelligence to distributed decision making at the network edge,” said Masum Mir, senior vice president and general manager provider mobility at Cisco. “Our partnership with NVIDIA brings together the full stack — from NVIDIA GPUs to Cisco’s networking and mobility capabilities — enabling operators to power mission-critical applications, deliver real-time inferencing and participate in the AI value chain.”

Together, this ecosystem is helping telcos and distributed cloud providers redefine their role in the AI value chain — transforming the network edge into a unified intelligence layer that runs, scales and monetizes AI workloads.

Learn more about AI Grid.