
New NVIDIA Neural Graphics SDKs Make Metaverse Content Creation Available to All

A dozen tools and programs — including new releases NeuralVDB and Kaolin Wisp — enable easy, fast 3D content creation for millions of designers and creators.
by Greg Estes

The creation of 3D objects for building scenes for games, virtual worlds including the metaverse, product design or visual effects is traditionally a meticulous process, where skilled artists balance detail and photorealism against deadlines and budget pressures.

It takes a long time to make something that looks and acts as it would in the physical world. And the problem gets harder when multiple objects and characters need to interact in a virtual world. Simulating physics becomes just as important as simulating light. A robot in a virtual factory, for example, needs to have not only the same look, but also the same weight capacity and braking capability as its physical counterpart.

It’s hard. But the opportunities are huge, affecting trillion-dollar industries as varied as transportation, healthcare, telecommunications and entertainment, in addition to product design. Ultimately, more content will be created in the virtual world than in the physical one.

To simplify and shorten this process, NVIDIA today released new research and a broad suite of tools that apply the power of neural graphics to the creation and animation of 3D objects and worlds.

These SDKs — including NeuralVDB, a groundbreaking update to the industry-standard OpenVDB, and Kaolin Wisp, a PyTorch library establishing a framework for neural fields research — ease the creative process for designers while making 3D content creation accessible to millions of users who aren’t design professionals.

Neural graphics is a new field intertwining AI and graphics to create an accelerated graphics pipeline that learns from data. Integrating AI enhances results, helps automate design choices and provides new, yet to be imagined opportunities for artists and creators. Neural graphics will redefine how virtual worlds are created, simulated and experienced by users.

These SDKs and research contribute to each stage of the content creation pipeline, including:

3D Content Creation

  • Kaolin Wisp – a research-oriented addition to Kaolin, the PyTorch library that accelerates 3D deep learning research by cutting the time needed to test and implement new techniques from weeks to days. Wisp establishes a common suite of tools and a framework to speed new research in neural fields.
  • Instant Neural Graphics Primitives – a new approach to capturing the shape of real-world objects, and the inspiration behind NVIDIA Instant NeRF, an inverse rendering model that turns a collection of still images into a digital 3D scene. This technique and associated GitHub code accelerate the process by up to 1,000x.
  • 3D MoMa – a new inverse rendering pipeline that allows users to quickly import a 2D object into a graphics engine to create a 3D object that can be modified with realistic materials, lighting and physics.
  • GauGAN360 – the next evolution of NVIDIA GauGAN, an AI model that turns rough doodles into photorealistic masterpieces. GauGAN360 generates 8K, 360-degree panoramas that can be ported into Omniverse scenes.
  • Omniverse Avatar Cloud Engine (ACE) – a new collection of cloud APIs, microservices and tools to create, customize and deploy digital human applications. ACE is built on NVIDIA’s Unified Compute Framework, allowing developers to seamlessly integrate core NVIDIA AI technologies into their avatar applications.
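Much of the speedup behind Instant Neural Graphics Primitives comes from its multiresolution hash encoding: instead of feeding raw coordinates to a large network, each 3D point looks up trainable features in small hash tables at several grid resolutions, and a tiny MLP turns those features into color and density. The sketch below shows only the lookup step in plain NumPy as a simplified illustration — it uses nearest-corner lookup rather than the paper's trilinear interpolation, omits the MLP and training entirely, and the table size, resolutions and feature width are illustrative values, not a tuned configuration.

```python
import numpy as np

# Spatial-hash primes as used in the Instant NGP paper's hash function.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)
TABLE_SIZE = 2 ** 14          # entries per level (illustrative)
N_FEATURES = 2                # features per table entry
LEVELS = [16, 32, 64, 128]    # grid resolutions, coarse to fine

rng = np.random.default_rng(0)
# One small feature table per resolution level. In the real method these
# are trainable parameters optimized jointly with a tiny MLP.
tables = [rng.normal(scale=1e-2, size=(TABLE_SIZE, N_FEATURES))
          for _ in LEVELS]

def encode(pts):
    """Map points in [0, 1)^3, shape (N, 3), to concatenated features."""
    outs = []
    for res, table in zip(LEVELS, tables):
        # Snap each point to its grid cell at this resolution.
        cell = np.floor(pts * res).astype(np.uint64)
        # XOR-combine prime-multiplied coordinates into a table index.
        h = np.bitwise_xor.reduce(cell * PRIMES, axis=1) % TABLE_SIZE
        outs.append(table[h])
    return np.concatenate(outs, axis=1)

pts = rng.uniform(0.0, 1.0, size=(5, 3))
feats = encode(pts)
print(feats.shape)  # (5, 8): 4 levels x 2 features per level
```

In the full method, these concatenated features feed a very small MLP, and gradients flow back into the hash tables during training — which is what lets the encoding stand in for most of a large network's capacity.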

Physics and Animation

  • NeuralVDB – a groundbreaking improvement on OpenVDB, the current industry standard for volumetric data storage. Using machine learning, NeuralVDB introduces compact neural representations, dramatically reducing memory footprint to allow for higher-resolution 3D data.
  • Omniverse Audio2Face – an AI technology that generates expressive facial animation from a single audio source. It’s useful for interactive real-time applications and as a traditional facial animation authoring tool.
  • ASE: Animation Skills Embedding – an approach enabling physically simulated characters to act in a more responsive and life-like manner in unfamiliar situations. It uses deep learning to teach characters how to respond to new tasks and actions.
  • TAO Toolkit – a framework for building accurate, high-performance pose estimation models, which use computer vision to evaluate what a person might be doing in a scene far more quickly than current methods.
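The core idea NeuralVDB applies — replacing an explicitly stored voxel grid with a compact learned function of coordinates — can be sketched in a few lines. The toy below is not NeuralVDB's algorithm or API: it stands in a random-Fourier-feature linear model for the learned neural representation and a single least-squares solve for training, purely to show the memory trade-off between storing voxels and storing parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dense 32^3 density grid: a smooth Gaussian "puff" of smoke.
n = 32
axis = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
density = np.exp(-4.0 * (X**2 + Y**2 + Z**2))       # shape (32, 32, 32)

# Random Fourier features of the voxel coordinates (plus a bias column).
pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)    # (32768, 3)
B = rng.normal(scale=2.0, size=(3, 64))              # random projections
feats = np.concatenate([np.sin(pts @ B), np.cos(pts @ B),
                        np.ones((pts.shape[0], 1))], axis=1)

# "Train" the compact representation: one linear solve stands in for the
# gradient-based optimization a real neural representation would use.
w, *_ = np.linalg.lstsq(feats, density.ravel(), rcond=None)

# Reconstruct the grid from the 129 stored parameters instead of 32,768 voxels.
recon = (feats @ w).reshape(n, n, n)
ratio = density.size / w.size
err = np.abs(recon - density).max()
print(f"compression ratio ~{ratio:.0f}x, max abs error {err:.4f}")
```

The same trade-off is what lets a compact neural representation hold far higher-resolution volumes in the same memory footprint, at the cost of a decode (function evaluation) step when the data is read back.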

Experience

  • Image Features Eye Tracking – a research model linking the quality of pixel rendering to a user’s reaction time. By predicting the combination of rendering quality, display properties and viewing conditions that minimizes latency, it enables better performance in fast-paced, interactive computer graphics applications such as competitive gaming.
  • Holographic Glasses for Virtual Reality – a collaboration with Stanford University on a new VR glasses design that delivers full-color 3D holographic images in a groundbreaking 2.5-mm-thick optical stack.

Join NVIDIA at SIGGRAPH to see more of the latest research and technology breakthroughs in graphics, AI and virtual worlds. Check out the latest innovations from NVIDIA Research, and access the full suite of NVIDIA’s SDKs, tools and libraries.

NVIDIA DSX Air Boosts Time to Token With Accelerated Simulation for AI Factories

Used by CoreWeave and others, the new platform enables enterprises to simulate complex deployments through validated reference architectures for compute, networking, storage, orchestration and security — before a single server is unboxed.
by Scott Martin

Setting up AI factories in simulation — decreasing deployment time from months to days — is accelerating the next industrial revolution.

Nowhere was that more apparent than at GTC 2026, in San Jose, where NVIDIA founder and CEO Jensen Huang introduced NVIDIA DSX Air. Part of NVIDIA DSX Sim in the DSX platform, NVIDIA’s blueprint for AI factories, DSX Air is a software-as-a-service platform for logically simulating AI factories. It delivers high‑fidelity digital simulations of NVIDIA hardware infrastructure, including GPUs, SuperNICs, DPUs and switches, and it integrates with leading partner solutions for storage and routing, security, orchestration and more via open, API-based connectivity.

NVIDIA DSX Air enables a complete AI factory ecosystem, uniting NVIDIA infrastructure with partner technologies to deliver full‑stack simulation and accelerate complex AI deployments.    

Companies building some of the world’s most advanced AI infrastructure, including CoreWeave, are already using DSX Air to simulate and validate their environments long before hardware reaches the loading dock. The development underscores a new reality: simulation is now essential to accelerating AI deployment at scale.

DSX Air allows organizations to construct a full digital twin of their AI factory — compute, networking, storage, orchestration and security — before a single server is unboxed. By shifting integration and troubleshooting into simulation, customers are reducing the time to first token from weeks or months to days or hours, at substantially lower cost.

A common industry analogy captures it well: it’s like IT mirroring your laptop to set up a new one, except the “laptop” is a hyperscale AI factory and the “mirroring” is a complete, high-fidelity replica of the production environment.

For operators racing to bring new AI capacity online, this change is transformative.

Building a Platform for an Entire Ecosystem

The NVIDIA DSX Air simulation platform is designed to support the entire AI factory ecosystem. Server manufacturers, orchestration vendors, storage providers and security partners can all validate their offerings alongside NVIDIA infrastructure — together, in one environment, at scale.

This ecosystem‑wide capability is already reshaping partner workflows.

Server manufacturers, which serve as the primary channel for enterprise inference, can now model and validate their reference architectures without building expensive physical labs. Enterprise AI environments rarely fit rigid designs, and customers often require bespoke configurations. With DSX Air, manufacturers can create digital twins tailored to specific customer needs, test their software stacks and deliver validated solutions without touching hardware.

Orchestration vendors — critical for enterprises and tier‑2 clouds that need turnkey AI services — gain the ability to test at scale. At GTC, NVIDIA showcased a multi‑tenant RTX PRO Server environment running entirely in simulation, with Netris providing network orchestration, Rafay handling host orchestration and NVIDIA Run:ai optimizing GPU allocation. These partners can now validate complex workflows under realistic conditions without deploying physical clusters.

The simulation environment is also valuable for validating the data platforms that power AI factories. Instead of requiring large physical clusters, DSX Air allows ecosystem partners to model complete AI workflows alongside NVIDIA compute, networking and software infrastructure. At GTC, the booth demonstration features a video retrieval-augmented generation workload running on the VAST AI Operating System, including a fully operational VAST cluster with DataEngine nodes and the video search and summarization front end. DataEngine triggers and functions process and index video content through an end-to-end pipeline, illustrating how AI applications can be designed, tested and validated inside the DSX Air simulation before deploying physical infrastructure.

Security vendors — facing some of the most demanding validation requirements — can now test multi‑tenant policies, DPU‑accelerated isolation and threat detection in a realistic environment. The GTC demo includes Check Point’s distributed firewall running on simulated BlueField DPUs, TrendAI Vision One for threat detection and Keysight AI Inference Builder, an emulation and analytics platform designed to validate inference-optimized AI infrastructure at scale. Security partners can identify vulnerabilities and validate policies in a customer’s digital twin long before production goes live.

Across the ecosystem, partners emphasized the same point: DSX Air gives them a complete, scalable and cost‑effective way to validate their solutions with NVIDIA infrastructure and with each other.

Operating With a New Model to Accelerate Time to Token

NVIDIA DSX Air isn’t just a deployment accelerator — it introduces a new operational model for AI factories.

On the first day, customers build their intended production environment entirely in simulation. They configure networking, compute, storage, orchestration, security and scheduling exactly as they plan to deploy them. They validate that everything works together, identify issues early and ensure the environment behaves as expected.

Next, they can deploy with confidence. Because the environment has already been tested end to end, the probability of a smooth bring‑up increases dramatically. Time to first token shrinks, and teams can focus on running workloads rather than troubleshooting infrastructure.

Afterward and beyond, DSX Air becomes a safe environment for change management. Long‑lived simulations allow customers to test upgrades, rehearse maintenance windows, validate patches and predict operational impact before touching production. Only after changes succeed in simulation are they applied to the live environment, maximizing uptime and ensuring infrastructure availability.

This lifecycle approach reflects how modern AI factories can operate as they scale.

Simulating AI Factories Becomes the Backbone of AI Infrastructure

GTC showed that simulation is no longer a future concept — it is the new backbone of AI infrastructure deployment and operations.

NVIDIA DSX Air enables customers and partners to simulate everything in one place, accelerating deployment, reducing risk and ensuring day‑one performance at scale.

Adopting NVIDIA DSX Air to Accelerate Deployments With Simulation

Siam.AI, Thailand’s largest AI cloud provider, has accelerated its infrastructure deployment with NVIDIA DSX Air. Using simulation, Siam.AI embraced NVIDIA best practices well ahead of schedule, ensuring day-one operational expertise and validating its architecture in a virtual environment before the physical hardware even arrived.

Similarly, Hydra Host is using DSX Air to accelerate development of Brokkr, its AI factory operating system for bare-metal GPU provisioning that’s used by dozens of GPU deployments globally. By simulating full-stack environments in DSX Air before deploying to production, Hydra Host can validate Brokkr’s automation and orchestration workflows across diverse networking and hardware configurations at scale. This simulation-first approach lets Hydra Host ship validated infrastructure faster to customers worldwide while minimizing risk to live systems as global AI demand grows.

As AI factories grow in size and complexity, the ability to validate full‑stack environments before hardware arrives will define the pace of innovation. NVIDIA DSX Air delivers that capability today, giving organizations the fastest possible path to first token and a more reliable way to operate AI infrastructure over time.

Learn more about NVIDIA DSX Air.

NVIDIA GTC 2026: Live Updates on What’s Next in AI

Rolling coverage from San Jose, including NVIDIA CEO Jensen Huang’s keynote, news highlights, live demos and on‑the‑ground color through March 19.
by NVIDIA Writers

NVIDIA and Thinking Machines Lab Announce Long-Term Gigawatt-Scale Strategic Partnership

by NVIDIA Newsroom

NVIDIA and Thinking Machines Lab announced today a multiyear strategic partnership to deploy at least one gigawatt of next-generation NVIDIA Vera Rubin systems to support Thinking Machines’ frontier model training and platforms delivering customizable AI at scale. Deployment on the NVIDIA Vera Rubin platform is targeted for early next year. The partnership also includes an effort to design training and serving systems for NVIDIA architectures and broaden access to frontier AI and open models for enterprises, research institutions and the scientific community.

NVIDIA has also made a significant investment in Thinking Machines Lab to support the company’s long-term growth.

“AI is the most powerful knowledge discovery instrument in human history,” said Jensen Huang, founder and CEO of NVIDIA. “Thinking Machines has brought together a world-class team to advance the frontier of AI. We are thrilled to partner with Thinking Machines to realize their exciting vision for the future of AI.”

“NVIDIA’s technology is the foundation on which the entire field is built,” said Mira Murati, cofounder and CEO of Thinking Machines. “This partnership accelerates our capacity to build AI that people can shape and make their own, as it shapes human potential in turn.”

Building powerful AI systems that are understandable, customizable and collaborative demands advances in research, design and infrastructure at scale. This partnership provides that foundation, with the shared aim of ensuring that the most transformative technology of our time expands human capability.