NVIDIA and Zoox Pave the Way for Autonomous Ride-Hailing

‘The world has never seen a robotics company like this before,’ NVIDIA founder and CEO Jensen Huang said in a fireside chat with Zoox CEO Aicha Evans and Zoox cofounder and CTO Jesse Levinson.
by Jessica Soares

In celebration of Zoox’s 10th anniversary, NVIDIA founder and CEO Jensen Huang recently joined the robotaxi company’s CEO, Aicha Evans, and its cofounder and CTO, Jesse Levinson, to discuss the latest in autonomous vehicle (AV) innovation and experience a ride in the Zoox robotaxi.

In a fireside chat at Zoox’s headquarters in Foster City, Calif., the trio reflected on the two companies’ decade of collaboration. Evans and Levinson highlighted how Zoox pioneered the concept of a robotaxi purpose-built for ride-hailing and created groundbreaking innovations along the way, using NVIDIA technology.

“The world has never seen a robotics company like this before,” said Huang. “Zoox started out solely as a sustainable robotics company that delivers robots into the world as a fleet.”

Since 2014, Zoox has been on a mission to create fully autonomous, bidirectional vehicles purpose-built for ride-hailing services. This sets it apart in an industry largely focused on retrofitting existing cars with self-driving technology.

A decade later, the company is operating its robotaxi, powered by NVIDIA GPUs, on public roads.

Computing at the Core

Zoox robotaxis are, at their core, supercomputers on wheels. They’re built on multiple NVIDIA GPUs dedicated to processing the enormous amounts of data generated in real time by their sensors.

The sensor array includes cameras, lidar, radar, long-wave infrared sensors and microphones. The onboard computing system rapidly processes and fuses the raw sensor data to provide a coherent understanding of the vehicle’s surroundings.

The processed data then flows through a perception engine and prediction module to planning and control systems, enabling the vehicle to navigate complex urban environments safely.
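
To make that flow concrete, the sketch below walks a single frame through fusion, prediction and planning. It is a toy illustration with hypothetical names, not Zoox’s actual software.

```python
# Toy sketch of an AV data flow: fuse sensor readings, predict agent
# motion, then plan a response. All names and logic are hypothetical.
import random

def read_sensors():
    # Stand-in for cameras, lidar, radar and infrared: each modality
    # reports candidate detections with a position and confidence.
    return {m: [{"pos": (random.uniform(-50, 50), random.uniform(-50, 50)),
                 "conf": random.random()}]
            for m in ("camera", "lidar", "radar", "infrared")}

def fuse(readings):
    # Merge per-modality detections into one coherent object list.
    return [d for dets in readings.values() for d in dets if d["conf"] > 0.3]

def predict(objects):
    # Toy constant-velocity forecast one second ahead.
    return [{**o, "future_pos": (o["pos"][0] + 1.0, o["pos"][1])} for o in objects]

def plan(forecasts):
    # Slow down if any forecast object will be within 10 m of the vehicle.
    near = any(abs(x) < 10 and abs(y) < 10
               for f in forecasts for x, y in [f["future_pos"]])
    return {"target_speed": 2.0 if near else 10.0}

command = plan(predict(fuse(read_sensors())))
print(command)  # e.g. {'target_speed': 10.0}
```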

NVIDIA GPUs deliver the immense computing power required for the Zoox robotaxis’ autonomous capabilities and continuous learning from new experiences.

Using Simulation as a Virtual Proving Ground

Key to Zoox’s AV development process is its extensive use of simulation. The company uses NVIDIA GPUs and software tools to run a wide array of simulations, testing its autonomous systems in virtual environments before real-world deployment.

These simulations range from synthetic scenarios to replays of real-world scenarios created using data collected from test vehicles. Zoox uses retrofitted Toyota Highlanders equipped with the same sensor and compute packages as its robotaxis to gather driving data and validate its autonomous technology.

This data is then fed back into simulation environments, where it can be used to create countless variations and replays of scenarios and agent interactions.

Zoox also uses what it calls “adversarial simulations,” carefully crafted scenarios designed to test the limits of the autonomous systems and uncover potential edge cases.
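
The idea behind adversarial scenario search can be illustrated with a toy parameter sweep: perturb a scenario’s parameters and flag the combinations where a simple safety check fails. The sketch below is a hypothetical example, not Zoox’s simulator.

```python
# Toy adversarial scenario sweep: vary pedestrian speed and spawn
# distance, and flag variants where a simple braking model fails.
# Parameters and the safety model are illustrative, not Zoox's.
import itertools

def is_unsafe(pedestrian_speed, spawn_distance, ego_speed=10.0):
    time_to_conflict = spawn_distance / ego_speed   # s until ego reaches crossing
    braking_time = ego_speed / 4.0                  # s to stop at 4 m/s^2
    pedestrian_travel = pedestrian_speed * time_to_conflict
    # Unsafe if the pedestrian reaches the 3 m crossing point before
    # the ego vehicle has time to brake to a stop.
    return pedestrian_travel > 3.0 and time_to_conflict < braking_time

speeds = [0.5, 1.0, 1.5, 2.0, 2.5]    # pedestrian speeds, m/s
distances = [5.0, 10.0, 20.0, 40.0]   # spawn distances, m

edge_cases = [(s, d) for s, d in itertools.product(speeds, distances)
              if is_unsafe(s, d)]
print(f"{len(edge_cases)} edge cases found:", edge_cases)
```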

The company’s comprehensive approach to simulation allows it to rapidly iterate and improve its autonomous driving software, bolstering AV safety and performance.

“We’ve been using NVIDIA hardware since the very start,” said Levinson. “It’s a huge part of our simulator, and we rely on NVIDIA GPUs in the vehicle to process everything around us in real time.”

A Neat Way to Seat

Zoox’s robotaxi, with its unique bidirectional design and carriage-style seating, is optimized for autonomous operation and passenger comfort. It eliminates the traditional concepts of a car’s “front” and “back” while providing equal comfort and safety for all occupants.

“I came to visit you when you were zero years old, and the vision was compelling,” Huang said, reflecting on Zoox’s evolution over the years. “The challenge was incredible. The technology, the talent — it is all world-class.”

Using NVIDIA GPUs and tools, Zoox is poised to redefine urban mobility, pioneering a future of safe, efficient and sustainable autonomous transportation for all.

From Testing Miles to Market Projections

As the AV industry gains momentum, recent projections highlight the potential for explosive growth in the robotaxi market. Guidehouse Insights forecasts over 5 million robotaxi deployments by 2030, with numbers expected to surge to almost 34 million by 2035.

The regulatory landscape reflects this progress, with 38 companies currently holding valid permits to test AVs with safety drivers in California. Zoox is currently one of only six companies permitted to test AVs without safety drivers in the state.

As the industry advances, Zoox has created a next-generation robotaxi by combining cutting-edge onboard computing with extensive simulation and development.

In the image at top, NVIDIA founder and CEO Jensen Huang stands with Zoox CEO Aicha Evans and Zoox cofounder and CTO Jesse Levinson in front of a Zoox robotaxi.

New Adobe Premiere Color Grading Mode Accelerated on NVIDIA GPUs

New NVIDIA RTX-accelerated features streamline creative workflows in Adobe Premiere, while NVIDIA Project G-Assist simplifies system optimization.
by Joel Pennington

NAB Show 2026, running April 18-22 in Las Vegas, is set to showcase a wave of new features and optimizations for top video editing applications. Bringing together over 60,000 content professionals from across the broadcast, media and entertainment industries, the event highlights how video editors, livestreamers and professional creators are exploring new tools, accelerated by NVIDIA RTX technology, to enhance and streamline their creative workflows.

At the show, Adobe is announcing a new Adobe Premiere Color Mode in beta. 

Designed to function as a dedicated grading environment nested directly within Premiere, it offers a clean, responsive interface that lets editors stay in their creative flow rather than relying on external tools for color correction. Tapping into GPU acceleration on NVIDIA GeForce RTX- and NVIDIA RTX PRO-equipped systems, the streamlined workflow operates in 32-bit color depth for the first time, delivering significantly faster performance and higher quality.

NVIDIA also launched a new update to NVIDIA Project G-Assist — an experimental AI assistant that helps tune, control and optimize GeForce RTX systems. 

Color Meets Compute

Premiere’s Color Mode is a clean, responsive interface within Adobe Premiere that enables editors to color grade native video directly in the app. Every element is designed to guide editors through the grading process without distractions. A large program monitor anchors the experience, providing immediate visual feedback as adjustments are made to enable faster decision-making and more precise control.

A clip grid view allows editors to visualize progression across shots in a sequence. This makes it easier to maintain consistency across scenes and ensure a cohesive look throughout a project. 

Controls are organized into focused modules, each tailored to a specific aspect of color grading. Multiple modules can be active simultaneously, giving editors flexibility while maintaining clarity. Each control features a unique heads-up display (HUD), providing contextual guidance without cluttering the interface.

Color grading is one of the most computationally intensive tasks in post-production. Every adjustment — bidirectional controls, multi-zone tonal shaping and stacked color operations — runs on NVIDIA GPUs, accelerating playback, iteration and visual feedback.

Editors can work with up to six luminance adjustment zones, moving beyond traditional highlights, midtones and shadows models. This allows for more nuanced tonal control and finer adjustments across the image. 
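
Conceptually, multi-zone tonal control can be approximated with smooth, overlapping weight curves over luminance, each zone carrying its own gain. The NumPy sketch below illustrates the idea only; it is an assumption for illustration, not Adobe’s algorithm.

```python
# Illustrative multi-zone luminance grading: six overlapping Gaussian
# weight curves over normalized luminance, each with its own gain.
# This approximates the concept only; it is not Adobe's implementation.
import numpy as np

def zone_weights(luma, centers, width=0.12):
    # Smooth, overlapping membership of each pixel in each zone.
    w = np.exp(-((luma[..., None] - centers) ** 2) / (2 * width ** 2))
    return w / w.sum(axis=-1, keepdims=True)  # normalize across zones

centers = np.linspace(0.05, 0.95, 6)  # six zones, deep shadows to highlights
gains = np.array([1.10, 1.05, 1.00, 0.98, 0.95, 0.90])  # per-zone exposure tweaks

luma = np.random.rand(4, 4).astype(np.float32)  # toy 32-bit float luminance
graded = luma * (zone_weights(luma, centers) @ gains)

# Float precision keeps out-of-range values available instead of clipping.
print(graded.min(), graded.max())
```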

Visual scopes are context-aware, dynamically adapting based on the selected tool. HUD overlays provide visual cues directly within the scopes, helping editors understand how their adjustments affect the image without needing to interpret complex graphs.

The entire system now operates in 32-bit color depth, delivering maximum color fidelity and preventing unwanted clipping. Editors retain full control, with the ability to clip colors intentionally when needed for creative effect. Color styles can also be applied flexibly, at the sequence, clip, reel or custom group level, making it easier to manage looks across complex projects.

Download the Adobe Premiere beta to get started with Color Mode.

Project G-Assist: Enhanced Recommendations and Controls 

The NVIDIA Project G-Assist on-device AI assistant helps users get the most out of their hardware. Today’s update adds an advanced detection system for gaming settings, as well as an enhanced knowledge system, enabling G-Assist to deliver higher accuracy when providing advice or adjusting settings for esports and AAA gaming.

The assistant can also now control more settings across systems. It can configure advanced RTX features from the NVIDIA App, including NVIDIA DLSS Overrides, Smooth Motion, RTX HDR, Digital Vibrance and encoder settings.

Download Project G-Assist v0.2.1 from the NVIDIA App.

#ICYMI: The Latest Updates for RTX AI PCs

📹 Learn how visual effects shop Corridor Crew’s Niko Pueringer built his own green-screen keying tool, powered by NVIDIA RTX GPUs, at NAB. Stop by the Puget Systems booth on Monday, April 20, at 1 p.m. PT for a special presentation, or tune in on NVIDIA Studio’s YouTube channel on Tuesday, April 21, at 12 p.m. PT to watch the full session.

🖼️ Also at NAB, join NVIDIA’s Sabour Amirazodi for a special presentation at the ASUS booth on Tuesday, April 21, at 11 a.m. PT. Amirazodi will showcase how guiding generative AI can produce creative outputs like storyboards or entire movie trailers — based on a single image input. 

📽️ Check out content creator Gavin Herman’s Studio Session, “How to Edit Professional Talking Head Videos in DaVinci Resolve,” on the NVIDIA Studio YouTube channel. Generative workflow specialists can watch this two-hour, instructor-led workshop on how to use NVIDIA GPU acceleration for ComfyUI.

🦞 LM Studio is now an official OpenClaw provider, meaning OpenClaw can run local models through LM Studio on NVIDIA GPUs, unlocking faster on-device performance.

🦥 Unsloth and NVIDIA have teamed up to eliminate hidden bottlenecks that slow down fine-tuning on NVIDIA GPUs, improving fine-tuning performance by 15%. 

✨ Google’s Gemma 4 family of omni-capable models is built for local AI across a wide range of devices. Google and NVIDIA have optimized Gemma 4 for NVIDIA GPUs, enabling efficient performance on NVIDIA RTX-powered PCs and workstations, NVIDIA DGX Spark personal AI supercomputers and NVIDIA Jetson Orin Nano edge AI modules.

📽️ Check out this NVIDIA GTC session on how developers can build, run and optimize AI agents locally on NVIDIA GPUs, covering everything from quantization to backends like Ollama and applications like OpenClaw and ComfyUI.

👀 Wondershare Filmora has added a new Eye Contact Correction feature based on the NVIDIA Broadcast Eye Contact feature. Running in the cloud on NVIDIA GPUs, it refines the gaze of subjects in post-production for a more natural, confident and camera-ready look, delivering polished, professional videos in seconds.

Filmora’s AI Eye Contact Correction feature powered in the cloud by NVIDIA GPUs.

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.

Follow NVIDIA Workstation on LinkedIn and X

Into the Omniverse: NVIDIA GTC Showcases Virtual Worlds Powering the Physical AI Era

by Heather McDiarmid
Key visual showcasing partner robots running and working on an assembly line.

Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

NVIDIA GTC last week showcased a turning point in physical AI: Robots, vehicles and factories are scaling from single use cases and isolated deployments to sophisticated enterprise workloads across industries. 

At the center of this shift are new frontier models for physical AI, including NVIDIA Cosmos 3, NVIDIA Isaac GR00T N1.7 and NVIDIA Alpamayo 1.5. 

NVIDIA also released the NVIDIA Physical AI Data Factory Blueprint, designed to push the state of the art in world modeling, humanoid skills and autonomous driving, as well as the NVIDIA Omniverse DSX Blueprint for AI factory digital twin simulation.

Open source agentic frameworks such as OpenClaw extend the AI stack all the way to operations — enabling long‑running “claws” that use tools, memory and messaging interfaces to orchestrate workflows, manage data pipelines and execute tasks autonomously on dedicated machines. 

“With NVIDIA and the broader ecosystem, we’re building the claws and guardrails that let anyone create powerful, secure AI assistants,” said Peter Steinberger, creator of OpenClaw, in an NVIDIA press release from GTC. 

OpenUSD is a driving force behind the scalability of physical AI — providing a common, scene‑description language that lets teams bring computer-aided design (CAD) data, simulation assets and real‑world telemetry into a shared, physically accurate view of the world. 

Simulating the AI Factory Before It’s Built

Modern AI factories are complex — spanning thermals, power grids, network load and mechanical systems. Building them on time and on budget becomes much easier when using simulation technology. 

To tackle this, NVIDIA introduced the Omniverse DSX Blueprint at GTC, a reference architecture that unifies simulation across every layer of an AI factory through a single digital twin. This enables operators to optimize performance and efficiency before a rack is installed in the real world.

Compute Is Data: Real-World Data Is No Longer the Moat

Real-world data used to function as a moat for physical AI — but it doesn’t scale. The real world is messy, unpredictable and full of edge cases, and the pipelines to process, simulate and evaluate data are fragmented. The bottleneck isn’t just data — it’s the entire data factory.

To help address this, NVIDIA introduced at GTC its Physical AI Data Factory Blueprint, an open reference architecture that transforms compute into large-scale, high-quality training data. Built on NVIDIA Cosmos open world foundation models and the NVIDIA OSMO operator, it unifies data curation, augmentation and evaluation into a single pipeline, enabling developers to generate diverse, long-tail datasets from limited real-world inputs.
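
The stages the blueprint describes can be pictured as a simple curate-augment-evaluate pipeline. The sketch below uses toy stand-in functions to show the shape of that flow; none of these functions are real Cosmos or OSMO APIs.

```python
# Hypothetical sketch of a curate -> augment -> evaluate data pipeline.
# The toy logic stands in for the blueprint's described stages; these
# are not NVIDIA APIs.
def curate(clips):
    # Keep only clips with enough motion to be worth training on.
    return [c for c in clips if c["motion"] > 0.2]

def augment(clips, variants_per_clip=3):
    # Expand each real clip into synthetic long-tail variants
    # (e.g. weather, lighting); here just labeled copies.
    return [{**c, "variant": v} for c in clips for v in range(variants_per_clip)]

def evaluate(dataset):
    # Score coverage: average number of variants per source clip.
    sources = {c["id"] for c in dataset}
    return len(dataset) / max(len(sources), 1)

raw = [{"id": i, "motion": i / 10} for i in range(10)]  # toy real-world clips
dataset = augment(curate(raw))
print(f"{len(dataset)} samples, coverage {evaluate(dataset):.1f}x")
```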

Leading physical AI developers including FieldAI, Hexagon Robotics, Linker Vision, Milestone Systems, Skild AI and Teradyne Robotics are already tapping the blueprint to speed up robotics projects, vision AI agents and autonomous vehicle programs.

Microsoft Azure and Nebius are the first cloud platforms to offer the blueprint, turning world-scale compute into turnkey data production engines.

“Together with cloud leaders, we’re providing a new kind of agentic engine that transforms compute into the high-quality data required to bring the next generation of autonomous systems and robots to life,” said Rev Lebaredian, vice president of Omniverse and simulation technologies at NVIDIA, in this press release. “In this new era, compute is data.”

From OpenUSD to Reality: Seamless Design to Deployment

Converting CAD files to OpenUSD is a critical step in the physical AI pipeline — transforming engineering data into simulation-ready assets that developers can use to build, test and validate robots in physically accurate virtual environments. 

Using tools like the NVIDIA Omniverse Kit software development kit and NVIDIA Isaac Sim, teams can optimize and enrich 3D data for real-time rendering, simulation and collaborative workflows.  
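
For developers new to the format, authoring a simulation-ready stage with the open-source pxr (OpenUSD) Python API looks roughly like the sketch below. The file paths, prim names and physics schemas applied here are illustrative choices, not a specific production pipeline.

```python
# Minimal sketch of authoring a simulation-ready OpenUSD stage with the
# pxr Python API (pip install usd-core). Paths and names are hypothetical.
from pxr import Usd, UsdGeom, UsdPhysics

stage = Usd.Stage.CreateNew("robot_cell.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)
UsdGeom.SetStageMetersPerUnit(stage, 1.0)

# Root prim for the converted CAD assembly.
root = UsdGeom.Xform.Define(stage, "/RobotCell")
stage.SetDefaultPrim(root.GetPrim())

# Reference geometry exported from CAD (hypothetical file).
arm = stage.DefinePrim("/RobotCell/ArmBase")
arm.GetReferences().AddReference("./cad_exports/arm_base.usd")

# Enrich with physics schemas so the asset is simulation-ready.
UsdPhysics.RigidBodyAPI.Apply(arm)
UsdPhysics.CollisionAPI.Apply(arm)

stage.GetRootLayer().Save()
```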

Companies including FANUC and Fauna Robotics are using this seamless CAD-to-OpenUSD workflow to speed up robotic system design and validation.

Transforming Manufacturing and Logistics Through Industrial Digital Twins

“Factories themselves are now robotic systems,” Lebaredian said during his special address on digital twins and simulation at GTC. 

All factories are born in simulation. The NVIDIA Mega Omniverse Blueprint provides enterprises with a reference architecture to design, test and optimize robot fleets and AI agents in a physically accurate facility digital twin before a single robot is deployed on the floor. 

KION, working with Accenture and Siemens, is using this blueprint to build large-scale warehouse digital twins that train and test fleets of NVIDIA Jetson-based autonomous forklifts for GXO, the world’s largest pure-play contract logistics provider. 

Physical AI Steps From Simulation to the Real World

NVIDIA is partnering with the global robotics ecosystem — including leading robot brain developers, industrial robot giants and humanoid pioneers — to enhance production-level physical AI. 

ABB Robotics, FANUC, KUKA and Yaskawa, which have a combined global install base of over 2 million robots, are using NVIDIA Omniverse libraries and NVIDIA Isaac simulation frameworks to validate complex robot applications and production lines through physically accurate digital twins. These companies have also integrated NVIDIA Jetson modules into their controllers to enable real-time AI inference. 

Robot development starts with the robot brains, which is why leading developers including FieldAI and Skild AI are building theirs using NVIDIA Cosmos world models for data generation and Isaac simulation frameworks to validate policies in simulation. 

Meanwhile, Generalist AI is using NVIDIA Cosmos to explore synthetic data generation, an approach that allows robots to become proficient in a wide range of tasks, from supply chain monitoring to food delivery, at an exceptional pace.

Read all of NVIDIA’s announcements from GTC on this online press kit and watch the keynote replay. Catch up on all Physical AI Days sessions from GTC and watch the developer livestream replay.