NVIDIA founder and CEO Jensen Huang took the stage at the Fontainebleau Las Vegas to open CES 2026, declaring that AI is scaling into every domain and every device.
“Computing has been fundamentally reshaped as a result of accelerated computing, as a result of artificial intelligence,” Huang said. “What that means is some $10 trillion or so of the last decade of computing is now being modernized to this new way of doing computing.”
Huang unveiled Rubin, NVIDIA’s first extreme-codesigned, six-chip AI platform now in full production, and introduced Alpamayo, an open reasoning model family for autonomous vehicle development — part of a sweeping push to bring AI into every domain.
With Rubin, NVIDIA aims to “push AI to the next frontier” while slashing the cost of generating tokens to roughly one-tenth that of the previous platform, Huang said, making large-scale AI far more economical to deploy.
Huang also emphasized the role of NVIDIA open models across every domain, trained on NVIDIA supercomputers, forming a global ecosystem of intelligence that developers and enterprises can build on.
“Every single six months, a new model is emerging, and these models are getting smarter and smarter,” Huang said. “Because of that, you could see the number of downloads has exploded.”
Find all NVIDIA news from CES in this online press kit.
A New Engine for Intelligence: The Rubin Platform
After introducing the audience to pioneering American astronomer Vera Rubin, for whom NVIDIA named its next-generation computing platform, Huang announced that the NVIDIA Rubin platform is now in full production. Rubin succeeds the record‑breaking NVIDIA Blackwell architecture and is the company's first extreme-codesigned, six‑chip AI platform.

Built from the data center outward, Rubin platform components span:
- Rubin GPUs delivering 50 petaflops of NVFP4 inference performance
- Vera CPUs engineered for data movement and agentic processing
- NVLink 6 scale‑up networking
- Spectrum‑X Ethernet Photonics scale‑out networking
- ConnectX‑9 SuperNICs
- BlueField‑4 DPUs
Extreme codesign — designing all these components together — is essential because scaling AI to gigascale requires tightly integrated innovation across chips, trays, racks, networking, storage and software to eliminate bottlenecks and dramatically reduce the costs of training and inference, Huang explained.
He also introduced the NVIDIA Inference Context Memory Storage Platform, an AI‑native KV‑cache storage tier that boosts long‑context inference with 5x higher tokens per second, 5x better performance per TCO dollar and 5x better power efficiency.
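The idea behind a KV‑cache tier can be illustrated with a toy sketch: cache the expensive per‑token key/value computation for a shared prompt prefix so repeated long‑context requests don't recompute it. This is a hypothetical illustration of the general technique, not NVIDIA's implementation; the class and numbers below are invented for the example.

```python
class KVCache:
    """Toy KV cache: computes 'key/value' results for a token prefix once,
    then serves repeat requests from the cache instead of recomputing."""

    def __init__(self):
        self.store = {}          # prefix tuple -> cached computation
        self.recomputed_tokens = 0

    def _compute(self, tokens):
        # Stand-in for the expensive attention key/value computation.
        self.recomputed_tokens += len(tokens)
        return [hash(t) for t in tokens]

    def get(self, tokens):
        prefix = tuple(tokens)
        if prefix not in self.store:
            self.store[prefix] = self._compute(tokens)
        return self.store[prefix]

cache = KVCache()
prompt = ["You", "are", "a", "helpful", "assistant"]
cache.get(prompt)   # first request: computed
cache.get(prompt)   # second request: served from cache
print(cache.recomputed_tokens)  # 5, not 10
```

In a real serving stack, the cached tensors would be tiered out to fast storage rather than held in a Python dictionary, which is what makes the tokens-per-second and power-efficiency claims a storage-platform story.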
Put it all together and the Rubin platform promises to dramatically accelerate AI innovation, delivering AI tokens at one-tenth the cost. “The faster you train AI models, the faster you can get the next frontier out to the world,” Huang said. “This is your time to market. This is technology leadership.”
Open Models for All

NVIDIA’s open models — trained on NVIDIA’s own supercomputers — are powering breakthroughs across healthcare, climate science, robotics, embodied intelligence and autonomous driving.
“Now on top of this platform, NVIDIA is a frontier AI model builder, and we build it in a very special way,” Huang said. “We build it completely in the open so that we can enable every company, every industry, every country, to be part of this AI revolution.”
The portfolio spans six domains — Clara for healthcare, Earth-2 for climate science, Nemotron for reasoning and multimodal AI, Cosmos for robotics and simulation, GR00T for embodied intelligence and Alpamayo for autonomous driving — creating a foundation for innovation across industries.
“These models are open to the world,” Huang said, underscoring NVIDIA’s role as a frontier AI builder with world-class models topping leaderboards. “You can create the model, evaluate it, guardrail it and deploy it.”
AI on Every Desk: RTX, DGX Spark and Personal Agents
Huang emphasized that AI’s future is not only about supercomputers — it’s personal.
Huang showed a demo featuring a personalized AI agent running locally on the NVIDIA DGX Spark desktop supercomputer and embodied through a Reachy Mini robot using Hugging Face models, illustrating how open models, model routing and local execution turn agents into responsive, physical collaborators.
“The amazing thing is that is utterly trivial now, but yet, just a couple of years ago, that would have been impossible, absolutely unimaginable,” Huang said.
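The model-routing idea from the demo can be sketched in a few lines: a router keeps quick requests on a small local model and escalates longer or harder ones to a bigger hosted model. Everything here is a stub for illustration; the function names and the length-based heuristic are assumptions, not a real API from the demo.

```python
def local_model(prompt):
    # Stand-in for a small model running on-device (e.g., on a DGX Spark).
    return f"[local] {prompt[:20]}"

def cloud_model(prompt):
    # Stand-in for a larger hosted model.
    return f"[cloud] {prompt[:20]}"

def route(prompt, local_limit=64):
    # Simple heuristic: keep short prompts on-device for low latency,
    # send long or complex ones to the bigger model.
    if len(prompt) <= local_limit:
        return local_model(prompt)
    return cloud_model(prompt)

print(route("What time is it?"))                              # handled locally
print(route("Summarize this 10-page report: " + "x" * 500))   # escalated
```

Real routers weigh more than prompt length (task type, latency budget, privacy), but the shape is the same: a cheap decision in front of a portfolio of models.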
The world’s leading enterprises are integrating NVIDIA AI to power their products, Huang said, citing companies including Palantir, ServiceNow, Snowflake, CodeRabbit, CrowdStrike, NetApp and Symantec.
“Whether it’s Palantir or ServiceNow or Snowflake — and many other companies that we’re working with — the agentic system is the interface.”
At CES, NVIDIA also announced that DGX Spark delivers up to 2.6x higher performance for large models, with new support for Lightricks LTX‑2 and FLUX image models, and upcoming NVIDIA AI Enterprise availability.
Physical AI

AI is now grounded in the physical world through NVIDIA’s technologies for training, inference and edge computing.
These systems can be trained on synthetic data in virtual worlds long before interacting with the real world.
Huang showcased NVIDIA Cosmos open world foundation models trained on videos, robotics data and simulation. Cosmos:
- Generates realistic videos from a single image
- Synthesizes multi‑camera driving scenarios
- Models edge‑case environments from scenario prompts
- Performs physical reasoning and trajectory prediction
- Drives interactive, closed‑loop simulation
Advancing this story, Huang announced Alpamayo, an open portfolio of reasoning vision-language-action (VLA) models, simulation blueprints and datasets enabling level 4‑capable autonomy. This includes:
- Alpamayo R1 — the first open, reasoning VLA model for autonomous driving
- AlpaSim — a fully open simulation blueprint for high‑fidelity AV testing

“Not only does it take sensor input and activates steering wheel, brakes and acceleration, it also reasons about what action it is about to take,” Huang said, teeing up a video showing a vehicle smoothly navigating busy San Francisco traffic.
Huang announced that the first passenger car featuring Alpamayo, built on the NVIDIA DRIVE full-stack autonomous vehicle platform, will be on the roads soon in the all‑new Mercedes‑Benz CLA, bringing AI‑defined driving to the U.S. this year. The announcement follows the CLA’s recent Euro NCAP five‑star safety rating.
Huang also highlighted growing momentum behind DRIVE Hyperion, the open, modular, level‑4‑ready platform adopted by leading automakers, suppliers and robotaxi providers worldwide.

“Our vision is that, someday, every single car, every single truck will be autonomous, and we’re working toward that future,” Huang said.
Huang was then joined on stage by a pair of tiny beeping, booping, hopping robots as he explained how NVIDIA’s full‑stack approach is fueling a global physical AI ecosystem.
Huang rolled a video showing how robots are trained in NVIDIA Isaac Sim and Isaac Lab in photorealistic, simulated worlds — before highlighting the work of partners in physical AI across the industry, including Synopsys and Cadence, Boston Dynamics and Franka, and more.
Huang also appeared with Siemens CEO Roland Busch at the company’s Tuesday keynote to announce an expanded partnership, supported by a montage showing how NVIDIA’s full stack integrates with Siemens’ industrial software, enabling physical AI from design and simulation through production.
“These manufacturing plants are going to be essentially giant robots,” Huang said at NVIDIA’s presentation on Monday.

Building the Future, Together
Huang explained that NVIDIA builds entire systems now because it takes a full, optimized stack to deliver AI breakthroughs.
“Our job is to create the entire stack so that all of you can create incredible applications for the rest of the world,” he said.
Watch the full presentation replay.
DLSS 4.5 and Other Gaming and Creator Updates
On Monday evening, NVIDIA announced DLSS 4.5, which introduces Dynamic Multi Frame Generation, a new 6X Multi Frame Generation mode and a second-generation transformer model for DLSS Super Resolution, so gamers can experience the latest and greatest titles with enhanced performance and visuals.
Over 250 games and apps now support NVIDIA DLSS 4 technology, with this year’s biggest titles adding support, including 007 First Light, Phantom Blade Zero, PRAGMATA and Resident Evil Requiem at launch.
RTX Remix Logic debuted, expanding the capabilities of the Remix modding platform to enable modders to trigger dynamic graphics effects throughout a game based on real-time game events.
Plus, a demonstration of NVIDIA ACE technology in Total War: PHARAOH showcased how AI can assist players in navigating the complexities of the game’s many systems and mechanics.
In PUBG: BATTLEGROUNDS, PUBG Ally powered by NVIDIA ACE adds long-term memory, evolving its intelligence and capabilities.
And G-SYNC Pulsar monitors are available this week, delivering a tear-free experience with perceived effective motion clarity of 1,000Hz+ and G-SYNC Ambient Adaptive Technology, setting a new gold standard for gamers.
In addition, NVIDIA is bringing GeForce RTX gaming to more devices with new GeForce NOW Apps for Linux PC and Amazon Fire TV.
And NVIDIA RTX accelerates 4K AI video generation on PCs with LTX-2 and ComfyUI upgrades.
Read more about these announcements from Monday night at CES in this GeForce news article.
Learn more about all NVIDIA announcements at CES.
