OpenAI and NVIDIA just announced a landmark AI infrastructure partnership — an initiative that will scale OpenAI’s compute with multi-gigawatt data centers powered by millions of NVIDIA GPUs.
To discuss what this means for the next generation of AI development and deployment, the CEOs of both companies and the president of OpenAI spoke this morning with CNBC’s Jon Fortt.
“This is the biggest AI infrastructure project in history,” said NVIDIA founder and CEO Jensen Huang in the interview. “This partnership is about building an AI infrastructure that enables AI to go from the labs into the world.”
Through the partnership, OpenAI will deploy at least 10 gigawatts of NVIDIA systems for its next-generation AI infrastructure, including the NVIDIA Vera Rubin platform. NVIDIA also intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed.
“There’s no partner but NVIDIA that can do this at this kind of scale, at this kind of speed,” said Sam Altman, CEO of OpenAI.
The million-GPU AI factories built through this agreement will help OpenAI meet the training and inference demands of its next frontier of AI models.
“Building this infrastructure is critical to everything we want to do,” Altman said. “This is the fuel that we need to drive improvement, drive better models, drive revenue, drive everything.”

Building Million-GPU Infrastructure to Meet AI Demand
Since the 2022 launch of OpenAI’s ChatGPT, the fastest application in history to reach 100 million users, the company has grown its user base to more than 700 million weekly active users and delivered increasingly advanced capabilities, including support for agentic AI, AI reasoning, multimodal data and longer context windows.
To support its next phase of growth, the company’s AI infrastructure must scale to meet not only the training demands but also the inference demands of its most advanced models, serving agentic and reasoning AI users worldwide.
“The cost per unit of intelligence will keep falling and falling and falling, and we think that’s great,” said Altman. “But on the other side, the frontier of AI, maximum intellectual capability, is going up and up. And that enables more and more use — and a lot of it.”
Without enough computational resources, Altman explained, people would have to choose between impactful use cases, such as researching a cancer cure or offering free education.
“No one wants to make that choice,” he said. “And so increasingly, as we see this, the answer is just much more capacity so that we can serve the massive need and opportunity.”

The first gigawatt of NVIDIA systems, built with NVIDIA Vera Rubin GPUs, will generate its first tokens in the second half of 2026.
The partnership expands on a long-standing collaboration between NVIDIA and OpenAI, which began with Huang hand-delivering the first NVIDIA DGX system to the company in 2016.
“This is a billion times more computational power than that initial server,” said Greg Brockman, president of OpenAI. “We’re able to actually create new breakthroughs, new models…to empower every individual and business because we’ll be able to reach the next level of scale.”
Huang emphasized that this massive buildout of AI infrastructure around the world is just the beginning.
“We’re literally going to connect intelligence to every application, to every use case, to every device — and we’re just at the beginning,” Huang said. “This is the first 10 gigawatts, I assure you of that.”
Watch the CNBC interview.