NVIDIA’s message was unmistakable as it kicked off the 10th annual GPU Technology Conference: it’s doubling down on the data center.
Founder and CEO Jensen Huang delivered a sweeping opening keynote at San Jose State University, describing the company’s progress accelerating the sprawling data centers that power the world’s most dynamic industries.
With a record 9,000 registered GTC attendees, he rolled out a spate of new technologies, detailed their broad adoption by industry leaders, including Cisco, Dell, Hewlett Packard Enterprise and Lenovo, and highlighted how NVIDIA technologies are relied on by some of the world’s biggest names, including Accenture, Amazon, Charter Communications, Microsoft and Toyota.
“The accelerated computing approach that we pioneered is really taking off,” said Huang, who exactly a week ago announced the company’s $6.9 billion acquisition of Mellanox, a leader in high-performance computing interconnect technology. “If you take a look at what we achieved last year, the momentum is absolutely clear.”
To be sure, Huang also detailed progress outside the data center, rolling out innovations targeting everything from robotics to pro graphics to the automotive industry.
Developers, Developers, Developers
The recurring theme, however, was how NVIDIA’s ability to couple software and silicon delivers the advances in computing power needed to transform torrents of data into insights and intelligence.
“Accelerated computing is not just about the chips,” Huang said. “Accelerated computing is a collaboration, a codesign, a continuous optimization between the architecture of the chip, the systems, the algorithm and the application.”
As a result, the GPU developer ecosystem is growing fast, Huang said. The number of developers has grown to more than 1.2 million from 800,000 last year; there now are 125 GPU-powered systems among the world’s 500 fastest supercomputers; and there are more than 600 applications powered by NVIDIA’s CUDA parallel computing platform.
Mellanox — whose interconnect technology helps power more than half the world’s 500 fastest supercomputers — complements NVIDIA’s strength in data centers and HPC, Huang said, explaining why NVIDIA agreed to buy the company earlier this month.
Mellanox CEO Eyal Waldman, who joined Huang on stage, said: “We’re seeing a great growth in data, we’re seeing an exponential growth. The program-centric data center is changing into a data-centric data center, which means the data will flow and create the programs, rather than the programs creating the data.”
Bringing AI to Data Centers
These technologies are all finding their way into the world’s data centers as businesses seek to turn data into a competitive advantage. Enterprises are building more powerful servers — “scaling up” with what Huang called “capability” systems — and networking their servers more closely together than ever — “scaling out” with “capacity” systems.
To help businesses move faster, Huang introduced CUDA-X AI, the world’s only end-to-end acceleration libraries for data science. CUDA-X AI arrives as businesses turn to AI — deep learning, machine learning and data analytics — to make data more useful, Huang explained.
The typical workflow for all of these follows the same steps: data processing, feature determination, training, verification and deployment. CUDA-X AI unlocks the flexibility of our NVIDIA Tensor Core GPUs to uniquely address this end-to-end AI pipeline.
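Those five stages can be sketched end to end in miniature. The toy below is a pure-Python, CPU-only illustration with made-up data and hypothetical function names — not NVIDIA’s API; in practice, CUDA-X AI libraries such as RAPIDS accelerate each of these stages on GPUs.

```python
# Toy walk-through of the five workflow stages: data processing,
# feature determination, training, verification and deployment.
# All names and data are illustrative, not part of CUDA-X AI.

def process(raw_rows):
    """Data processing: parse 'value,label' strings into typed records."""
    return [(float(v), lbl.strip()) for v, lbl in
            (row.split(",") for row in raw_rows)]

def featurize(records):
    """Feature determination: here the feature is simply the raw value."""
    return [(value, label) for value, label in records]

def train(features):
    """Training: learn a threshold halfway between the two class means."""
    spam = [v for v, l in features if l == "spam"]
    ham = [v for v, l in features if l == "ham"]
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

def verify(threshold, holdout):
    """Verification: accuracy of the learned threshold on held-out data."""
    correct = sum(("spam" if v > threshold else "ham") == l
                  for v, l in holdout)
    return correct / len(holdout)

def deploy(threshold):
    """Deployment: wrap the model as a prediction function for new inputs."""
    return lambda value: "spam" if value > threshold else "ham"

raw = ["4,spam", "5,spam", "6,spam", "1,ham", "2,ham", "3,ham"]
model = train(featurize(process(raw)))          # learned threshold: 3.5
accuracy = verify(model, [(7.0, "spam"), (0.0, "ham")])
classify = deploy(model)
```

The point of the pipeline framing is that each stage hands a well-defined artifact to the next, which is what lets every stage be accelerated independently.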
CUDA-X AI has been adopted by all the major cloud services, including Amazon Web Services, Google Cloud Platform and Microsoft Azure. It’s been adopted by Charter, PayPal, SAS and Walmart.
“Think about not just the costs that they’re saving, but the most precious resource that these data scientists have — time and iterations,” said Matt Garman, vice president of computing services at AWS.
Turing, RTX and Omniverse
NVIDIA’s Turing GPU architecture — and its RTX real-time ray tracing technology — is also being widely adopted. Huang highlighted more than 20 partners supporting NVIDIA RTX, including Adobe, Autodesk, Dassault Systèmes, Pixar, Siemens, Unity, Unreal and Weta Digital.
And for the fast-growing ranks of creative professionals working across increasingly complex pipelines around the globe, Huang introduced Omniverse, which lets them harness multiple applications to create and share scenes across different teams and locations. He described Omniverse as a collaboration tool — like Google Docs for 3D designers — letting artists anywhere in the world work on the same project.
“We wanted to make a tool that made it possible for studios all around the world to collaborate,” Huang said. “Omniverse basically connects up all the designers in the studios, it works with every tool.”
To speed the work of graphics pros using these and other tools, Huang introduced the NVIDIA RTX Server, a reference architecture that will be delivered by top system vendors.
The massive power savings alone mean these machines don’t just accelerate your work; they pay for themselves. “I used to say ‘The more you buy the more you save,’ but I think I was wrong,” Huang said, with a smile. “RTX Servers are free.”
To accelerate data preparation, model training and visualization, Huang also introduced NVIDIA-powered Data Science Workstations. Built with Quadro RTX GPUs and pre-installed with CUDA-X AI accelerated machine learning and deep learning software, these systems for data scientists are available from global workstation providers.
Bringing gaming technology to the data center, as well, Huang announced the GeForce NOW Alliance. It expands NVIDIA’s GeForce NOW online gaming service — built around specialized pods, each packing 1,280 GPUs in 10 racks, all linked with Mellanox high-speed interconnect technology — through partnerships with global telecom providers.
Together, GeForce NOW Alliance partners will scale GeForce NOW to serve millions more gamers, Huang said. Softbank and LG Uplus will be among the first partners to deploy RTX cloud gaming servers in Japan and Korea, respectively, later this year.
To underscore his announcement, he rolled a witty demo featuring characters in high-tech armor at a futuristic firing range, drawing broad applause from the audience. “Very few tech companies get to sit at the intersection of art and science and it’s such a thrill to be here,” Huang said. “NVIDIA is the ILM of real-time computer graphics and you can see it here.”
Inviting makers to build on NVIDIA’s platform, Huang announced Jetson Nano. It’s a small, powerful CUDA-X AI computer delivering 472 gigaflops of compute performance for modern AI workloads while consuming as little as 5 watts. Yet it supports the same architecture and software powering America’s fastest supercomputers.
Jetson Nano will come in two flavors: a $99 devkit for makers, developers, learners and students, available now; and a $129 production-ready module for creating mass-market AI-powered edge systems, available in June.
“Here’s the amazing thing about this little thing,” Huang said. “It’s $99 — the whole computer — and if you use Raspberry Pi and you just don’t have enough computer performance, you just get yourself one of these, and it runs the entire CUDA-X AI stack.”
Huang also announced the general availability of the Isaac SDK, a toolbox that saves manufacturers, researchers and startups hundreds of hours by making it easier to add AI for perception, navigation and manipulation into next-generation robots.
Huang finished his keynote with a flurry of automotive news.
He announced that NVIDIA is collaborating with Toyota, Toyota Research Institute-Advanced Development in Japan and Toyota Research Institute in the United States on the entire workflow of developing, training and validating self-driving vehicles.
“Today, we are announcing that the world’s largest car company is partnering with us from end to end,” Huang said.
Building on an ongoing relationship in which Toyota uses DRIVE AGX Xavier AV compute, the deal expands the collaboration to testing and validation using DRIVE Constellation — now available — which lets automakers simulate billions of miles of driving in all conditions.
And Huang announced Safety Force Field — a driving policy designed to shield self-driving cars from collisions, a sort of “cocoon” of safety.
“We have a computational method that detects the surrounding cars and predicts their natural path — knowing our own path — and computationally avoids traffic,” Huang said, adding that the open software has been validated in simulation and can be combined with any driving software.
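The “cocoon” intuition Huang describes can be boiled down to a simple worst-case check: a following vehicle is in conflict with the car ahead if the gap between them is smaller than the difference of their worst-case stopping distances. The sketch below is a deliberately simplified, one-lane illustration of that idea with made-up numbers — it is not NVIDIA’s actual Safety Force Field math.

```python
# Toy one-dimensional "safety cocoon" check for a follower behind a
# lead vehicle. Formula and braking figures are illustrative only,
# not NVIDIA's Safety Force Field implementation.

def stopping_distance(speed_mps, max_brake_mps2):
    """Distance covered while braking at maximum deceleration: v^2 / (2a)."""
    return speed_mps ** 2 / (2 * max_brake_mps2)

def in_conflict(gap_m, ego_speed_mps, lead_speed_mps, brake_mps2=6.0):
    """True if the ego vehicle could not stop before reaching the point
    where the lead vehicle, also braking hard, comes to rest."""
    ego_stop = stopping_distance(ego_speed_mps, brake_mps2)
    lead_stop = stopping_distance(lead_speed_mps, brake_mps2)
    return gap_m < ego_stop - lead_stop

# Ego at 30 m/s behind a lead car at 20 m/s: a 30 m gap is unsafe,
# a 50 m gap is safe under these assumed braking limits.
tight_gap_unsafe = in_conflict(30.0, 30.0, 20.0)
wide_gap_safe = not in_conflict(50.0, 30.0, 20.0)
```

A full driving policy would apply a check like this to the predicted paths of every surrounding vehicle, which is consistent with Huang’s description of detecting neighbors, predicting their natural paths and computationally avoiding conflict.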