Supercomputing has swept rapidly from the far edges of science to the heart of our everyday lives.
And propelling it forward — bringing it into the mobile phone already in your pocket and the car in your driveway — is GPU acceleration, NVIDIA CEO Jen-Hsun Huang told a packed house at a rollicking event kicking off this week’s SC15 annual supercomputing show in Austin, Texas. The event draws 10,000 researchers, national lab directors and others from around the world.
“Supercomputing technology is in the process of extending well beyond supercomputing itself,” Huang said, dressed in his trademark black leather jacket, speaking to a largely standing crowd. “These advancements are in the process of revolutionizing consumer applications, cloud services, the auto industry and autonomous machines.”
He described the GPU as the next processor to reach near-ubiquity in supercomputing, much as Intel's Xeon architecture has over the past decade. Its growing prevalence, working in tandem with CPUs, will gather momentum and set a new standard for supercomputing.
“A lot is about to change. Whether Moore’s Law slows or ends, it’s irrelevant. It’s very clear we need a new path forward,” he said.
That new path is propelled by GPU acceleration, he argued. Already, more than 100 of the world's fastest high-performance computing systems use accelerators, according to the TOP500 list of supercomputers released today.
More than two-thirds of those accelerators are NVIDIA Tesla GPUs, a share that is growing nearly 50 percent annually. And that's just the start.
In a sign of what's to come, he noted that GPU acceleration figures in several of the most powerful next-generation supercomputers already announced. Examples are the U.S. Energy Department's new systems for Oak Ridge and Lawrence Livermore national labs, expected to be the world's fastest when they come online in 2017, as well as IBM Watson, which IBM announced yesterday will use GPUs to achieve speedups of up to 10x.
The rise of acceleration is benefiting from three key trends. First, the slowing of Moore's Law, the observation that computing power doubles roughly every 18 to 24 months. Second, hundreds of high-performance computing applications, including the vast majority of the most popular ones, are now GPU accelerated. And third, even modest investments in accelerators can yield big throughput improvements.
Another driving factor is the tremendous momentum behind machine learning — a key field of artificial intelligence by which computers teach themselves about the real world. Led by web-services giants, a first wave of machine learning applications has already arrived.
Voice-driven web searches with near-perfect comprehension have quickly become part of everyday life. So have Facebook's facial-recognition capabilities, YouTube's click-to-buy features on videos and Google Photos' powerful new tools for customizing images.
“Machine learning is high performance computing’s first killer app for consumers,” Huang said. “Machine learning is in the process of going from the R&D stage into broad deployment.”
To further this trend, NVIDIA last week introduced its Tesla Hyperscale Acceleration line. A combination of hardware and software, it brings GPU-accelerated machine learning to massive data centers, letting them both train on enormous datasets and deploy what they learn directly to benefit consumers.
Huang also used the talk to describe an array of new products NVIDIA has announced over the course of the year that drive supercomputing-fueled machine learning across a wide range of uses.
In addition to the Hyperscale Acceleration suite, these include the just-launched Jetson TX1 module, which brings machine learning to portable devices like robots and drones; the NVIDIA DRIVE PX car computer, which has 12 inputs for cameras, radar and lidar and enables work toward autonomous driving; the NVIDIA GeForce GTX TITAN X GPU, which enables machine learning on a PC; and the DIGITS DevBox, a machine learning appliance that plugs into a regular wall socket and includes all the hardware and software needed to get right to work on deep learning.
Following Huang on stage, Ian Buck, who runs NVIDIA’s Accelerated Computing business, gave an update on Tesla’s capabilities in simulation and visualization, in addition to machine learning. He cited a broad range of fields turning to GPU acceleration, among them weather prediction. He noted that in recent months, major national weather supercomputers have turned to GPUs to create more precise predictions that can save thousands of lives in anticipation of disasters. NVIDIA is showing these and other applications in our booth at Supercomputing 2015 in Austin through Thursday.