HOT CHIPS 2009 KEYNOTE BY JEN-HSUN HUANG

by Kevin Krewell

NVIDIA CEO Jen-Hsun Huang delivered the opening keynote for the 21st annual Hot Chips conference, presenting a vision of a future of computing enhanced by harnessing the massively parallel computing power of GPUs.

The Hot Chips conference, held at Stanford’s Memorial Auditorium, brings together academic and industry leaders for a three-day event focused on the latest technology trends in the computer chip business. This year we had well over 400 attendees (the conference is still ongoing as I write this, with registrations still being taken), along with more than 30 press members and analysts.

In his keynote, Huang said that this new world was completely unimaginable when he started NVIDIA back in 1993. Even then, the business of building a chip to enhance PC gaming was considered radical. Now, those chips have grown in capability and programmability to not only enable stunning games, but also provide advanced tools to help cure cancer. If this dream/vision had been articulated in 1993, Huang is convinced he would not have been funded!

NVIDIA CEO Jen-Hsun Huang giving the 2009 Hot Chips Keynote

Huang enthusiastically painted a picture of a world where the massive threading and computing capability of the GPU can provide performance increases of many orders of magnitude over a multi-core CPU alone. While mobile processing was not the major theme of the talk, Huang also mentioned that the other vector of NVIDIA chip development is the Tegra processor, which can provide mainstream handheld application performance while consuming only a few milliwatts.


As background to this new era of GPU Compute, he showed a chart setting out a brief history of NVIDIA, from its first successful graphics chip, the RIVA 128, which required only three million transistors, to the present day, where a new GPU has over one billion transistors, takes three to four years and thousands of man-years to develop, and costs about $1 billion. His point was that the GPU has evolved to a point where it can provide cost- and power-effective computing capabilities while still leveraging its “day job” running visually beautiful games.

It was with the G80, launched in 2006, that GPUs gained additional capabilities, such as larger shared memories and load-store operations, and it became practical to think of a dedicated computation mode. NVIDIA called this capability for heterogeneous computing “CUDA.”
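As a point of reference, the minimal sketch below (my own illustration, not from the keynote) shows the kind of thing those G80-era features enable: a CUDA kernel that loads a tile of data into on-chip shared memory, synchronizes, and stores a computed result back to global memory. The kernel name, array names, and launch parameters are hypothetical.

    // Each block stages a 256-element tile of the input in on-chip shared
    // memory, scales it, and writes the result back to global memory.
    __global__ void scale_with_shared(const float *in, float *out, int n, float k)
    {
        __shared__ float tile[256];                  // per-block shared memory
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        if (i < n)
            tile[threadIdx.x] = in[i];               // load: global -> shared
        __syncthreads();                             // wait for the whole tile

        if (i < n)
            out[i] = k * tile[threadIdx.x];          // compute and store back to global
    }

    // Example launch with 256 threads per block:
    //   scale_with_shared<<<(n + 255) / 256, 256>>>(d_in, d_out, n, 2.0f);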

Huang showed a number of real-world examples of heterogeneous computing. The range of applications accelerated by GPU computing extends from oil and gas exploration, to interactive ray tracing, to the simulation of realistic-looking directed fire for movie magic.

Jen-Hsun Huang discussing 2015 projections

Looking six years into the future, Huang believes that GPU Compute can offer 570 times the capability we have today, while pure CPU performance growth might offer only three times the performance. With this orders-of-magnitude increase in computation, such goals as real-time universal language translation and advanced forms of augmented reality will be possible.
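As a back-of-the-envelope check (my arithmetic, not Huang's), 570 times over six years works out to roughly 2.9 times per year, since 570^(1/6) ≈ 2.88, while 3 times over the same span is only about 1.2 times per year (3^(1/6) ≈ 1.20).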


During the Q&A at the end of the speech, Professor David Patterson of U.C. Berkeley asked whether, if Huang had it to do over, he would still partition the CPU and GPU into separate chips. The answer he gave was that there are three constituencies: programmers, OEMs/ODMs, and chip designers, and each has differing requirements that make it difficult to bet on integrating new and very rapidly developing architectures into one device. By separating these functions, each can develop at its own pace, which also provides the flexibility to address many market opportunities. Of course, Huang offered that the GPU is evolving much faster than any other chip architecture.