My first blog post for NVIDIA was on our CEO Jen-Hsun Huang’s keynote address at Hot Chips 21. This second post is an introduction to our GPU Technology Conference, or GTC. Over the rest of this week, I will offer readers a big-picture view of what’s happening in GPU computing and point to the most relevant content at GTC.
To put my posts in context, let me offer some highlights from my background. Before joining NVIDIA in 2006, I was editor-in-chief of Microprocessor Report, a technically oriented research service, where I spent six years covering the microprocessor business. Before that, I spent a decade at AMD. My background is in electrical engineering, with a focus on microprocessors, configurable logic, and graphics.
I came to GPU computing with a fair amount of skepticism, but I have increasingly come to realize that many of the most important and interesting computing workloads, both now and in the future, will benefit greatly from the GPU. Getting to the next level of computer interactivity and workload parallelism will take more than just a few additional CPU cores every two years. By embracing the many hundreds of cores available in the GPU, we can accelerate that future.
I, for one, look forward to the day when we can interact with computers just as we have seen in science fiction: through speech, sight, and gesture recognition. To accomplish those goals and tackle massively complex problems and simulations, GPU computing brings a new tool that is far more powerful and compact than anything we have had before.
But as with any disruptive new technology, the developer community needs to learn how best to harness it. That is why NVIDIA is sponsoring the GPU Technology Conference: to spur the development of this highly parallel future.
Related Link: Check out previous GTC Sessions at GTC On-Demand