More Than Just Talk: Get Hands-On Coding Experience at GPU Tech Conference

by Brian Caulfield

Our GPU Technology Conference (GTC) in San Jose, Calif., next week has something for everyone: game developers, scientists, software engineers, automotive designers, and movie-makers. Here’s a sneak peek at one of the conference’s highlights – be sure to drop by our blog next week for complete coverage.

Mad scientist? Would-be James Bond villain? Or do you just have a lot of research you need to get done in a hurry? Whatever your project, GPUs can help you do more compute work, using less power.

This year’s GTC will offer more than just great conversations. Thanks to the cloud, we’re also planning to give more attendees than ever the opportunity to sit down and crank out some code.

For the first time ever, NVIDIA will offer attendees the ability to use their own laptops to tap into a powerful CUDA Cloud Development Platform hosted on Amazon Web Services (AWS).

If you’re not a software developer, the story here is that GPUs are being used to solve a wider array of general computing problems than ever before. And their power is becoming more widely available thanks to offerings such as AWS and our own NVIDIA GRID technology.

If you are a developer, it means you can take advantage of powerful GPU-accelerated systems in the cloud to develop and deploy your applications, explains Will Ramey, our product manager for GPU Computing.

Ramey hopes to attract nearly a third of the conference’s more than 3,000 attendees to one of the 17 hands-on lab sessions scheduled over three days, each running on Amazon’s Cluster GPU Instances, which pair two powerful NVIDIA Tesla GPUs with 22GB of memory in each system. Many of the sessions are already sold out, so reserve your seat today.

Three highlights:

  • Using Python to Speed-up Applications with GPUs in the Cloud or on the Desktop: Python is a powerful and widely used programming language, and new tools are bringing the power of GPU computing into its ecosystem. Travis Oliphant and Siu Kwan Lam of Continuum Analytics will walk users through examples of how to use native Python code to exploit the power of GPUs.
  • SSOR Solver Using cuSPARSE and cuBLAS & Building a High-performance Drop-in BLAS Library on the GPU to Accelerate Existing Applications: Three engineers from NVIDIA will teach you how to accelerate advanced math operations such as vector and matrix multiplication using the free cuBLAS library – based on the widely used Basic Linear Algebra Subprograms (BLAS) standard – in back-to-back sessions Wednesday afternoon.
  • NVIDIA Performance Primitives (NPP) for Image Processing: Image and signal processing are two of the broad sets of tasks where GPUs excel. NVIDIA’s Yang Song will show you how to use the free GPU-accelerated NVIDIA Performance Primitives (NPP) library to automate the contrast adjustment of an image.
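To give a flavor of what “drop-in BLAS” means: cuBLAS implements the same standard BLAS routines that CPU libraries (and NumPy, under the hood) already provide, just running on the GPU. The sketch below is not from any session – it simply shows, on the CPU with NumPy, the kind of level-2 BLAS operation (a matrix-vector product, GEMV) that cuBLAS accelerates.

```python
import numpy as np

def gemv(A, x):
    """Matrix-vector product y = A @ x -- the BLAS level-2 GEMV
    operation. On the CPU, NumPy dispatches this to a BLAS library;
    cuBLAS provides the same routine running on the GPU."""
    return A @ x

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 1.0])
print(gemv(A, x))  # -> [3. 7.]
```

Because the routines share the BLAS interface, accelerating an existing application can be as simple as relinking against the GPU library instead of the CPU one – which is exactly the “drop-in” idea the Wednesday sessions explore.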
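As for the NPP session’s example task, the underlying math of a basic contrast adjustment is a linear stretch of pixel intensities. The NumPy sketch below is a CPU-side illustration of that idea only – NPP’s actual GPU API and function names differ, and this is not code from the session.

```python
import numpy as np

def stretch_contrast(img):
    """Linear contrast stretch: rescale pixel intensities so the
    darkest pixel maps to 0 and the brightest maps to 255.
    This is the basic operation an NPP-based pipeline would
    perform on the GPU; here it runs on the CPU via NumPy."""
    img = np.asarray(img, dtype=np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

# A dim 2x2 "image" becomes full-range after stretching.
dim = np.array([[10, 10],
                [10, 110]])
print(stretch_contrast(dim))
```

On a real image, each pixel is an independent arithmetic operation on a large array – precisely the data-parallel pattern where a GPU library like NPP shines.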