by Roy Kim

With a $6.9 billion annual budget, the National Science Foundation (NSF) is the top U.S. government agency promoting breakthroughs in science and engineering by sponsoring programs in research and education.

And now scientists and researchers in the NSF community are turning to GPU computing and the OpenACC directives-based programming model to help advance this mission.

Recognizing the growing interest and demand from NSF researchers for education on GPU computing, leading centers in NSF’s Extreme Science and Engineering Discovery Environment (XSEDE) program are working together to host a free two-day, hands-on workshop on accelerating scientific applications on GPUs using OpenACC, a programming standard for parallel computing developed by CAPS, Cray, The Portland Group (PGI), and NVIDIA.

Held on Oct. 16 and 17 at the Pittsburgh Supercomputing Center’s (PSC) facility, the OpenACC GPU Programming Workshop will be broadcast live to nine other universities and research centers across the U.S. via high-definition videoconferencing technology for a fully interactive experience for all participants, including:

The workshop is open to university researchers across the U.S. Attendees can register by clicking on the link to any of the above institutions, or by visiting the Pittsburgh Supercomputing Center website.

Demand for OpenACC training “overwhelming”
OpenACC is quickly growing in popularity with scientists and researchers because it is an open, portable programming standard that makes developing on GPUs easier than ever before.

NSF Keeneland cluster at Georgia Tech, powered by NVIDIA Tesla GPU Accelerators

In fact, researchers around the world are reporting dramatic speedups in their scientific codes – as much as 5x, 7x and greater – within a matter of hours or days. A few examples can be found on the NVIDIA website here.

With OpenACC, researchers don’t need deep knowledge of parallel accelerator programming to achieve much higher performance from their code. They insert simple compiler hints, or “directives,” into their existing Fortran or C code, directing the compiler to offload compute-intensive portions of the application to the GPU. The compiler does the heavy lifting of exploiting the GPU’s performance, freeing scientists to focus on their research.

Pittsburgh Supercomputing Center’s John Urbanic, parallel computing specialist, is spearheading this national effort after a successful OpenACC workshop at PSC in April 2012. So successful, in fact, that he had to turn registrants away. Of those who did attend, many were able to improve and accelerate their own code within the two days of the workshop.

“Demand is clearly overwhelming for OpenACC. For the October workshop, we had over 20 universities requesting to participate, but had to reduce the list down to 10 because of limited resources,” said Urbanic.

Experience GPU computing on the Keeneland supercomputer
Attendees are encouraged to bring their own codes to the workshop. For the lab sessions, they’ll have a unique opportunity to experience the power of GPU computing on Keeneland, NSF’s premier GPU-based supercomputer cluster at Georgia Tech and one of the most powerful systems in the world.

There are also other ways to experience GPU computing and OpenACC programming. If you have a CUDA-capable GPU handy and would like to try OpenACC for yourself, get a free 30-day trial license for the PGI Accelerator OpenACC compiler here.

And, be sure to let us know how you are using OpenACC.

For more on OpenACC and GPU computing, follow @NVIDIATesla.