by Sumit Gupta

CUDA has been getting some serious parallel-computing coverage as of late, with Chinese scientists running the world’s fastest simulation on Tianhe-1A and Microsoft announcing support for GPU computing in its mainstream Visual Studio developer tools.

Today, we have more CUDA-related news from the Portland Group (PGI), which announced that it is releasing new CUDA compilers targeting x86 CPUs.

This means that developers at ISVs and end customers can use the CUDA C and CUDA C++ toolkits to write applications that target workstations and servers running CPUs only, or a combination of CPUs and NVIDIA GPUs. Developers get a unified codebase for CPUs and GPUs, along with an elegant programming model for multi-core CPUs on Linux, MacOS and Windows.

Back in 2007, we realized that moving just 5 percent of an application’s most compute-intensive code to the CUDA parallel programming model could deliver significant performance gains. To leverage the GPU’s massively parallel processors, these developers:

  1. Organized their data to make it more data-parallel
  2. Used algorithms that could make use of hundreds of cores
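Both steps show up in even the simplest CUDA C kernel. The SAXPY sketch below is a standard illustrative example (not code from the PGI announcement): data sits in flat, contiguous arrays so each lightweight thread handles exactly one element, and the launch spawns enough threads to occupy hundreds of cores. Per PGI’s announcement, this same source could also be compiled for a multi-core x86 CPU.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Step 1: data-parallel layout -- one array element per thread.
// Step 2: the grid launches enough threads to keep hundreds of cores busy.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side data in plain contiguous arrays.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy to device memory.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f\n", hy[0]);  // 2.0 * 1.0 + 2.0 = 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The key property is that nothing in the kernel assumes a GPU: it is just an index-to-element mapping, which is why the same code can be retargeted to CPU threads and vector units.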

PGI’s release of CUDA x86 compilers lets developers protect their investment in parallelizing their applications by using the same code for both CPUs and GPUs.

This makes NVIDIA CUDA GPUs the only platform that supports all GPU computing programming models, APIs, and languages – including CUDA C/C++/Fortran, OpenCL, DirectCompute, and the recently announced Microsoft C++ AMP.

CUDA has come a long way in just four years – it has empowered more than 100,000 developers, and it is now available for both CPUs and GPUs!