What Is CUDA?

by Mark Ebersole

You may not realize it, but GPUs are good for more than video games and scientific research. In fact, there’s a good chance your daily life is already being affected by GPU computing.

Mobile applications rely on GPUs running on servers in the cloud. Stores use GPUs to analyze retail and web data. Websites use GPUs to place ads more accurately. Engineers rely on them in computer-aided engineering applications. Accelerated computing using GPUs continues to expand.

GPUs also play a key role in modern AI: the NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks.

GPU computing is no longer something just for the high-performance computing (HPC) community. The benefits of CUDA are moving mainstream.

So, What Is CUDA?

Even with this broad and expanding interest, as I travel across the United States educating researchers and students about the benefits of GPU acceleration, I routinely get asked the question “what is CUDA?”

Most people mistake CUDA for a language or maybe an API. It is not.

It’s more than that. CUDA is a parallel computing platform and programming model that makes using a GPU for general-purpose computing simple and elegant. The developer still programs in the familiar C, C++, Fortran, or an ever-expanding list of supported languages, and incorporates extensions of these languages in the form of a few basic keywords.

These keywords let the developer express massive amounts of parallelism and direct the compiler to the portion of the application that maps to the GPU.

A simple code example is shown below, written first in plain C and then in C with CUDA extensions.

[Figure: sample code, standard C code on the left, C with CUDA extensions on the right.]
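Since the original figure is not reproduced here, the sketch below illustrates the same idea using SAXPY (y = a*x + y), a standard example for this comparison. The serial loop is ordinary C; the CUDA version adds just a few keywords: `__global__` marks a function (a "kernel") that runs on the GPU, and the `<<<blocks, threads>>>` syntax launches it across many threads at once.

```cuda
#include <stdio.h>

// Plain C: a serial loop computes y = a*x + y one element at a time.
void saxpy_serial(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// C with CUDA extensions: __global__ marks a GPU kernel. The loop
// disappears; instead, each GPU thread computes one element, using
// its thread and block indices to find which one.
__global__ void saxpy_parallel(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    float *x, *y;

    // Managed memory is accessible from both the CPU and the GPU.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy_parallel<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // 2*1 + 2 = 4

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Note how little changes between the two versions: the arithmetic is identical, and the keywords simply tell the compiler which portion of the application maps to the GPU and how much parallelism to expose.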

More CUDA Resources

Learning how to program using the CUDA parallel programming model is easy. We have webinars and self-study exercises at the CUDA Developer Zone website.

In addition to toolkits for C, C++, and Fortran, there are many GPU-optimized libraries, as well as other programming approaches such as the OpenACC directive-based compilers.
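With the directive-based approach, you annotate ordinary loops with pragmas instead of writing kernels yourself. A minimal sketch of SAXPY with OpenACC is below; a compiler with OpenACC support parallelizes the loop on the GPU, while any other C compiler simply ignores the pragma and runs it serially, so the code stays portable.

```c
#include <stddef.h>

/* SAXPY (y = a*x + y) with an OpenACC directive. An OpenACC-aware
 * compiler offloads and parallelizes this loop on the GPU; other
 * compilers ignore the pragma and run the loop serially. */
void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    #pragma acc parallel loop
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

This is often the gentlest way to start with GPU acceleration: the source stays plain C, and the directives can be added incrementally, loop by loop.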

Check it out. And be sure to let us know how you’re using CUDA to advance your work.
