Call it the heart of the heart of the SC18 supercomputing show.
Possibly the world’s best-educated graffiti wall, the whiteboard-sized graphic tracks the dizzying growth of the NVIDIA Developer Program — from a single soul to its current 1.1 million members. In a rainbow of colors and a multiplicity of handwriting styles, developers are penning notes describing their own contributions, each placed under the year the work occurred.
Beside an illuminated line tracking the number of developers each year, towers of note cards are rising, growing by the hour as more individuals join in the project. The work of computing legends sits beside that of anonymous engineers.
2008 begins with “World’s First GPU-accelerated Supercomputer in Top500: Tsubame 1.2.” Midway above the 2010 stack is the first “GPU-accelerated Molecular Simulation” by Erik Lindahl, the Stockholm University biologist. 2012 features “AlexNet Wins ImageNet” by Alex Krizhevsky, considered a defining moment in ushering in the era of artificial intelligence.
“It’s a crowdsourced celebration of the GPU developer ecosystem,” said Greg Estes, vice president of developer programs and corporate marketing at NVIDIA.
The living yearbook — which after the show will take pride of place on NVIDIA’s Santa Clara campus — depicts the story of the growth of accelerated computing, propelled less by silicon and more by individual imagination and dazzling coding skills.
It embraces work that’s been awarded Nobel Prizes in physics and chemistry. It’s made possible the world’s fastest supercomputers, which are driving groundbreaking research in fields as far-flung as particle physics and planetary science. And it’s opened the door to video games so realistic that they begin to blur with movies — and to movies with effects so mind-blowing they push to the far edges of human imagination.
The NVIDIA Developer Program, which recently pushed above 1.1 million individuals, continues to grow steadily because of the emergence of AI, as well as continued growth in robotics, game development, data science and ray tracing.
Developers who sign up for the free program receive access to more than 100 SDKs, performance analysis tools, member discounts and hands-on training in CUDA, OpenACC, AI and machine learning through the NVIDIA Deep Learning Institute. It’s a package of tools and offerings that simplifies the task of tapping into the incredible processing power and efficiency of GPUs, the performance of which has increased many times over in just a few generations.
The timeline begins with hoary entries that, by the standards of accelerated computing, seem modest and almost accidental. But they laid the foundation for work to come, including monumental achievements that have changed the shape of science.
Among them is the 2002 milestone when NVIDIA’s Mark Harris coined the term GPGPU — general-purpose computation on graphics processing units. This inspired developers to find ways to use GPUs for compute functions beyond their traditional focus on graphics.
Two years later, NVIDIA’s Ian Buck and a team of researchers introduced Brook for GPUs, the first GPGPU programming language. And two years after that, in 2006, we launched CUDA, our accelerated parallel computing architecture, which has since been downloaded more than 12 million times.
The board’s very first entry, though, far precedes Harris’s and Buck’s work, and was even more foundational. “Just The First,” it reads, signed by Jensen Huang and dated 1993, the year of NVIDIA’s founding.