Researchers Win Third Annual CUDA Achievement Award; Three New CUDA Fellows Named

by Chandra Cheij

Researchers from the University of Illinois at Urbana-Champaign won the Third Annual Achievement Award for CUDA Centers of Excellence for their research on fighting HIV with CUDA.

The team was one of four groups of researchers from CUDA Centers of Excellence, which include some of the world’s top universities, engaged in cutting-edge work with CUDA and GPU computing.

Each of the world’s 22 CUDA Centers was asked to submit an abstract describing its top achievement in GPU computing over the past year.

John Stone, James Phillips, Wen-mei Hwu and Kimberly Powell.

A panel of experts then selected four CUDA Centers to present their achievements at a special event during our annual GPU Technology Conference (GTC) this week in San Jose, Calif. Their peers at other CUDA Centers voted for their favorite.

All four finalists will receive an NVIDIA Tegra K1 DevKit, built around the Tegra K1 system-on-a-chip and designed to unleash GPU computing power for embedded applications. It is based on the same Kepler computing core that powers some of the world’s fastest supercomputers.

The overall winner will also receive a GeForce GTX Titan Z dual-GPU graphics card.

The four CCOE finalists who presented were:

Shanghai Jiao Tong University, James Lin for their CUDA Education & Evangelism

SJTU is one of the top five universities in China and has been an NVIDIA CCOE since 2011. It was selected as a finalist for its CUDA education and evangelism efforts, which include: organizing the SJTU HPC seminar since 2009 and teaching CUDA to more than 3,000 students and professors; building π, the largest and fastest Kepler-based supercomputer in China; providing the largest GPU Test Drive in China; hosting the finals of ASC13, the first Asia Student Cluster Contest and, alongside those at ISC and SC, one of the biggest such contests, with 10 teams from six Asian countries attending; and, in May 2014, hosting the HPC leaders’ summit at ISC 2014.

Tokyo Tech, Satoshi Matsuoka for their work on TSUBAME-KFC
TSUBAME-KFC, a prototype for future power-efficient supercomputers on the path to exascale, was designed and built by GSIC at the Tokyo Institute of Technology along with partners including NVIDIA. It was ranked the No. 1 supercomputer on the November 2013 editions of both the Green500 and Green Graph 500 lists, marking the first time a single supercomputer has been crowned No. 1 for both compute-intensive and data-intensive applications.

University of Illinois at Urbana-Champaign, Wen-mei Hwu, James Phillips and John Stone for their work on Fighting HIV with CUDA

The first scientific breakthrough achieved with the Blue Waters supercomputer at the University of Illinois was the determination of the structure of the complete HIV capsid in atomic-level detail, a collaborative effort of experimental groups at the University of Pittsburgh and Vanderbilt University and the NIH Center for Macromolecular Modeling and Bioinformatics, led by Prof. Klaus Schulten at the University of Illinois. The breakthrough was enabled by the NIH Center’s popular and freely available programs NAMD and VMD, both of which incorporate CUDA technology to enable and accelerate the computationally intensive large-scale biomolecular modeling, simulation, and analysis required to perform the 64-million-atom HIV capsid simulation. The process through which the capsid disassembles, releasing its genetic material, is a critical step in HIV infection and a potential target for antiviral drugs. The work was featured on the cover of Nature and recognized with an HPCwire Editors’ Choice Award for “Best use of HPC in life sciences” at SC13.

University of Tennessee, Knoxville, Stan Tomov for their work on Breakthroughs in Sparse Solvers

Over the last year, UTK developed breakthrough CUDA-based technologies in sparse solvers for GPUs. Sparse linear algebra computations are a fundamental building block for many scientific computing applications, ranging from national security to medical advances, highlighting their importance and potential for broad impact. The new developments harness the team’s expertise in dense linear algebra (DLA), namely the MAGMA libraries, which provide LAPACK for GPUs and auto-tuned BLAS, to develop high-performance sparse solvers and building blocks for sparse computations in general.

Three New CUDA Fellows Announced

As part of a week full of GPU-goodness at GTC 2014, it’s fitting that the CUDA Fellow Program welcomes three new research and academic leaders to the family of CUDA architecture and parallel computing experts.  These CUDA Fellows have demonstrated the benefits of GPU computing to advance their fields of research and have been instrumental in introducing GPU computing to their peers.

The newest CUDA Fellows are:

Alan Gray is a Research Architect at EPCC, the supercomputing center at The University of Edinburgh. Alan’s research career began in theoretical physics: his Ph.D. thesis was awarded the UK-wide Ogden Prize in 2004 for the best thesis in particle physics phenomenology. He continued this work, which involved exploiting supercomputers using lattice QCD methods to calculate quantities important to our fundamental understanding of matter, under a University Fellowship at The Ohio State University, before moving to EPCC in 2005. His current research focuses on exploiting GPUs for the benefit of real scientific and industrial applications; he has a particular interest in programming large-scale GPU-accelerated supercomputers. He developed the Ludwig soft matter physics application, which can simulate a wide range of complex fluids of key importance to our everyday lives, so that it can efficiently exploit many thousands of GPUs in parallel to tackle the most complex problems. Alan is involved in a wide range of other GPU-related activities and provides GPU training courses.

Bormin Huang is a Research Scientist at the Space Science and Engineering Center at the University of Wisconsin-Madison, an Honorary International Chair Professor at the National Taipei University of Technology (Taiwan), a Guest Professor at the University of Las Palmas de Gran Canaria (Spain), an Adjunct Professor at the Harbin Institute of Technology (China), and a Chair Professor at Xidian University (China), among other appointments. He is a Fellow of SPIE, the international society for optics and photonics. Dr. Huang has been the Lead Chair for the SPIE Conference on Satellite Data Compression, Communications, and Processing since 2005; for the SPIE Europe Conference on High Performance Computing in Remote Sensing since 2011; and for the IEEE International Workshop on Parallel and Distributed Computing in Remote Sensing, held in conjunction with the IEEE International Conference on Parallel and Distributed Systems, since 2011. He serves as an Associate Editor, in areas including high-performance computing, for the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing and for the Journal of Applied Remote Sensing. He also serves as a Guest Editor for the Journal of Applied Remote Sensing’s Special Section on High Performance Computing in Remote Sensing.

Rich Brower is a Professor of Physics and Computer Engineering at Boston University. His research combines methods from theoretical physics and computational science applied to string theory, lattice quantum chromodynamics (QCD), quantum field theory for the Higgs, and the statistical mechanics of graphene. He began his work on algorithms with data-parallel methods on the Connection Machine and has continued through each architectural advance to today’s spectacular development of heterogeneous, extreme-scale GPU architectures. His algorithmic research includes efficient multigrid Dirac solvers, parallel connected components of random graphs, chronological inverters for Monte Carlo evolution, and cluster algorithms for fermionic systems. He serves on the USQCD Executive Committee and as the National Software Director for the DOE SciDAC software infrastructure project in lattice field theory.