NVIDIA Awards $50,000 Fellowships to Ph.D. Students for GPU Computing Research

Now in its 21st year, the NVIDIA Graduate Fellowship Program has awarded $6 million to nearly 200 students, supporting their work spanning machine learning, computer vision, robotics and programming systems.
by Sylvia Chanak

For more than two decades, NVIDIA has supported graduate students pursuing GPU-based research through the NVIDIA Graduate Fellowship Program. Today we’re announcing the latest awards of up to $50,000 each to 10 Ph.D. students involved in GPU computing research.

Selected from a highly competitive applicant pool, the awardees will participate in a summer internship preceding the fellowship year. The work they’re doing puts them at the forefront of GPU computing, with fellows tackling projects in deep learning, robotics, computer vision, computer graphics, architecture, circuits, high performance computing, life sciences and programming systems.

“Our fellowship recipients are among the most talented graduate students in the world,” said NVIDIA Chief Scientist Bill Dally. “They’re working on some of the most important problems in computer science, and we’re delighted to support their research.”

The NVIDIA Graduate Fellowship Program is open to applicants worldwide.

Our 2022-2023 fellowship recipients are:

  • Davis Rempe, Stanford University — Modeling 3D motion to solve pose estimation, shape reconstruction and motion forecasting, which enables intelligent systems that understand dynamic 3D objects, humans and scenes.
  • Hao Chen, University of Texas at Austin — Developing next-generation VLSI physical synthesis tools capable of generating sign-off quality layouts in advanced manufacturing nodes, particularly in analog/mixed-signal circuits.
  • Mohit Shridhar, University of Washington — Connecting language to perception and action for vision-based robotics, where representations of vision and language are learned through embodied interactions rather than from static datasets.
  • Sai Praveen Bangaru, Massachusetts Institute of Technology — Developing algorithms and compilers for the systematic differentiation of numerical integrators, allowing them to mix seamlessly with machine learning components.
  • Shlomi Steinberg, University of California, Santa Barbara — Developing models and computational tools for physical light transport — the computational discipline that studies the simulation of partially coherent light in complex environments.
  • Sneha Goenka, Stanford University — Exploring genomic analysis pipelines through hardware-software co-design to enable the ultra-rapid diagnosis of genetic diseases and accelerate large-scale comparative genomic analysis.
  • Yufei Ye, Carnegie Mellon University — Building agents that can perceive physical interactions among objects, understand the consequences of interactions with the physical world, and even predict the potential effects of specific interactions.
  • Yuke Wang, University of California, Santa Barbara — Exploring novel algorithm- and system-level designs and optimizations to accelerate diverse deep-learning workloads, including deep neural networks and graph neural networks.
  • Yuntian Deng, Harvard University — Developing scalable, controllable and interpretable natural language generation approaches using deep generative models with potential applications in long-form text generation.
  • Zekun Hao, Cornell University — Developing algorithms that learn from real-world visual data and apply that knowledge to help human creators build photorealistic 3D worlds.

We also acknowledge the 2022-2023 fellowship finalists:

  • Enze Xie, University of Hong Kong
  • Gokul Swamy, Carnegie Mellon University
  • Hong-Xing (Koven) Yu, Stanford University
  • Suyeon Choi, Stanford University
  • Yash Sharma, University of Tübingen