As an aspiring space adventurer, it was a bittersweet moment for me today as the space shuttle Atlantis landed at the Kennedy Space Center after its final mission. While Atlantis’ landing marks the end of NASA’s space shuttle program, I know it doesn’t mean the end of space research. In fact, there’s a new NVIDIA GPU computing cluster at Drexel University that will help continue our quest for a better understanding of the universe.

Rather than flying to space to further our celestial knowledge, Steve McMillan and his team of scientists at Drexel University are using a GPU cluster to run astronomical simulations of stellar systems right here on Earth. Funded by an NSF MRI grant, the 144-GPU machine, called DRACO (named after the Drexel Dragons), boasts an impressive 176 TFLOPS of peak performance. McMillan and his team are running N-body and other simulations to study the formation and evolution of black holes, galactic nuclei and compact star clusters.
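
For readers curious what this kind of computation looks like on a GPU, here is a minimal CUDA sketch of a direct-summation (all-pairs) gravitational acceleration kernel with Plummer softening, the basic building block of many N-body codes. It is purely illustrative, not the Drexel group’s code; the toy initial conditions, softening value and function names are invented for the example.

```cuda
// nbody_accel.cu -- illustrative direct-summation N-body kernel (not Drexel's code).
// Each thread accumulates the gravitational acceleration on one particle from
// all N others, using Plummer softening (eps2) to avoid the r -> 0 singularity.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void accel_kernel(int n, const float4 *pos,   // pos.w holds the mass
                             float3 *acc, float eps2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float3 ai = make_float3(0.0f, 0.0f, 0.0f);

    for (int j = 0; j < n; ++j) {                // the j == i term contributes zero
        float4 pj = pos[j];
        float dx = pj.x - pi.x;
        float dy = pj.y - pi.y;
        float dz = pj.z - pi.z;
        float r2 = dx * dx + dy * dy + dz * dz + eps2;
        float inv_r = rsqrtf(r2);
        float s = pj.w * inv_r * inv_r * inv_r;  // G = 1 units
        ai.x += dx * s;
        ai.y += dy * s;
        ai.z += dz * s;
    }
    acc[i] = ai;
}

int main()
{
    const int n = 4096;
    float4 *pos;
    float3 *acc;
    cudaMallocManaged(&pos, n * sizeof(float4));
    cudaMallocManaged(&acc, n * sizeof(float3));

    // Toy initial conditions: unit-mass particles on a 16 x 16 x 16 grid.
    for (int i = 0; i < n; ++i)
        pos[i] = make_float4((float)(i % 16), (float)((i / 16) % 16),
                             (float)(i / 256), 1.0f);

    accel_kernel<<<(n + 255) / 256, 256>>>(n, pos, acc, 1e-4f);
    cudaDeviceSynchronize();

    printf("acc[0] = (%g, %g, %g)\n", acc[0].x, acc[0].y, acc[0].z);
    cudaFree(pos);
    cudaFree(acc);
    return 0;
}
```

Production codes layer shared-memory tiling, higher-order integrators and tree or neighbor schemes on top, but the O(N²) inner loop above is exactly the arithmetic-heavy work where GPUs shine.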

Drexel’s DRACO: By the numbers
No. of GPUs: 144
No. of nodes: 24
CPUs per node: two 6-core CPUs
GPUs per node: 6
Peak performance: 176 TFLOPS
Power consumption: 45 kW
Cost to deploy: $400,000
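
Back-of-the-envelope, those figures work out to roughly 1.2 TFLOPS per GPU (about 7.3 TFLOPS per node), close to 4 GFLOPS per watt at the quoted 45 kW, and a little under $2,300 per peak TFLOPS.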

GPUs helped Drexel researchers tackle one of their biggest computing challenges: scaling cluster performance efficiently and economically. After a decade of relying on GRAPE (GRAvity PipE, special-purpose accelerators designed for gravitational simulations), the transition to GPU computing was a natural evolution for the researchers. CUDA and straightforward GPU programming tools made it painless to port their code libraries from GRAPE to the GPU in six months. GPUs also let the researchers scale performance within a limited power and space budget.
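
To illustrate the pattern behind such a port (a hypothetical sketch; the function and kernel names are invented and this is not the GRAPE, Sapporo or Drexel API), the host routine below keeps the familiar accelerator-library shape of copying particles in, running the pipeline and copying results out, but is backed by a CUDA kernel instead of GRAPE hardware:

```cuda
// grape_style_port.cu -- hypothetical sketch of re-targeting a GRAPE-style
// library call to a GPU.  calc_potential_gpu() and potential_kernel() are
// invented names for illustration; they are not the GRAPE or Sapporo API.
#include <cstdio>
#include <cuda_runtime.h>

// Per-particle gravitational potential (G = 1), softened with eps2.
__global__ void potential_kernel(int n, const float4 *pos,   // pos.w holds the mass
                                 float *phi, float eps2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos[i];
    float p = 0.0f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pos[j].x - pi.x;
        float dy = pos[j].y - pi.y;
        float dz = pos[j].z - pi.z;
        p -= pos[j].w * rsqrtf(dx * dx + dy * dy + dz * dz + eps2);
    }
    phi[i] = p;
}

// Host wrapper with the same shape as an accelerator-library call:
// copy particles to the device, launch the pipeline, copy results back.
void calc_potential_gpu(int n, const float4 *pos_host, float *phi_host, float eps2)
{
    float4 *pos_dev;
    float *phi_dev;
    cudaMalloc((void **)&pos_dev, n * sizeof(float4));
    cudaMalloc((void **)&phi_dev, n * sizeof(float));
    cudaMemcpy(pos_dev, pos_host, n * sizeof(float4), cudaMemcpyHostToDevice);
    potential_kernel<<<(n + 255) / 256, 256>>>(n, pos_dev, phi_dev, eps2);
    cudaMemcpy(phi_host, phi_dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(pos_dev);
    cudaFree(phi_dev);
}

int main()
{
    const int n = 1024;
    float4 *pos = new float4[n];
    float *phi = new float[n];
    for (int i = 0; i < n; ++i)   // unit-mass particles on an 8 x 8 x 16 grid
        pos[i] = make_float4((float)(i % 8), (float)((i / 8) % 8),
                             (float)(i / 64), 1.0f);
    calc_potential_gpu(n, pos, phi, 1e-4f);
    printf("phi[0] = %g\n", phi[0]);
    delete[] pos;
    delete[] phi;
    return 0;
}
```

A production port would reuse device buffers across calls and overlap transfers with compute, but the calling code barely has to change.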

In two years, the Drexel researchers progressed from a 16-GPU setup to a full-fledged 144-GPU research cluster.

DRACO is not just helping researchers simulate massive stars and galaxies. Cameron Abrams, a molecular dynamics (MD) researcher at Drexel, and his team are also simulating the minute molecular machinery of human cells. Abrams leverages readily available MD software such as NAMD and AMBER, as well as in-house MD codes built on CUDA, for large-scale molecular simulation in protein and polymer science. These MD simulations give researchers a detailed, atomic-level understanding of the mechanisms of protein receptor specificity and activation. They also enable the team to establish pathways by which certain diseases develop and aid in designing molecular therapeutics.
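
As a taste of what a small in-house CUDA MD kernel can look like (a hypothetical sketch, not Abrams’ code, and far simpler than what NAMD or AMBER actually do), here is a per-particle Lennard-Jones energy calculation for a toy system in reduced units, with no cutoffs, neighbor lists or periodic boundaries:

```cuda
// lj_energy.cu -- hypothetical, minimal MD-style kernel: per-particle
// Lennard-Jones energy for a toy system (sigma = epsilon = 1, no cutoff,
// no cell lists, no periodic boundaries).  Illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void lj_energy_kernel(int n, const float3 *pos, float *energy)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 pi = pos[i];
    float e = 0.0f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pos[j].x - pi.x;
        float dy = pos[j].y - pi.y;
        float dz = pos[j].z - pi.z;
        float r2 = dx * dx + dy * dy + dz * dz;
        float inv_r6 = 1.0f / (r2 * r2 * r2);         // (sigma / r)^6 with sigma = 1
        e += 4.0f * (inv_r6 * inv_r6 - inv_r6);       // epsilon = 1
    }
    energy[i] = 0.5f * e;                             // halve to avoid double counting
}

int main()
{
    const int n = 2048;
    float3 *pos;
    float *energy;
    cudaMallocManaged(&pos, n * sizeof(float3));
    cudaMallocManaged(&energy, n * sizeof(float));

    // Toy configuration: particles on a cubic lattice with 1.2 sigma spacing.
    for (int i = 0; i < n; ++i)
        pos[i] = make_float3(1.2f * (i % 16), 1.2f * ((i / 16) % 16),
                             1.2f * (i / 256));

    lj_energy_kernel<<<(n + 255) / 256, 256>>>(n, pos, energy);
    cudaDeviceSynchronize();

    float total = 0.0f;
    for (int i = 0; i < n; ++i) total += energy[i];
    printf("total LJ energy = %g\n", total);

    cudaFree(pos);
    cudaFree(energy);
    return 0;
}
```

Real protein simulations add bonded terms, electrostatics, constraints and thermostats on top, but the pairwise inner loop is the same kind of arithmetic-heavy work DRACO’s GPUs are built for.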

Of course, Drexel University has another reason to celebrate. Chris Ferguson, commander of the final NASA space shuttle mission and a Drexel alumnus, is back home safe. Go Dragons!

Finally, a shout-out to our partners, Advanced HPC and Bright Computing, for helping Drexel build and manage DRACO.

 

  • Patrick Trotter

    I would love to see what a machine like that looks like.

  • Kex Xey

    Pictures!

  • Devang Sachdev

    Hi Patrick and Kex, thanks for reading the blog. No pictures yet, but the system is just 3 racks! Really small physical footprint for all that compute performance.

  • David Dresden

    They could have built their cluster with 5.0 GHz Sandy Bridge systems:
    http://www.liquidnitrogenoverclocking.com/trinity_plutonium_i.shtml