Trying to determine the origins and fate of the universe is important work with far-reaching implications. But cosmologists, unlike their counterparts in other scientific fields, can’t do hands-on experiments to test their hypotheses.
“The only way we can check our theories is through numerical simulations,” Claudio Gheller, a computational scientist at the Swiss National Supercomputing Centre, said during his presentation at the GPU Technology Conference.
“What you need is sophisticated simulation codes” running on high-performance computers, he continued. As the volume of data being analyzed and visualized has grown, however, CPUs have run into difficulties supplying sufficient processing power. This has led scientists like Gheller to wonder how GPUs might accelerate their work.
“Can these applications efficiently exploit GPUs?” Gheller asked. “And if so, what kind of performance can we get out of these codes?”
In the search for an answer, Gheller and his colleagues have been running tests to determine how GPUs impact the performance of three codes commonly used in cosmology: RAMSES, a platform for developing applications using adaptive mesh refinement; SPLOTCH, which enables visualization of cosmological simulation data; and ENZO, used to simulate the formation of cosmological structures.
The tests were performed on the center’s Cray XK7 supercomputer, whose more than 200 nodes each pair an AMD Opteron processor with an NVIDIA Tesla K20X GPU.
Results were mixed. In the SPLOTCH tests, GPUs sped up a rasterization kernel by 50x, while the GPU performance of a rendering kernel leveled off as the number of particles being visualized grew.
The takeaway: While GPUs can have a huge impact, cosmologists will have to be selective in how they use them.
Said Gheller: “The trend is to take some specific kernels and try to move them to the GPUs separate from the rest of the code.”