This week’s spotlight is on André R. Brodtkorb. André is a scientist at SINTEF in Norway, where he works on GPU acceleration and algorithm design. He presented at GTC 2010, and his research interests include GPU/heterogeneous computing, simulation of partial differential equations (PDEs), and real-time visualization.

NVIDIA: André, please tell us a bit about yourself.
André: I first started working with GPUs in 2005, when you could only use graphics APIs like OpenGL. At the University of Oslo, I wrote my master’s thesis on “A MATLAB Interface to the GPU,” which was, to my knowledge, one of the first times the GPU was used with MATLAB.

André R. Brodtkorb of SINTEF

Since then, I have been working on a lot of different applications of GPUs and parallel processing, including direct visualization and video surveillance. I recently completed my Ph.D. thesis – “Scientific Computing on Heterogeneous Architectures” – in which shallow water simulations played a central part. Shallow water simulations are extremely important in everything from tsunami warnings to simulation of storm surges and dam breaks, where processing speed is a critical factor.

Simulation of a dam break

NVIDIA: How does GPU computing currently play a role in your research?
André: At SINTEF, I work on a range of application areas with strict demands on computational speed, typically real-time or faster-than-real-time. The GPU is a key piece of the puzzle in achieving these goals. Shallow water simulations, for example, can be used both for creating emergency action plans and for real-time simulation of an ongoing event. In both cases, you want high-quality results as fast as possible. A conventional CPU-based system is often not good enough, because to meet the time constraints it typically has to sacrifice quality. Using the GPU, on the other hand, you get high-quality results faster-than-real-time, providing a far better basis for important decisions.

NVIDIA: How did you get interested in shallow water simulation?
André: Some of my colleagues worked on simulating the shallow water equations using OpenGL and GPGPU techniques six years ago. They showed that GPUs could achieve speedups of around 30X over equivalently-tuned CPU code.

In 2010 we were approached by the University of Mississippi’s National Center for Computational Hydroscience and Engineering (NCCHE), who wanted to use the GPU to accelerate simulation of real-world events. This was the perfect opportunity to revisit shallow water simulations with the latest CUDA technology. I spent three months at NCCHE as a visiting scholar, and together with Martin L. Sætra (University of Oslo) and researchers at NCCHE, we developed a full simulator. That meant going from the typical proof-of-concept implementation (which hydrologists tend to consider a toy model!) to a thoroughly validated one. We did not settle for “it looks right,” but instead verified that we were able to reproduce real-world events.

For shallow water simulations, there is always a trade-off between quality of results and computational time. Take a tsunami simulation in the Indian Ocean, for example, which covers roughly 73 million square kilometers. In areas far from the shore, you can get away with very low resolutions, but along the shore you would like 10-meter resolution or finer to resolve small-scale effects that can be very important. Today, people use huge CPU clusters to perform such simulations, but are not able to capture all the effects. With faster GPU-based systems, you can run at higher resolutions, and thus get higher-quality results, which can make a huge difference during an emergency.
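To get a feel for why resolution drives computational cost, here is a rough back-of-envelope calculation of how many grid cells a uniform grid over the roughly 73 million square kilometers would require. The specific resolutions chosen are illustrative assumptions, not values from the interview:

```python
# Rough cell counts for a uniform grid over ~73 million km^2
# (the Indian Ocean example); resolutions here are illustrative.
area_km2 = 73e6

for res_m in (1000, 100, 10):
    cells_per_km2 = (1000 / res_m) ** 2   # cells in one square kilometer
    total_cells = area_km2 * cells_per_km2
    print(f"{res_m:5d} m resolution: {total_cells:.1e} cells")
```

Going from 1 km cells to 10 m cells multiplies the cell count by 10,000 (and explicit schemes must also shrink the time step as cells shrink), which is why a uniform fine grid is infeasible and why higher throughput per cell pays off directly.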


  • Andrew Sheppard

    What do you mean by faster-than-real-time?

    Is it similar to what I do when backtesting HFT strategies, where I replay tick and level 2 data at a rate faster than that at which it was created and captured? Obviously when analyzing past data you want to process it in “faster-than-real-time” and GPUs are great for that.

    Or are you using the term in a predictive fashion? In the sense that you need to do complex calculations in a time-frame far shorter than the decision time. Obviously, a prediction is only useful if it precedes the decision point.

    I’m just trying to fully understand your blog interview.

  • André R. Brodtkorb

    Hello Andrew,

    For these simulations, the aim is (as you suggest) to predict the effect of a flood scenario as fast as possible. For CPU simulations of the shallow water equations, a real-world code can take 1-1.5 (wall clock) hours to simulate one hour of a typical flood scenario. The massively parallel processing of GPUs is (close to) perfectly suited for the type of calculations we do, and this enables us to simulate the same cases in a much smaller time frame.
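    The shallow water equations are typically solved with explicit stencil schemes, where every cell is updated independently from its immediate neighbors; that independence is what makes them map so well to massively parallel GPUs. As a rough illustration only (this is a textbook Lax–Friedrichs scheme, not the scheme used in the actual simulator), here is a minimal 1D step in NumPy:

    ```python
    import numpy as np

    g = 9.81  # gravitational acceleration

    def lax_friedrichs_step(h, hu, dx, dt):
        """One Lax-Friedrichs step for the 1D shallow water equations.

        Conserved variables: water depth h and momentum hu.
        Every cell reads only its two neighbors, so all cells can be
        updated in parallel -- the pattern that suits GPUs so well.
        """
        # Flux function for the shallow water equations
        f_h = hu
        f_hu = hu**2 / h + 0.5 * g * h**2

        # Periodic boundaries for simplicity
        def update(q, f):
            return 0.5 * (np.roll(q, 1) + np.roll(q, -1)) \
                 - 0.5 * dt / dx * (np.roll(f, -1) - np.roll(f, 1))

        return update(h, f_h), update(hu, f_hu)

    # Dam-break-like initial condition: deeper water on the left half
    n = 400
    h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
    hu = np.zeros(n)

    dx, dt = 1.0, 0.1  # dt must satisfy the CFL condition for stability
    for _ in range(100):
        h, hu = lax_friedrichs_step(h, hu, dx, dt)
    ```

    A production simulator uses a higher-order scheme and careful treatment of dry areas, but the neighbor-only data dependency is the same, and it is what lets thousands of GPU threads work concurrently on one time step.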

    For an ongoing flood event, speed is of course important to be able to predict what will happen. But it is equally important when planning for a possible flooding/dam break. As there are many different parameters that can affect the simulation results (type of dam break/flooding, water level in the reservoir, etc.), you want to simulate many versions of the same case. Using GPUs, you can simulate far more cases, giving you a better overview of the event and better data to guide decisions about which areas to evacuate, which levees to reinforce, etc.
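    A scenario sweep like the one described above is embarrassingly parallel: each parameter combination is an independent simulation. Here is a sketch of the structure, where the parameter names and the `run_scenario` function are purely hypothetical placeholders standing in for a full simulation:

    ```python
    import itertools

    # Hypothetical sweep over flood-scenario parameters; all names and
    # values here are illustrative, not from the actual simulator.
    reservoir_levels_m = [90.0, 95.0, 100.0]
    breach_widths_m = [20.0, 50.0, 100.0]

    def run_scenario(level, width):
        # Placeholder for a full simulation; returns a mock flooded
        # area so the sweep structure is runnable on its own.
        return level * width * 0.5

    # Each scenario is independent, so they can run concurrently on
    # one or more GPUs; here we just loop over all combinations.
    results = {
        (level, width): run_scenario(level, width)
        for level, width in itertools.product(reservoir_levels_m,
                                              breach_widths_m)
    }

    # The worst case guides planning: evacuation zones, levees, etc.
    worst = max(results, key=results.get)
    ```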