More and more these days, GPUs are helping institutions navigate troubled financial waters. Guest author Pierre Spatz gives his insight into how his firm, Murex, is using GPUs to analyze markets and better monitor risk.
These have been turbulent times for financial markets.
The recent crisis reduced the appetite for some “toxic” exotic products, but it didn’t spell the end of financial derivatives needed by our global economy. In fact, it signaled a greater need for computational performance in financial markets.
The current trend leans toward standardizing some first- and second-generation exotic derivatives. This results in larger volume and reduced margins, at a time when regulators are more closely scrutinizing banks and asking for better risk monitoring. It’s a situation where GPUs are the best choice amongst several alternatives, offering a performance revolution without enormously increasing hardware costs – especially at a time when budget concerns abound.
From early adoption …
Back in 2008, Murex, like most market participants, used larger and larger grids of standard servers running mostly single-threaded code to crunch complex financial calculations.
We had had some success with task parallelism, but it only made individual pricings finish more quickly; the overall throughput of the grid, the quantity that actually mattered, stayed the same. We were fast approaching the point where hardware and infrastructure costs were becoming prohibitive.
Seeking a way out of this bind, we started our first proof-of-concept project with NVIDIA GPUs. The performance gains we quickly achieved ignited a complete overhaul of our mathematical code.
… to production
To win over customers, we had to extend the scope of the project to include scripting languages, PDEs (partial differential equations), calibration and more, and to implement methodologies previously ruled out for fear of killing performance.
Improved performance proved to be addictive. Once the parallel code was ported to GPUs, we ran an optimization campaign on the residual sequential code in our models. The result was better-than-expected performance and had the added benefit of making the software ready for future generations of GPUs.
Our first customer went live at the beginning of the year, and the results were dramatic. Pricing precision increased, as did the number and frequency of Greeks computations (the derivatives of the portfolio's value with respect to market data), while the time needed for a full evaluation decreased, all with a reduced hardware footprint. Several other customers are already committed to this path, and it is clear that any new Murex installation featuring Monte Carlo simulation will be GPU powered.
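Greeks of this kind are commonly estimated by "bump-and-revalue": rerun the Monte Carlo pricing under slightly shifted market data and take a finite difference. Every additional Greek therefore multiplies the pricing workload, which is exactly the embarrassingly parallel load that suits GPUs. As a rough illustration only (a NumPy sketch of a Black-Scholes European call with invented parameters, not Murex's models or code), here is a delta computed that way:

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, t, z):
    """Monte Carlo price of a European call under geometric Brownian
    motion, reusing a fixed array of standard normal draws z."""
    st = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)
    return float(np.exp(-r * t) * np.maximum(st - k, 0.0).mean())

# Fixed draws: the same simulated paths are reused for every revaluation.
rng = np.random.default_rng(42)
z = rng.standard_normal(1_000_000)

s0, k, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0
price = mc_call_price(s0, k, r, sigma, t, z)

# Delta by bump-and-revalue: reprice with the spot shifted up and down.
# Common random numbers (the shared z) keep the difference estimate stable.
h = 0.01 * s0
delta = (mc_call_price(s0 + h, k, r, sigma, t, z)
         - mc_call_price(s0 - h, k, r, sigma, t, z)) / (2 * h)
```

Each extra Greek or scenario is one more full revaluation over the same million paths, which is why throughput, rather than the latency of a single pricing, dominates the hardware bill.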
A paradigm shift
Programming GPUs also changed our understanding of how a computer works, so much so that it has shaped our vision of how financial software should be built. At the recent GPU Technology Conference (GTC 2012), we presented the first iteration of our GPU cluster solution, foreshadowing the future of trading and risk management.
When evaluating complex Monte Carlo-based products, we assert that "real time without compromise" is achievable, whatever the precision needed and whatever the number of Greeks or VAR, PFE and CVA (value at risk, potential future exposure and credit valuation adjustment) scenarios.
Enabled by GPU technology, this is our answer to the conundrum of keeping traders and regulators happy simultaneously.