Hedge Connection will join several dozen other companies sharing their ideas at the Emerging Companies Summit at the fourth annual GPU Technology Conference, to be held in San Jose, Calif., next March.

Video gamers rely on GPUs to deliver dazzling fast-twitch action. Scientists use banks of GPUs to simulate reactor cores and predict changes in global climate. That may sound like an unlikely set of skills to bring to the world of high finance — but it turns out GPUs are a perfect fit.

Bankers, hedge funds, actuaries, and other hyper-connected capitalists are increasingly relying on GPUs to help manage money. Just talk to Rob Arthurs, CEO of Hedge Connection, which helps bring together fund managers and qualified investors. “Why wouldn’t hedge fund managers want the same tools being used to build supercomputers?” Arthurs asks.

From nuclear science to the world of finance

GPUs have long been used to tackle one of the toughest computing chores in modern finance: Monte Carlo simulations. Originally developed by nuclear scientists at the Los Alamos National Laboratory, Monte Carlo simulations can be used to model complex systems. Because GPUs are parallel processors — they rely on large numbers of cores working on different parts of a problem simultaneously — they can run algorithms involved in such simulations more efficiently than machines that rely on CPUs alone.
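
To make the fit concrete, here is a minimal sketch of how a Monte Carlo pricer maps onto a GPU: each thread simulates one independent price path for a European call option and records its discounted payoff. This is our illustration with made-up market parameters, not any firm's production code.

```cuda
#include <cstdio>
#include <cmath>
#include <curand_kernel.h>

// Each thread draws one normal variate, simulates one terminal stock price
// under geometric Brownian motion, and stores the discounted call payoff.
__global__ void monteCarloCall(float *payoffs, int nPaths, float S0, float K,
                               float r, float sigma, float T,
                               unsigned long long seed)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPaths) return;

    curandState state;
    curand_init(seed, i, 0, &state);       // independent stream per thread

    // S_T = S0 * exp((r - sigma^2/2) * T + sigma * sqrt(T) * Z), Z ~ N(0,1)
    float z  = curand_normal(&state);
    float ST = S0 * expf((r - 0.5f * sigma * sigma) * T
                         + sigma * sqrtf(T) * z);
    payoffs[i] = expf(-r * T) * fmaxf(ST - K, 0.0f);
}

int main()
{
    const int nPaths = 1 << 20;            // about a million paths
    float *dPayoffs;
    cudaMalloc(&dPayoffs, nPaths * sizeof(float));

    // Example parameters: spot 100, strike 100, 5% rate, 20% vol, 1 year.
    monteCarloCall<<<(nPaths + 255) / 256, 256>>>(
        dPayoffs, nPaths, 100.0f, 100.0f, 0.05f, 0.2f, 1.0f, 1234ULL);

    float *hPayoffs = new float[nPaths];
    cudaMemcpy(hPayoffs, dPayoffs, nPaths * sizeof(float),
               cudaMemcpyDeviceToHost);

    double sum = 0.0;                      // a GPU reduction would be faster
    for (int i = 0; i < nPaths; ++i) sum += hPayoffs[i];
    printf("Estimated call price: %.4f\n", sum / nPaths);

    cudaFree(dPayoffs);
    delete[] hPayoffs;
    return 0;
}
```

Because every path is independent, the work spreads across however many cores the card has; adding GPUs adds paths per second almost linearly.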

The result: GPUs are now being used by some of the finance industry’s biggest players. In 2008, market news giant Bloomberg began using GPUs to help it grind through the Monte Carlo simulations it uses to calculate prices for hard-to-price securities. Standard Life Canada uses GPUs to help it manage complex financial instruments such as derivatives.

And in 2011, JP Morgan became the first major bank to detail its use of GPUs publicly, claiming that NVIDIA Tesla GPUs helped it tackle calculations involving Monte Carlo and finite difference algorithms as much as 100 times faster while slashing the cost of running these calculations by 80%.
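
Finite difference pricers parallelize just as naturally. As a rough sketch, again our own illustration rather than JP Morgan's code, here is an explicit solver for the 1-D diffusion equation that underlies Black-Scholes-style PDE pricing, with each thread owning a single grid point:

```cuda
#include <cuda_runtime.h>

// One explicit finite-difference time step for u_t = alpha * u_xx.
// Each thread updates exactly one grid point from its two neighbors,
// which is why thousands of GPU cores can advance the grid at once.
__global__ void fdStep(const float *u, float *uNext, int n, float alpha,
                       float dt, float dx)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i <= 0 || i >= n - 1) return;      // boundary values held fixed
    float lap = (u[i - 1] - 2.0f * u[i] + u[i + 1]) / (dx * dx);
    uNext[i] = u[i] + alpha * dt * lap;
}

int main()
{
    const int   n = 1 << 16;
    const float alpha = 0.5f, dx = 0.01f;
    const float dt = 0.4f * dx * dx / alpha;  // respects dt <= dx^2/(2*alpha)

    float *u, *uNext;
    cudaMalloc(&u, n * sizeof(float));
    cudaMalloc(&uNext, n * sizeof(float));
    cudaMemset(u, 0, n * sizeof(float));
    cudaMemset(uNext, 0, n * sizeof(float));

    // Start from a single spike of heat in the middle of the grid.
    float one = 1.0f;
    cudaMemcpy(u + n / 2, &one, sizeof(float), cudaMemcpyHostToDevice);

    for (int step = 0; step < 1000; ++step) {
        fdStep<<<(n + 255) / 256, 256>>>(u, uNext, n, alpha, dt, dx);
        float *tmp = u; u = uNext; uNext = tmp;  // ping-pong the buffers
    }

    cudaFree(u);
    cudaFree(uNext);
    return 0;
}
```

The host simply swaps the two buffers each step; the explicit scheme's stability bound, dt <= dx^2 / (2 * alpha), sets the step size.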

Keeping it low key

While major financial players are making headlines, hedge funds have been quietly scooping up GPUs as well. Few will detail how they use them: the most successful hedge funds don’t want to give competitors an edge, Arthurs explains. “They really don’t like talking about it,” he says.

While the banks and hedge funds using GPUs are staying quiet, the companies supplying them with the software needed to unlock the power of GPUs are proliferating. These tools range from powerful, general-purpose tools widely used by mathematicians and scientists — such as Wolfram Mathematica — to far more specialized products. Synerscope (see “Synerscope: Data Analysis for the Rest of Us”) uses GPUs to help financial institutions — and other customers — visualize complex data sets. Murex — which we’ll profile in an upcoming post — uses GPUs to speed up the sophisticated analytics software used to assess the risk of a portfolio.

Coupled with powerful new tools, even a handful of GPUs can give hedge funds computing power possessed by only the most advanced supercomputers a decade ago. And a few are buying scores — even hundreds — of GPUs. “There’s one that likes to say they can turn electricity into money,” Arthurs says.

  • DeeBG

    This is very interesting, and the title has a lot to do with what I’m working on at the moment. I’m sure some of you are aware of the crypto-currency known as Bitcoin. Currently there is a large movement to switch from GPUs to ASICs; however, it is likely that GPU “mining” will continue for some time. Modern ATI/AMD cards have always been the cards of choice due to the way they process SHA256 hashes (normally using OpenCL technology), and nVidia cards have always been looked down upon due to slower hash processing using the same code.

    However, as far as I can tell, no one has really taken a good look at CUDA functionality, especially as seen in the Fermi and Kepler revisions. More as a proof of concept, I have started coding an open-source CUDA-based Bitcoin miner. Running current unoptimized OpenCL code, my GTX 660 Ti averages 100 MHash/s. Based on my experience with game development, I believe a 3x–5x increase could be seen with some simple optimization. Wish me luck =).
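
    For the curious, the overall kernel structure of such a miner looks roughly like the sketch below. Each thread tests one candidate nonce against the difficulty target; sha256d_stub() is a placeholder, since a real device-side double-SHA256 runs to hundreds of lines.

    ```cuda
    #include <cstdint>

    // Placeholder for a real double-SHA256 of the 80-byte block header.
    // NOT a real hash -- illustration of the search structure only.
    __device__ uint32_t sha256d_stub(const uint8_t *header80, uint32_t nonce)
    {
        return nonce * 2654435761u;
    }

    // Embarrassingly parallel nonce search: every thread hashes the same
    // header with a different nonce and checks it against the target.
    __global__ void searchNonces(const uint8_t *header80, uint32_t nonceBase,
                                 uint32_t target, uint32_t *foundNonce)
    {
        uint32_t nonce = nonceBase + blockIdx.x * blockDim.x + threadIdx.x;
        if (sha256d_stub(header80, nonce) <= target)
            atomicMin(foundNonce, nonce);   // keep the lowest winning nonce
    }

    int main()
    {
        uint8_t  *dHeader;
        uint32_t *dFound;
        cudaMalloc(&dHeader, 80);
        cudaMalloc(&dFound, sizeof(uint32_t));
        cudaMemset(dHeader, 0, 80);
        cudaMemset(dFound, 0xFF, sizeof(uint32_t));   // start at UINT32_MAX
        searchNonces<<<4096, 256>>>(dHeader, 0, 0x0000FFFFu, dFound);
        cudaDeviceSynchronize();
        cudaFree(dHeader);
        cudaFree(dFound);
        return 0;
    }
    ```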

  • Sagar Rawal

    Good luck! The ASICs are expected to be way faster with much less power usage… but as a proof of concept, your work should be quite rewarding!

  • DeeBG

    Well, due to the way SHA256 hashing works, I not only couldn’t match FPGA output, I couldn’t even match what developers were doing with OpenCL on AMD cards. However, I’m happy to announce that I am currently pushing a git commit that makes nVidia cards supporting CUDA 5.x the fastest and most economical way to produce Litecoins on the market today (Litecoin is Bitcoin’s younger brother, using scrypt rather than SHA for hashing). scrypt hashing requires much more memory than SHA hashing, and therefore I was able to offload much of the work to the GPU itself and have it run at 100% load. For those of you who know about Litecoin, one stock 660 Ti averaged 224 kH/s over 24 hours. I already have ideas that might work the card up to the 250 mark (although it would likely top out at 240… I wish I could get my hands on a Titan… break the megahash barrier =p).
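
    To give a feel for why the memory matters: the heart of scrypt is a ROMix loop that fills a large per-thread scratchpad and then reads it back in a data-dependent order, so throughput is bound by the memory system rather than raw ALU speed. The sketch below uses a toy stand-in for scrypt’s real BlockMix function; it shows the access pattern, not the actual cipher.

    ```cuda
    #include <cstdint>

    #define SCRATCH_WORDS 1024   // Litecoin's scrypt uses N = 1024 blocks

    // Toy stand-in for scrypt's BlockMix step -- illustration only.
    __device__ uint32_t mix_stub(uint32_t x) { return x * 2654435761u + 1; }

    // Sketch of scrypt's memory-hard ROMix core: sequential writes to a
    // big scratchpad, then data-dependent reads back out of it. The random
    // reads are what keep the card's memory system at full load.
    __global__ void romixSketch(uint32_t *scratch, const uint32_t *input,
                                uint32_t *output)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        uint32_t *v = scratch + (size_t)tid * SCRATCH_WORDS;
        uint32_t x = input[tid];

        for (int i = 0; i < SCRATCH_WORDS; ++i) {  // phase 1: fill
            v[i] = x;
            x = mix_stub(x);
        }
        for (int i = 0; i < SCRATCH_WORDS; ++i) {  // phase 2: random reads
            uint32_t j = x % SCRATCH_WORDS;
            x = mix_stub(x ^ v[j]);
        }
        output[tid] = x;
    }

    int main()
    {
        const int nThreads = 4096;
        uint32_t *scratch, *input, *output;
        cudaMalloc(&scratch,
                   (size_t)nThreads * SCRATCH_WORDS * sizeof(uint32_t));
        cudaMalloc(&input,  nThreads * sizeof(uint32_t));
        cudaMalloc(&output, nThreads * sizeof(uint32_t));
        cudaMemset(input, 1, nThreads * sizeof(uint32_t));
        romixSketch<<<nThreads / 256, 256>>>(scratch, input, output);
        cudaDeviceSynchronize();
        cudaFree(scratch);
        cudaFree(input);
        cudaFree(output);
        return 0;
    }
    ```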