THE WORLD ISN’T FLAT, IT’S PARALLEL

by Mark Lange

This post is an entry in The World Isn’t Flat, It’s Parallel series running on nTersect, focused on the GPU’s importance and the future of parallel processing. Today, GPUs can operate faster and more cost-efficiently than CPUs in a range of increasingly important sectors, such as medicine, national security, natural resources and emergency services. For more information on GPUs and their applications, keep your eyes on The World Isn’t Flat, It’s Parallel.

Reading a hot best-seller is a serial process. Start at the beginning and read it to the end. But a task like counting the number of vowels in that same book can best be done as a parallel process. Give each paragraph to a different person, and it gets done far more quickly.
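To make the analogy concrete, here’s a minimal sketch of that vowel count as a data-parallel job in CUDA, the programming model used on NVIDIA GPUs. The kernel name, block size and sample text are illustrative, not drawn from any particular application:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread inspects one character of the text: the GPU
    // equivalent of handing each paragraph to a different reader.
    // Hits are combined with an atomic add.
    __global__ void countVowels(const char *text, int n, int *count)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            char c = text[i];
            if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u' ||
                c == 'A' || c == 'E' || c == 'I' || c == 'O' || c == 'U')
                atomicAdd(count, 1);
        }
    }

    int main()
    {
        const char book[] = "Call me Ishmael. Some years ago, never mind how long precisely...";
        const int n = sizeof(book) - 1;

        char *dText; int *dCount; int count = 0;
        cudaMalloc(&dText, n);
        cudaMalloc(&dCount, sizeof(int));
        cudaMemcpy(dText, book, n, cudaMemcpyHostToDevice);
        cudaMemcpy(dCount, &count, sizeof(int), cudaMemcpyHostToDevice);

        // Launch one thread per character, 256 threads per block.
        countVowels<<<(n + 255) / 256, 256>>>(dText, n, dCount);

        cudaMemcpy(&count, dCount, sizeof(int), cudaMemcpyDeviceToHost);
        printf("%d vowels\n", count);

        cudaFree(dText); cudaFree(dCount);
        return 0;
    }

A serial program would walk the text one character at a time; here, every character is examined at once, limited only by how many threads the hardware can run.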

So it is with computing. Some tasks lend themselves to serial computing. But the complexity and data processing requirements that underlie our most challenging problems are rapidly moving beyond the capacity of serial processing.

We’ve all gotten used to Thomas Friedman’s idea that the world is flat. In solving problems with computers, we’ve similarly accepted the assumption that the world is serial.

In fact, the world is parallel.

Technology reflects the thinking in force at the time of its creation. And over time, it comes to reflect our own self-imposed limitations. At some point, those limitations have to be eclipsed.

Our previous computing approach, conceptually already over 40 years old, was to make single, serial CPU cores faster. Moore’s Law has enabled us to make faster, cheaper transistors, but because of power constraints it can no longer make single cores faster. All we can do to make CPUs faster is to add cores. And that guarantees diminishing returns: each core can run only a small handful of threads, and within each thread, instructions must still be processed sequentially.
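The diminishing returns can be made precise with Amdahl’s Law (a textbook formula, stated here with illustrative numbers): if a fraction p of a program’s work can run in parallel, the best possible speedup on n cores is

    speedup(n) = 1 / ((1 - p) + p / n)

Even if 95 percent of the work parallelizes (p = 0.95), the remaining serial 5 percent caps the speedup at 20x no matter how many cores you add. Adding cores only grows n; making our workloads genuinely parallel attacks p.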

The practical effect – for an individual analyst, an individual piece of software or a multi-billion-dollar research program – is the same: get in line.

Sequential processing is no longer adequate for the work before us now. The real question isn’t whether there’s some natural limit to Moore’s Law. It’s why we’ve allowed our progress to be limited by sequential CPUs when massively parallel GPUs are already proving themselves orders of magnitude faster and cheaper.

Consider a range of distinct and (at first glance) entirely disconnected problems: 911 response time… dangerous weather patterns… breast cancer… national security… cleaner clothes… energy discovery… and financial derivatives valuation.

What these problems have in common is that a lack of computing speed impedes our ability to solve them. All of these issues – as disparate as they seem – have demonstrably hit the limit of what’s possible with traditional, CPU-driven computers.

But all of them can be solved faster and more cost-effectively with GPU machines, which have proven to be hundreds of times faster than CPU clusters at one-tenth the cost.

Consider what is now possible:

  • Reducing 911 emergency response time – City planners and municipal response teams are combining Geographic Information System (GIS) datasets – physical maps, population demographics, local resources, surface layers and vectors – that involve many gigabytes of information. With GPUs, calculations that previously took 20 minutes to complete are now done in 30 seconds. What used to take 30 seconds is now done in real time.
  • Predicting dangerous weather patterns – The most widely used weather forecasting model in the world is “running out of gas for time-critical forecasts on conventional clusters,” according to its lead software developer at the National Center for Atmospheric Research. Adding more CPUs no longer improves speed. But NCAR says the effect of applying GPUs to the problem has been “transformative.” The result? More accurate and faster forecasting, critical for agencies around the world – particularly in regions in need of early warning, and those most likely to be affected by climate change.
  • Fighting breast cancer – After all 16 of its CPU clusters were replaced with two massively parallel (and far less expensive) Tesla GPU systems, an ultrasound process that once took three appointments can now be completed in a single 30-minute appointment. This reduces anxiety, pain and, ultimately, cancer incidence.
  • Maintaining national security – GPUs form the basis for the most advanced tactical and strategic systems in the world. Seven GPU chips support the F-22 Raptor, whose combination of stealth, speed, agility, precision and situational awareness the U.S. Air Force considers unmatched by any known or planned fighter.
  • Keeping clothes cleaner – Researchers at Temple University are developing computer simulations that give companies like Procter & Gamble a fast and cost-effective way to identify more effective and environmentally sound detergents. Different chemicals attach themselves to different kinds of oils and soils with varying effectiveness, and traditionally, developing new detergents required extensive, time- and cost-intensive testing in wet labs. Instead, the massive computational power provided by GPUs enables simulations of vast numbers of combinations, modeling the way different molecules attach themselves to (and banish) dirt.
  • Securing energy supplies – With the search for energy becoming more complex and expensive, energy firms are constantly assessing massive quantities of seismic and geological data to determine the most efficient way to extract oil and gas and to maximize the utility of the reserves. A recent analysis of 740 square kilometers using 24 Tesla GPUs was completed 600 times faster than it would have been on a traditional cluster of 66 CPUs, while using 95 percent less energy to run and cool the systems. Then consider improvements in automotive and transportation aerodynamics and fuel efficiency using GPU-powered design. Going parallel helps us discover energy and conserve energy, and uses less energy to do both.
  • Financial derivatives valuation – Recent market dynamics have brought even more focus to the need for accurate, predictive risk-assessment models. Financial institutions can now use models that enlist the GPU’s massively parallel processing to assess risk for a single trade or an entire portfolio accurately and with more confidence. Speed increases of 30X to over 100X mean that pricing a large portfolio of exotic swaps and derivatives can be handled in minutes instead of hours, supporting better decision-making and institutional stability. (A sketch of the idea appears after this list.)

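To give a flavor of that last item, here is a hedged sketch of GPU-based Monte Carlo valuation in CUDA: pricing a single European call option by simulating one price path per thread. Production risk models are far richer; the parameters, names and single-option scope here are illustrative only, not any institution’s method:

    #include <cstdio>
    #include <cmath>
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    // Each thread simulates one independent price path under
    // geometric Brownian motion and records the discounted payoff.
    // Portfolios scale the same way: one path (or instrument) per thread.
    __global__ void priceCall(float S0, float K, float r, float sigma,
                              float T, int nPaths, float *payoffs)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nPaths) return;

        curandState rng;
        curand_init(1234, i, 0, &rng);          // per-thread random stream

        float z  = curand_normal(&rng);         // standard normal draw
        float ST = S0 * expf((r - 0.5f * sigma * sigma) * T
                             + sigma * sqrtf(T) * z);
        payoffs[i] = expf(-r * T) * fmaxf(ST - K, 0.0f);
    }

    int main()
    {
        const int nPaths = 1 << 20;             // about a million paths
        float *d;
        cudaMalloc(&d, nPaths * sizeof(float));

        // Spot 100, strike 105, 5% rate, 20% volatility, one year.
        priceCall<<<(nPaths + 255) / 256, 256>>>(
            100.0f, 105.0f, 0.05f, 0.2f, 1.0f, nPaths, d);

        // Average the payoffs on the host; a reduction kernel on the
        // GPU would be faster, but this keeps the sketch simple.
        float *h = new float[nPaths];
        cudaMemcpy(h, d, nPaths * sizeof(float), cudaMemcpyDeviceToHost);
        double sum = 0.0;
        for (int i = 0; i < nPaths; ++i) sum += h[i];
        printf("Estimated option price: %.4f\n", sum / nPaths);

        delete[] h;
        cudaFree(d);
        return 0;
    }

Every path is independent, so the million simulations run concurrently rather than one after another, which is exactly the shape of problem behind the 30X-to-100X speedups cited above.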
All of these cases – like the threads running on GPUs themselves – are just a small sample of the array of problems that we can only solve time- and cost-effectively through parallel processing.

We are poised to make enormous strides in these domains, among countless others that demand the ability to process massive amounts of data quickly, inexpensively and accurately.

What GPU technology does – fundamentally – is bring simplicity to complexity. It helps us crack problems that, until recently, we simply couldn’t afford to solve, or couldn’t solve at all – whether it’s the design of a cardiac stent, a car body or a new molecule.

There’s a strong case that the only way civilization moves from one level to the next is by holding its earlier accomplishments with a loose grip.

CPU-based computing has served us well for decades. However, traditional sequential CPUs are not getting any faster, while our computational needs are growing exponentially.

The optimal – in fact, the only effective – way to take on the massive data problems we face is massively parallel processing. And only GPUs can provide it.