Just over the horizon, exascale computing promises 1,000 times more processing power than today's petascale systems. But many questions remain about the challenges and opportunities on the path to exascale.
A panel of experts told GTC attendees Wednesday that developing applications capable of leveraging exascale systems will be key to realizing the benefits of next-generation supercomputers.
“It’s time to get serious about what we’re going to do to make sure we have applications ready for exascale systems,” said panel moderator Mike Bernhardt, publisher of The Exascale Report. He suggested that the race to exascale is likely to be won or lost based on how well the software industry optimizes its applications for massive parallelism.
Panelists wholeheartedly agreed with that premise.
“I’m not worried that we won’t have applications that can run on these platforms,” said Olav Lindtjorn, HPC advisor for oil-services giant Schlumberger. “I’m more concerned about being able to run them in parallel.”
Steve Scott, CTO of NVIDIA’s Tesla business, said he’s skeptical of vendor predictions that apps optimized to run on exascale systems will be available by the end of this decade. “Will apps run on them? Yes. Will they run well? Absolutely not,” Scott said.
Panelists were divided in their opinion about whether new programming models were needed to drive the “exascaling” of applications. Scott said that regardless of which coding tools developers use, the software industry has to find a way to express locality and expose parallelism to take full advantage of exascale systems.
Jeffrey Vetter, distinguished R&D staff member and leader of the future technologies group at Oak Ridge National Laboratory, opined that new programming models will be most important in building robust exascale apps that can contend with system failures, load balancing requirements and the like.
Schlumberger’s Lindtjorn, meanwhile, said he’s not convinced that vendors will have the necessary programming tools ready in time. He believes, however, that existing tools can be used to achieve the kind of performance levels expected of exascale systems.
The panelists wrapped up the session on an encouraging note. They all agreed that, despite the remaining obstacles on the road to true exascale applications, the HPC community shouldn’t let its enthusiasm for exascale wane.
“It’s a great time to be a computer scientist,” said Vetter. “There’s a lot of exploration going on. The key is to remain optimistic that we’re going to get there.”
Satoshi Matsuoka, a computer scientist from Tokyo Institute of Technology, encouraged application developers to seek out conversations with computer scientists for answers. “It’s really enjoyable,” Matsuoka said of getting such inquiries. “It gives me interesting problems to solve.”
Scott left attendees with a word of caution: Think big if you have code that you’d like to see running on exascale systems several years from now. “Don’t think about incrementally increasing your parallelism,” he said. “You need to be thinking, ‘Wow, how can I give myself 1,000 times as much parallelism as I have now?’”