Search and rescue ain’t what it used to be.
Gone are the days of rescue teams and their dogs heading into dangerous situations not knowing what they’re going to face. Technology has transformed the art of rescue into a science.
One key advance has been the use of robotic devices, which do everything from evaluating surroundings to assessing the condition of those who need help.
But as Pawel Musialik, a programmer and researcher at Poland’s Institute of Mathematical Machines (IMM), told attendees during a session at the GPU Technology Conference, getting the most out of these robots takes planning.
“We want to provide tools for rescue teams to get the best use of unmanned platforms,” Musialik said. “They’re not experts in software development.”
IMM is one of a handful of entities that comprise the Integrated Components for Assisted Rescue and Unmanned Search operations (ICARUS) project.
Formed after the 2011 earthquake and tsunami in Japan, ICARUS is a joint research effort spearheaded by the European Commission to make the use of robots more practical during search-and-rescue efforts.
Musialik and IMM have been working on developing systems that will help search-and-rescue teams direct ground and aerial robots with less pre-mission preparation.
That means enabling robots to categorize classes of objects (buildings or vegetation, say), understand the relationships between those objects (overlapping or adjacent) and then operate based on rules to make determinations such as whether a situation is unsafe.
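The rule-based reasoning described above can be sketched in simplified form. The class labels, the adjacency test, and the safety rule below are hypothetical illustrations, not IMM's actual software:

```python
# Simplified sketch of rule-based scene reasoning for a rescue robot.
# Object classes, the adjacency test, and the "unsafe" rule are all
# hypothetical stand-ins, not the ICARUS project's actual logic.

def adjacent(a, b, threshold=1.0):
    """Treat two objects as adjacent if their bounding circles
    come within `threshold` meters of each other."""
    (ax, ay, ar), (bx, by, br) = a["bounds"], b["bounds"]
    dist = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    return dist - (ar + br) <= threshold

def is_unsafe(scene):
    """Example rule: a scene is unsafe if a damaged building
    stands adjacent to a planned route."""
    buildings = [o for o in scene if o["cls"] == "building" and o["damaged"]]
    routes = [o for o in scene if o["cls"] == "route"]
    return any(adjacent(b, r) for b in buildings for r in routes)

scene = [
    {"cls": "building", "damaged": True, "bounds": (0.0, 0.0, 5.0)},
    {"cls": "route", "damaged": False, "bounds": (6.0, 0.0, 0.5)},
    {"cls": "vegetation", "damaged": False, "bounds": (20.0, 20.0, 3.0)},
]

print(is_unsafe(scene))  # True: the damaged building sits within 1 m of the route
```

Encoding determinations as explicit rules over classified objects, rather than hard-coding them per mission, is what lets rescue teams use the robots with less pre-mission preparation.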
The hardware IMM is using takes advantage of NVIDIA GPUs and the CUDA parallel computing platform. Rugged computers equipped with two NVIDIA GRID K2 cards are combined with GeForce GTX-powered laptops.
Pulling data from sources such as geographical information systems and ground and aerial point clouds, IMM has established models that help instruct robots in real time. That information, combined with detailed graphical visualizations, is creating more informed rescue robots, Musialik said.
“We couldn’t do point classifications with CPUs,” he said. For instance, Musialik showed an example of a CPU-generated image in which the software couldn’t distinguish between a monument and surrounding vegetation. Once a GPU was added to the equation, the monument was clearly identified.
With GPUs, the team can feed the robots increasingly granular data.
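Point classification of this kind maps well to GPUs because each point can be labeled independently of the others. A toy illustration of the idea follows; the height thresholds and labels are hypothetical stand-ins for IMM's classifier, and a production system would run the per-point work in parallel as a CUDA kernel rather than a Python loop:

```python
# Toy per-point classifier: label each 3D point by its height above ground.
# The thresholds and class names are hypothetical; they are not IMM's
# actual classification scheme.

def classify_point(z, ground_z=0.0):
    """Assign a class label to one point based on height alone.
    Independent per point, so the work parallelizes trivially."""
    height = z - ground_z
    if height < 0.3:
        return "ground"
    if height < 3.0:
        return "vegetation"
    return "structure"

# A few (x, y, z) points standing in for an aerial point cloud.
points = [(1.0, 2.0, 0.1), (4.0, 1.0, 1.8), (7.0, 3.0, 12.0)]
labels = [classify_point(z) for (_, _, z) in points]
print(labels)  # ['ground', 'vegetation', 'structure']
```

Real classifiers use far richer features than height, which is exactly why the workload outgrew CPUs: the same independent computation must run over millions of points per scan.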
The moral: If you ever find yourself trapped in a crumbled building or deep ravine, worry not. GPU-powered robots may be on their way.