Known as NVAIL, our program provides these research partners with access to powerful GPU computing resources.
Researchers from Tsinghua University and Georgia Tech are exploring ways to detect vulnerabilities in neural networks that use graph structured data. An Oxford University team is training multiple AI agents to efficiently operate together in the same environment. And Carnegie Mellon University researchers are determining how a neural network can more quickly learn the optimal path around a space.
Strengthening Neural Networks Against Attack
If shown an image of a watermelon with a few scrambled pixels laid over it, a human would still easily identify it. But that small perturbation can be enough to fool a neural network into misclassifying the pictured object as an elephant instead — a kind of adversarial attack hackers can use to manipulate the algorithm.
While existing research on adversarial attacks has focused on images, a joint paper from Georgia Tech, Ant Financial and Tsinghua University shows for the first time that this vulnerability extends to neural networks for graph data as well.
Graph-structured data consists of nodes, which store data, and edges, which connect nodes to one another. The researchers experimented by adding and deleting edges to pinpoint which modifications cause the neural network's performance to degrade.
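The idea can be sketched with a toy edge-flipping attack. The tiny graph, the budget-limited greedy search and the stand-in "classifier" below are all illustrative assumptions for this post, not the paper's method, which targets real graph neural networks:

```python
# Toy sketch of an edge-perturbation attack on graph-structured data.
# The "classifier" here is a stand-in (label 1 if average degree >= 2);
# a real attack would query a trained graph neural network instead.
from itertools import combinations

def classify(edges, n_nodes):
    # Stand-in classifier: label 1 if the graph's average degree >= 2.
    return 1 if 2 * len(edges) / n_nodes >= 2 else 0

def edge_flip_attack(edges, n_nodes, budget):
    # Greedily flip edges (delete if present, add if missing) until the
    # classifier's label changes or the flip budget runs out.
    edges, flips = set(edges), []
    target = 1 - classify(edges, n_nodes)
    for pair in combinations(range(n_nodes), 2):
        if len(flips) == budget or classify(edges, n_nodes) == target:
            break
        edges ^= {pair}     # symmetric difference flips one edge
        flips.append(pair)
    return edges, flips
```

On a four-node cycle, deleting a single edge is enough to change the stand-in label, mirroring the paper's finding that small edge modifications can flip a graph model's prediction.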
Social network data, like the graph of how a single user is connected to a web of Facebook friends, is one example of graph-structured data. Another is data on money transactions between individuals — such as records of who has sent money to whom.
Fooling a graph neural network that looks at financial data could result in a fraudulent transaction being labeled as legitimate. “If such models are not robust, if that’s easy to be attacked, that raises more concerns about using these models,” said Hanjun Dai, Ph.D. student at Georgia Tech and lead author on the paper.
The team used the cuDNN software library and ran their experiments on Tesla and GeForce GTX 1080 Ti GPUs. While the paper focuses on investigating the problem of adversarial attacks on graph structured data, the goal is for future research to propose solutions to strengthen graph neural networks so they provide reliable results despite attempted attacks.
Teamwork Makes the Neural Net Work
Driving is a multiplayer activity. Though each driver only has control over a single vehicle, the driver’s actions affect everyone else on the road. The person behind the wheel must also consider the actions of fellow motorists when deciding what to do.
Translating this kind of multilayered understanding into AI is a challenge.
An AI agent takes in information and feedback from its environment to learn and make decisions. But when there are multiple agents operating in the same space, researchers are tasked with teaching each AI to understand how the other agents affect the final outcome.
If an agent can’t reason about the behavior of others, it wouldn’t be able to properly reconcile its observations.
“For instance, it could find itself in exactly the same situation as earlier, take the same action and something different could happen,” said Oxford University doctoral student Tabish Rashid, a co-author on a paper that will be presented at ICML. “That causes conflicting learning to happen. It makes it difficult to learn what to do.”
This problem can be avoided during training, where researchers can let multiple agents communicate with one another and observe each other's actions. But in the real world, an AI agent won't always be able to communicate with other agents or know their plans — so it must be able to act independently.
The Oxford researchers proposed a novel method that takes advantage of the training setting. Using the strategy game StarCraft II, they trained several agents together in an environment where agents could share information freely. After this centralized training, the agents were tested on how well they could perform independently.
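A minimal sketch of this centralized-training, decentralized-execution idea follows, using a VDN-style additive decomposition of the team value (a deliberately simplified stand-in, not necessarily the Oxford team's method). The agent class, the toy coordination task and the learning rate are illustrative assumptions:

```python
import random

class Agent:
    """One tabular agent that acts greedily on its own Q-values."""
    def __init__(self, n_actions):
        self.q = [0.0] * n_actions

    def act(self):
        # Decentralized execution: no communication, only this agent's Q.
        return max(range(len(self.q)), key=self.q.__getitem__)

def central_train_step(agents, actions, team_reward, lr=0.1):
    # Centralized training: the joint value is the sum of per-agent
    # Q-values (VDN-style), so one shared team reward updates everyone.
    joint_q = sum(agent.q[a] for agent, a in zip(agents, actions))
    td_error = team_reward - joint_q
    for agent, a in zip(agents, actions):
        agent.q[a] += lr * td_error

# Toy task: the team is rewarded only when both agents pick action 1.
random.seed(0)
agents = [Agent(2), Agent(2)]
for _ in range(500):
    actions = [random.randrange(2) for _ in agents]
    central_train_step(agents, actions, 1.0 if actions == [1, 1] else 0.0)
```

After training with random exploration, each agent's greedy policy typically settles on the coordinated action on its own, with no communication needed at execution time.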
Co-author Mikayel Samvelyan, former master’s student at Oxford, said this approach transfers well beyond the research setting: “You can train agents in a simulator and then use the strategies they learned in the real world.”
The team used an NVIDIA DGX-1 AI supercomputer and several GeForce GTX 1080 Ti GPUs for their work.
Planning the Perfect Path
Watching a cleaning robot wend its way around a swimming pool can be a mildly entertaining pastime on a lazy summer day. But is it taking the most efficient path around the pool to save time and energy?
Neural networks can help robots learn the optimal path around an environment faster and with less input information. A research group at Carnegie Mellon authored a paper outlining a path-finding model that’s simpler to train and more generic than current algorithms.
This makes it easier for developers to take the same base model, quickly apply it to different solutions, and optimize it. Applications for path-finding are diverse, ranging from household robots to factory robots, drones and autonomous vehicles.
Using 2D and 3D mazes, the team trained the neural network on the NVIDIA DGX-1, an essential tool for accelerating deep learning research. Out in the world, an AI may not always have a map or know the structure of an environment beforehand — so the model was developed to learn just from images of the environment.
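For reference, the kind of classical grid search that learned planners are typically compared against can be written in a few lines. The maze encoding and function name here are illustrative, not the Carnegie Mellon model:

```python
# Breadth-first search on a 2D occupancy grid (0 = free, 1 = wall):
# a classical baseline for the shortest-path problem that learned
# path-planning models tackle from raw images instead of a known map.
from collections import deque

def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}          # also serves as the visited set
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []               # walk parents back to the start
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in parent):
                parent[nxt] = cell
                frontier.append(nxt)
    return None                     # goal unreachable
```

Unlike a learned planner, this search needs the full map up front, which is exactly the assumption the researchers' image-based model relaxes.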
“Navigation is one of the core components for pretty much any intelligent system,” said Ruslan Salakhutdinov, computer science professor at Carnegie Mellon. Path planning networks like this one could become a building block that developers plug into larger robotic systems, he said.
Attendees of ICML, which runs July 10-15, can hear about each of these projects at the conference. Come by the NVIDIA booth (B02:12, Hall B) to connect with our AI experts, take a look at the new DGX-2 supercomputer and check out the latest demos.