How Neural Networks Can Read Thoughts and Restore Movement to Paralyzed Limbs

by Isha Salian

While diving into the Atlantic Ocean off the shores of North Carolina with his friends in 2010, Ian Burkhart, then a college student, sustained a devastating spinal cord injury that left him paralyzed from the chest down.

But with a brain-computer interface powered by neural networks, he can now use his right hand to pick up objects, pour liquids and play Guitar Hero.

Ian Burkhart plays a guitar video game at Ohio State University’s Wexner Medical Center, with researcher Nick Annetta looking on. Photo courtesy of Battelle.

Burkhart is the first participant in a clinical trial led by Ohio State University and Battelle, a nearby independent research and development organization.

A Blackrock Microsystems microchip implanted in Burkhart’s brain connects to a computer running algorithms developed at Battelle. The algorithms interpret his neural activity and send signals to an electrode sleeve on his right hand. The sleeve, also invented at Battelle, stimulates the nerves and muscles in his arm to elicit a specific hand movement.
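In rough pseudocode terms, the system runs a decode-and-stimulate loop: read neural activity from the implant, infer the intended movement, and drive the sleeve with the matching stimulation pattern. The sketch below is only an illustration of that loop; the object and method names (read_neural_features, predict, pattern_for, send_stimulation_pattern) are hypothetical placeholders, not the actual NeuroLife software, which is not public.

```python
import time

def run_control_loop(decoder, sleeve, implant, update_hz=10):
    """Repeatedly decode the intended movement and drive the electrode sleeve."""
    period = 1.0 / update_hz
    while True:
        start = time.monotonic()
        features = implant.read_neural_features()   # e.g., binned neural activity per channel
        movement = decoder.predict(features)        # e.g., "hand_open", "grasp"
        pattern = sleeve.pattern_for(movement)      # stimulation map for that movement
        sleeve.send_stimulation_pattern(pattern)
        # Sleep out the remainder of the cycle to keep a fixed update rate.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```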

For now, Burkhart can use the system, called NeuroLife, only in a laboratory at Ohio State. But the eventual goal is for NeuroLife to become portable enough to mount on the user’s chair for home use.

If people at home could use the NeuroLife system for daily tasks like eating, brushing their teeth and getting dressed, it “would make a big impact on their ability to live independently,” said David Friedenberg, senior research statistician at Battelle and co-author on their latest paper, published in Nature Medicine.

“We want to make it easy enough that the user and their caregiver can set it up,” he said, “where you don’t need a bunch of Ph.D.s and engineers in the room to make it all work.”

Neural Networks Read Neural Signals

AI is being developed for a wide range of assistive technology tools, from prosthetic hands to better hearing aids. Deep learning models can provide a synthesized voice for individuals with impaired speech, help the blind see, and translate sign language into text.

One reason assistive device developers turn to deep learning is because it works well for decoding noisy signals — like electrical activity from the brain.

The researchers trained a deep learning neural decoder (the algorithm that translates neural activity into intended command signals) on an NVIDIA Quadro GPU, using brain signals from scripted sessions in which Burkhart was asked to think about executing specific hand motions. The neural network learned which brain signals corresponded to which desired movements.
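At its core, this is a supervised classification problem: feature vectors extracted from the recorded brain activity are labeled with the movement Burkhart was asked to imagine. The sketch below shows what such training could look like; the channel count, number of movements, architecture and hyperparameters are illustrative assumptions, not those used in the Battelle and Ohio State study.

```python
import torch
import torch.nn as nn

N_CHANNELS = 96      # assumed number of recording channels
N_MOVEMENTS = 6      # assumed number of trained hand movements

# A small classifier mapping one feature vector per time window to a movement label.
decoder = nn.Sequential(
    nn.Linear(N_CHANNELS, 256),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(256, N_MOVEMENTS),
)

def train_decoder(decoder, features, labels, epochs=50, lr=1e-3):
    """features: (n_samples, N_CHANNELS) float tensor; labels: (n_samples,) long tensor."""
    optimizer = torch.optim.Adam(decoder.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        logits = decoder(features)
        loss = loss_fn(logits, labels)
        loss.backward()
        optimizer.step()
    return decoder
```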

However, a key challenge in creating robust neural decoding systems is that brain signals vary from day to day. “If you’re tired on one day, or distracted, that might influence the neural activity patterns that are meant to control the different movements,” said Michael Schwemmer, principal research statistician in Battelle’s advanced analytics group.

To recalibrate the neural network, Burkhart must think about moving his hand in specific ways. In this image from September 2018, he’s at work at Ohio State University’s Wexner Medical Center. Photo courtesy of Battelle.

So when Burkhart came into the lab twice a week, each session started with a 15- to 30-minute recalibration of the neural decoder, during which he worked through a scripted routine, thinking in turn about moving different parts of his hand.

These twice-weekly sessions generated new brain data, which was used to update two neural networks. One leveraged the labeled data for supervised learning, and another used unsupervised learning.

Together, these networks achieved over 90 percent accuracy in decoding Burkhart’s brain signals and predicting the motions he was thinking about. The unsupervised model sustained this accuracy level for more than a year and did not require explicit recalibration.
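The article doesn’t detail how the unsupervised updating works, but one simple way a decoder can track day-to-day signal drift without labels is to refresh its input-normalization statistics from each session’s unlabeled recordings. The sketch below illustrates only that general idea and is not the method Battelle used; the class name and momentum parameter are assumptions.

```python
import numpy as np

class NormalizingDecoder:
    """Wraps a trained classifier and adapts its input normalization to new sessions."""

    def __init__(self, model, momentum=0.9):
        self.model = model            # any trained classifier with a predict() method
        self.mean = None
        self.std = None
        self.momentum = momentum      # how strongly to favor past sessions' statistics

    def adapt(self, unlabeled_features):
        """Blend in per-channel mean/std estimated from new, unlabeled recordings."""
        new_mean = unlabeled_features.mean(axis=0)
        new_std = unlabeled_features.std(axis=0) + 1e-6
        if self.mean is None:
            self.mean, self.std = new_mean, new_std
        else:
            self.mean = self.momentum * self.mean + (1 - self.momentum) * new_mean
            self.std = self.momentum * self.std + (1 - self.momentum) * new_std

    def predict(self, features):
        return self.model.predict((features - self.mean) / self.std)
```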

Using deep learning also shortened the time it takes the NeuroLife system to process a user’s brain signals and send commands to the electrode sleeve. The current reaction time lag is 0.8 seconds, an 11 percent improvement over previous methods.

“If you’re trying to pick up a glass of water, you want to think about it and move. You don’t want a long lag,” said Friedenberg. “That’s something we measure pretty carefully.”
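As a rough illustration of that kind of measurement, the snippet below times a decoder over a batch of recorded feature windows and reports the mean and 95th-percentile latency. It measures software decode time only, using the hypothetical decoder objects from the sketches above; the published 0.8-second figure covers the full thought-to-movement pipeline.

```python
import time
import numpy as np

def measure_decode_latency(decoder, feature_batches):
    """Time the decoder over many recorded feature windows and report statistics."""
    latencies = []
    for features in feature_batches:
        start = time.perf_counter()
        decoder.predict(features)
        latencies.append(time.perf_counter() - start)
    return np.mean(latencies), np.percentile(latencies, 95)
```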

The feature photo above shows Ian Burkhart in conversation with Gaurav Sharma, one of the lead researchers on the project. Photo by Jo McCulty, The Ohio State University, courtesy of Battelle.