Tumor Tracking: How a Neural Network Compares Brain MRIs in a Flash

by Isha Salian

Binge-watching three seasons of “The Office” can make you feel as if your brain has turned into mush. But in actuality, the brain is always pretty mushy and malleable — making neurosurgery even more difficult than it sounds.

To gauge their success, brain surgeons compare MRI scans taken before and after the procedure to determine whether a tumor has been fully removed.

Aligning the scans computationally takes time, so when an MRI is taken mid-operation, the doctor must compare the scans by eye. But the brain shifts during surgery, making that task difficult to accomplish, though no less critical.

Finding a faster way to compare MRI scans could help doctors better treat brain tumors. To that end, a group of MIT researchers has come up with a deep learning solution to compare brain MRIs in under a second.

This could help surgeons check the success of an operation in near real time during the procedure with intraoperative MRI. It could also help oncologists rapidly analyze how a tumor is responding to treatment by comparing a patient’s MRIs taken over several months or years.

When the Pixels Align

Aligning two MRI scans requires an algorithm to match each pixel in one 3D scan to its corresponding location in the other. Doing this well isn’t easy: current state-of-the-art algorithms can take up to two hours to align a pair of brain scans.
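To make the task concrete, here’s a minimal sketch of the core operation in PyTorch: registration produces a dense displacement field, one 3D offset per voxel, and “aligning” means resampling the moving scan along those offsets. The shapes and the `warp` helper below are illustrative assumptions, not code from the study.

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp a (N, 1, D, H, W) volume by a (N, 3, D, H, W) voxel-offset field."""
    n, _, d, h, w = moving.shape
    # Identity sampling grid in the normalized [-1, 1] coordinates grid_sample uses.
    base = F.affine_grid(torch.eye(3, 4).expand(n, 3, 4), moving.shape,
                         align_corners=True)
    # Convert per-voxel offsets (x, y, z channels) into that normalized range.
    scale = torch.tensor([2.0 / (w - 1), 2.0 / (h - 1), 2.0 / (d - 1)])
    grid = base + flow.permute(0, 2, 3, 4, 1) * scale
    return F.grid_sample(moving, grid, align_corners=True)  # trilinear resampling

# Toy check: a zero displacement field leaves the volume unchanged.
vol = torch.rand(1, 1, 32, 32, 32)
assert torch.allclose(warp(vol, torch.zeros(1, 3, 32, 32, 32)), vol, atol=1e-5)
```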

That’s too long for use during surgery. And it’s impractical when hospitals or researchers want to study disease patterns across thousands or hundreds of thousands of scans.

“For each pixel in one image, the traditional algorithms need to find the approximate location in the other image where the anatomical structures are the same,” said Guha Balakrishnan, an MIT postdoctoral researcher and lead author on the study. “It takes a lot of iterations for these algorithms.”
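Here’s what “a lot of iterations” looks like in practice: classical methods optimize a fresh displacement field for every pair of scans, nudging it step by step to shrink an image-difference cost. The toy loop below, which reuses the `warp` helper from the sketch above and uses an arbitrary step count and learning rate, shows the per-pair work a trained network can replace with a single forward pass.

```python
import torch
import torch.nn.functional as F

fixed = torch.rand(1, 1, 32, 32, 32)    # stand-ins for two real scans
moving = torch.rand(1, 1, 32, 32, 32)

# A fresh displacement field is optimized from scratch for every pair of scans.
flow = torch.zeros(1, 3, 32, 32, 32, requires_grad=True)
opt = torch.optim.SGD([flow], lr=0.1)

for step in range(200):                 # many iterations for *each* pair
    loss = F.mse_loss(warp(moving, flow), fixed)
    opt.zero_grad()
    loss.backward()
    opt.step()
```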

Using a neural network instead speeds up the process by adding learning. The researchers’ unsupervised algorithm, called VoxelMorph, trains on unlabeled pairs of MRI scans, learning what brain structures and features look like so it can match the images quickly. On an NVIDIA TITAN X GPU, this inference work takes about a second to align a pair of scans, compared with about a minute on a CPU.
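A simplified sketch of that idea: a small convolutional network takes the fixed and moving scans together and predicts the displacement field in one forward pass, so no per-pair optimization loop is needed. VoxelMorph itself uses a U-Net architecture with an added smoothness penalty; the tiny stand-in network here, and the reuse of the earlier `warp` helper, are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRegNet(nn.Module):
    """Toy stand-in for VoxelMorph's U-Net: two scans in, displacement field out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1),   # 3 output channels = x, y, z offsets
        )

    def forward(self, fixed, moving):
        # Concatenate the two scans channel-wise and regress the field.
        return self.net(torch.cat([fixed, moving], dim=1))

model = TinyRegNet()
fixed = torch.rand(1, 1, 32, 32, 32)
moving = torch.rand(1, 1, 32, 32, 32)
flow = model(fixed, moving)            # one forward pass, no per-pair loop
aligned = warp(moving, flow)           # warp helper from the first sketch
loss = F.mse_loss(aligned, fixed)      # unsupervised: image similarity, no labels
```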

The researchers trained the neural network on a diverse dataset of around 7,000 MRI scans from public sources, using a method called atlas-based registration. This process aligns each training image with a single reference MRI scan, an ideal or average image known as the atlas.
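In code, atlas-based registration just means the fixed image never changes: every training pair is (atlas, scan). A hypothetical training loop, reusing `TinyRegNet` and `warp` from the sketches above, with random stand-in data and an illustrative smoothness weight:

```python
import torch
import torch.nn.functional as F

atlas = torch.rand(1, 1, 32, 32, 32)                       # stand-in reference scan
scans = [torch.rand(1, 1, 32, 32, 32) for _ in range(8)]   # stand-in training data

model = TinyRegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):
    for scan in scans:
        flow = model(atlas, scan)          # the atlas is always the fixed image
        sim = F.mse_loss(warp(scan, flow), atlas)
        # Penalize rough deformations so warps stay anatomically plausible.
        smooth = sum(g.abs().mean() for g in torch.gradient(flow, dim=(2, 3, 4)))
        loss = sim + 0.01 * smooth
        opt.zero_grad()
        loss.backward()
        opt.step()
```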

The team is working with Massachusetts General Hospital to run retrospective studies on the millions of scans in the hospital’s database.

“An experiment that would take two days is now done in a few seconds,” said co-author Adrian Dalca, an MIT postdoctoral fellow. “This enables a new world of research where alignment is just a small step.”

The researchers are working to improve their deep learning model’s performance on lower-quality scans that include noise. This is key for scan alignment to work in a clinical setting.

Research datasets consist of clean, high-quality scans from patients who spend a long time in the MRI machine. But “if someone’s having a stroke, you want the quickest image possible,” Dalca said. “That’s a different quality scan.”

The team will present a new paper this fall at MICCAI, the medical imaging conference. Balakrishnan is also developing a variation of the algorithm that uses semi-supervised learning, combining a small amount of labeled data with an otherwise unlabeled training set. He found that this model can improve the neural network’s accuracy by 8 percent, pushing its performance above that of the traditional, slower algorithms.
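The semi-supervised idea can be sketched as one extra loss term: when a training scan happens to come with anatomical labels, the warped labels should overlap the fixed scan’s labels. The soft Dice function and the 0.5 weight below are illustrative assumptions, not the paper’s exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_dice(a, b, eps=1e-6):
    """Soft Dice overlap between two one-hot label volumes of shape (N, C, D, H, W)."""
    inter = (a * b).sum(dim=(2, 3, 4))
    denom = a.sum(dim=(2, 3, 4)) + b.sum(dim=(2, 3, 4))
    return (2 * inter / (denom + eps)).mean()

def registration_loss(warped, fixed, warped_seg=None, fixed_seg=None):
    loss = F.mse_loss(warped, fixed)            # unsupervised term, always available
    if warped_seg is not None:                  # supervised term, labeled pairs only
        loss = loss + 0.5 * (1.0 - soft_dice(warped_seg, fixed_seg))
    return loss
```

Here `warped_seg` would be the moving scan’s label map warped by the same predicted field, so better label overlap directly rewards better alignment.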

Besides brain scans, this alignment solution has potential applications for other medical images like heart and lung CT scans or even ultrasounds, which are particularly noisy, Balakrishnan said. “I think to some degree, it’s unbounded.”