How Deep Learning Promotes Early Detection of Cancer
Seeing may be believing. But when treating cancer, our eyes sometimes misinform us.
When interpreting CT and MRI scans, it can be difficult to distinguish the borders between organs and tumors, to determine how tumors have changed since the previous exam, and to spot new tumors when the focus is on another area.
Researchers at Germany’s Fraunhofer Institute for Medical Image Computing are applying GPUs and deep learning to ramp up the accuracy of cancer diagnoses. With AI-powered image analysis, doctors can better avoid false positives that lead to unnecessary treatment, and increase the likelihood of spotting new tumors that may appear.
“We believe that early detection is key,” said Markus Harz, a research scientist at Fraunhofer MEVIS. “When an abnormality is detected in images, proper diagnostic workup is the next challenge.”
Until a few years ago, Harz and his research colleagues relied on the classical “feature engineering” approach still prevalent today. Researchers would hand-craft the image features themselves, then classify the image data with algorithms such as linear regression or random forests.
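The feature-engineering approach can be sketched in miniature. The features below (mean intensity, contrast, edge strength) are generic illustrative choices, not the features the Fraunhofer team actually used; the point is that a researcher picks the traits by hand and a classifier such as a random forest learns from the resulting fixed-length vectors.

```python
import numpy as np

def handcrafted_features(image):
    """Extract a few classic hand-engineered features from a 2D image patch.

    Illustrative sketch of "feature engineering": the researcher chooses
    the traits, and a classifier (e.g. a random forest or linear model)
    is trained on the resulting feature vectors.
    """
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    return np.array([
        image.mean(),     # average intensity
        image.std(),      # contrast
        grad_mag.mean(),  # average edge strength
        grad_mag.max(),   # strongest edge
    ])

# A bright blob on a dark background stands in for a suspicious patch.
patch = np.zeros((32, 32))
patch[12:20, 12:20] = 1.0
features = handcrafted_features(patch)
print(features)  # one fixed-length feature vector per patch
```

Deep learning replaces exactly this step: instead of a human choosing the four numbers above, the network learns which image traits matter.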
However, the team’s first experiments with deep learning revealed that it could solve very challenging problems, including detecting the location and identifying the contours of organs and abnormalities.
Harz remembers the moment the effort took shape: He was sitting with a radiologist who’d been reviewing the case of a patient at high risk for breast cancer. She’d been comparing two MRIs, side by side, trying to spot any changes between dozens of lumps depicted in each. She finally spotted one lump that had grown in size and turned out to be malignant.
The radiologist admitted this was a lucky hit, and that most cases don’t benefit from such good fortune. She teamed with Harz and his colleagues, and they started working on an algorithm that would spatially align the MRIs.
“With this, you can subtract them and directly see the difference,” he said.
As the team has refined its models for image analysis, it’s seen how deep learning improves diagnostic results. But there’s another challenge to providing solutions in clinical settings: regulatory clearance. To get it, Harz said he and his colleagues have been building an infrastructure to validate their deep learning algorithms.
“Both clinicians and medical device manufacturers want to see proof of the superiority of cognitive medical computing,” said Harz.
GPUs Deliver the Goods
GPUs figure prominently in the researchers’ work. They train their deep learning models on NVIDIA GPUs running on multiple machines, supported by CUDA and cuDNN, as well as the Theano and TensorFlow libraries. Harz estimates that the local GPUs are delivering at least a 20x improvement in performance over CPUs.
Visualization experts at Fraunhofer MEVIS also are using GPUs to power their work, running shader programs to achieve photorealistic renderings of medical images in real time. And Fraunhofer MEVIS’ medical image registration group has been able to speed up its algorithms using general-purpose GPU computing in conjunction with the OpenCL parallel programming standard.
The technology promises to provide a more accurate characterization of how a tumor changes over time than is possible with the human eye, thus enabling medical staff to diagnose more confidently.
“(Diagnosticians) would manually measure a two-dimensional diameter, whereas the computer can generate a volumetric representation that much better represents the growth or shrinkage of a tumor,” said Harz. This not only helps automate the most tedious clinical work, but also lets diagnosticians characterize abnormalities far beyond just their size.
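The gain from volumetric measurement is easy to see with numbers: a sphere that grows 26 percent in diameter has doubled in volume. A minimal sketch of the volumetric idea, assuming a binary segmentation mask and known voxel spacing (the function and values here are illustrative, not the researchers' code):

```python
import numpy as np

def tumor_volume_ml(mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary segmentation mask in millilitres.

    Counts segmented voxels and multiplies by the physical volume of
    one voxel. Unlike a single hand-measured 2D diameter, this captures
    growth or shrinkage of the whole irregular shape.
    """
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # mm^3 -> ml

# Toy example: a 10x10x10-voxel "tumor" at 1 mm isotropic spacing.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 20:30, 20:30] = True
volume = tumor_volume_ml(mask)
print(volume)  # 1.0 ml
```

Comparing such volumes across two registered scans gives a single growth number, where a 2D caliper measurement depends on which slice and which axis the reader happened to pick.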
Harz and his research colleagues are focused on refining their validation framework, connecting their deep learning network to hospital infrastructures, and building a seamless integration with typical clinical storage systems. And they’re trying to accomplish all of this without interfering with the systems’ current ability to serve images to clinicians.
Down the line, the team plans to curate unstructured data, establish large-scale data crawling and work on commercializing the technology. Harz hopes the technology can become a critical part of the cancer diagnosis chain whenever data is too complex to be analyzed by humans.