Deep Learning Opens Door to Intelligent Medical Instruments

November 26, 2017
For centuries, doctors have wanted to see inside their patients, using the best available tools to help them detect, diagnose and treat disease. Innovations in diagnostic imaging technology, such as CT scans, 3D ultrasound and MRI, have helped save millions of lives. At their core, these instruments are computers: they run complex mathematics to convert the signals captured by their sensors into the 2D and 3D images that doctors read.
Healthcare providers want these machines to do more, but there are significant technical challenges. Doctors want them to be fast, safe and precise. And providers need small, portable real-time diagnostics at the point of care. Meanwhile, demand for ever-increasing resolution and image fidelity requires the computational power of a supercomputer to be embedded within them.
Volta GPUs Accelerate Processing of Signals, Algorithms
The development of a new type of computer from NVIDIA is now making this possible. Using massively parallel computing, our latest Volta GPUs can process these signal and imaging algorithms at speeds that previously required many racks of traditional data center CPUs.
The performance of NVIDIA GPUs has enabled computer scientists to apply deep learning to the imaging challenge. Loosely inspired by the workings of the human brain, a deep convolutional neural network learns to recognize important features of an object directly from the data it sees during training, and creates a vision model that can be applied to recognize or segment images with surprising effectiveness.
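The feature detection at the heart of a convolutional network can be sketched in a few lines of NumPy. This is an illustrative toy only: the edge filter below is set by hand, whereas a real CNN learns thousands of such filters directly from training data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image and sum products."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A hand-set vertical-edge filter; during training a CNN learns filters like
# this one automatically, layer by layer.
edge_filter = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# Toy image: dark left half, bright right half.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

response = conv2d(image, edge_filter)
print(response)  # strong responses appear only where the dark-to-bright edge sits
```

Stacking many such convolution layers, each feeding the next, is what lets deep networks recognize progressively higher-level features, from edges to textures to whole anatomical structures.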
A New Age of Intelligent Medical Instruments
The combination of deep learning, NVIDIA GPU computing and medical imaging is spurring a new age of intelligent medical instruments. Pioneers in the diagnostic imaging community have jumped on the NVIDIA GPU platform to achieve amazing results in each of the major stages of the medical imaging pipeline — reconstruction, image processing and visualization.
Reconstruction is the process of converting the signals captured during the data acquisition stage into an image. Over the past decade, advances in signal processing algorithms have made it possible to reconstruct CT images with improved quality while reducing X-ray dosages by nearly 80 percent. GE Healthcare’s new Revolution Frontier CT uses NVIDIA GPUs to perform the supercomputing needed to process complex reconstruction algorithms.
Researchers at Massachusetts General Hospital, A.A. Martinos Center for Biomedical Imaging, and Harvard University developed a new deep learning framework for image reconstruction called AUTOMAP. Conventional image reconstruction is implemented with handcrafted signal processing using discrete transforms and various filtering algorithms. AUTOMAP replaces this approach with a unified image reconstruction framework that learns the reconstruction relationship between sensor and image domain without expert knowledge.
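To see what AUTOMAP replaces, consider the idealized conventional case for MRI: the scanner samples k-space, which is (to first approximation) the 2D Fourier transform of the image, and the handcrafted reconstruction is simply the inverse transform. A minimal NumPy sketch of that round trip, under those idealized assumptions:

```python
import numpy as np

# Toy "anatomy": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Idealized MRI acquisition: the measured k-space data is the 2D Fourier
# transform of the underlying image.
kspace = np.fft.fft2(image)

# Conventional handcrafted reconstruction: apply the inverse transform.
recon = np.fft.ifft2(kspace).real

print(np.allclose(recon, image))  # True: the fixed transform inverts exactly
```

AUTOMAP instead trains a neural network on pairs of raw sensor data and reference images, so the mapping is learned rather than fixed; that lets it absorb effects, such as noise and undersampling, that break the idealized transform above.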
In addition to reconstruction, the image processing stage of medical imaging performs detection, classification and segmentation. It can then automatically apply annotations and measurements to assist radiologists working with today’s complex 3D imaging datasets.
A New Volumetric Convolutional Neural Net
Researchers at the Technical University of Munich, Ludwig Maximilian University of Munich and Johns Hopkins University have developed a volumetric convolutional neural network called V-net that performs native 3D segmentation of prostate MRI data. In 3D segmentation, the organ of interest is delineated and each of the voxels of the organ in the 3D image is grouped and assigned the same label.
V-net was trained end-to-end on MRI volumes depicting the prostate, and learned to predict segmentation for the whole volume at once. Training V-net to segment each of the millions of 3D voxels in an image required substantial computation, which is why the team used NVIDIA GPUs.
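Segmentation quality of this kind is commonly scored with the Dice overlap coefficient between the predicted and ground-truth label volumes (V-net's training objective is built around it). A minimal NumPy version of that metric for binary 3D voxel masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice overlap between two binary 3D voxel masks (1 = organ, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy volumes: the ground-truth organ is an 8x8x8 cube of voxels; the
# prediction is the same cube shifted by two voxels along one axis.
truth = np.zeros((16, 16, 16))
truth[4:12, 4:12, 4:12] = 1
pred = np.zeros((16, 16, 16))
pred[6:14, 4:12, 4:12] = 1

print(round(dice(pred, truth), 3))  # 0.75: six of eight slices overlap
```

A Dice score of 1.0 means the predicted voxels match the ground truth exactly; 0.0 means no overlap at all.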
Today’s visualization is driven by volumetric rendering — 3D post-processing of CT and MRI data is used to visualize complex anatomical information. Instead of trying to analyze multiple 2D images, doctors use volumetric rendering to generate an all-in-one 3D representation. And with powerful NVIDIA GPUs, doctors can manipulate and view images from different perspectives to get a good spatial understanding of the anatomy.
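The core operation in volumetric rendering is compositing: marching along a viewing ray and accumulating color and opacity from each voxel it passes through. A minimal front-to-back compositor for a single ray (a real renderer runs this for millions of rays in parallel on the GPU, with lighting and transfer functions on top):

```python
import numpy as np

def composite_ray(densities, colors):
    """Front-to-back alpha compositing of voxel samples along one viewing ray."""
    color = 0.0          # accumulated color (grayscale for simplicity)
    transmittance = 1.0  # fraction of light still reaching the eye
    for d, c in zip(densities, colors):
        alpha = 1.0 - np.exp(-d)          # voxel opacity from its density
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:          # early ray termination
            break
    return color

# One ray passing through empty space, then a dense bright structure.
densities = np.array([0.0, 0.0, 2.0, 2.0, 0.5])
colors    = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
print(composite_ray(densities, colors))
```

Because each ray is independent, this workload maps naturally onto thousands of GPU threads, which is what makes interactive rotation and re-slicing of large CT and MRI volumes possible.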
Anatomical Visualization with Cinematic Rendering
At Johns Hopkins, Dr. Elliot Fishman and researchers at Siemens developed a new approach, called cinematic rendering, using physically based simulation of light diffusion to produce a photorealistic depiction of the human body. Inspired by computer graphics and GPU technology used in computer-animated films, cinematic rendering uses global illumination, which takes thousands of direct and indirect light rays into account to produce a photorealistic image.
These stunning images could help radiologists recognize subtle texture changes and better perceive depth and spatial relationships to surrounding anatomy. Additionally, Dr. Fishman and the team at Hopkins are investigating how deep learning algorithms will benefit from the leap in image fidelity of cinematic rendering.
Diagnostic imaging is one of our most important life-saving technologies, which is why its use has grown exponentially in the last decade. Each year there are hundreds of millions of imaging exams in the U.S. alone. The surging worldwide demand for medical imaging is estimated to become a $49 billion market by 2020.
Opening the Door to a New Cycle of Breakthroughs
NVIDIA GPU computing and deep learning will enable a new cycle of breakthroughs that improve image fidelity, reduce radiation dose and drive further miniaturization.
With NVIDIA GPUs, future medical imaging systems will be compact AI supercomputers that will give doctors Superman’s see-through abilities, and instantly respond to voice commands to find and highlight anatomical areas of interest.
As early detection is one of the best ways to save lives, these intelligent medical instruments will become an increasingly vital weapon against disease.