Hello Kitty: How Pictures of Cats Help Computers Read Chest X-Rays

Dr. Alvin Rajkomar remembers the exact moment he realized how urgently he needed a faster way to read patients’ medical images.

His patient in the intensive care unit of University of California at San Francisco Medical Center was showing signs of a life-threatening lung condition that requires immediate treatment. Rajkomar, an assistant professor at UCSF, knew just what to do. But his hands were tied until he could confirm his diagnosis with a chest X-ray.

Normally, he’d have to wait for test results while a radiologist collected, uploaded and reviewed the images. In this case Rajkomar happened to be next to the machine when the technician took the X-ray. He saw the abnormality, took action — and saved the patient’s life.

But he knew that doctors in the ICU are seldom on the spot when the X-ray is taken. Deciding that he and his patients could no longer afford the usual wait, Rajkomar set out to speed things up by automating analysis with GPU-accelerated deep learning.

Dr. Alvin Rajkomar trained his deep learning algorithm to distinguish chest X-rays of the front of the chest from those picturing its side.

In the process, he showed how researchers could use ordinary images — everything from cats to coats to cauliflower — to train deep learning algorithms for medical images.

Why It’s Hard to Teach Computers to Read X-rays

Rajkomar’s goal is to automatically detect life-threatening abnormalities in chest X-rays — not as a replacement for radiologists, but as a swift alert system for doctors.

“When I’m caring for patients, I can’t wait for critical pieces of data. I need it as soon as possible,” Rajkomar said.

Before he could teach computers to read chest X-rays, Rajkomar had to overcome the obstacle that’s stymied other efforts to automate medical image analysis: There’s a dearth of medical images that have already been labeled so they can be used to train a neural network. Because of patient-privacy laws and a reluctance to share data, it’s often difficult for researchers to obtain medical images outside their own institutions.

Even the images medical institutions do have often lack adequate information to be immediately useful, with inconsistent metadata or none at all.

What Fungi Have to Do with X-rays

Although Rajkomar had access to about 1,000 X-rays at UCSF, he needed hundreds of thousands to train a deep learning algorithm. That’s when he devised the plan to use ordinary images to train his network.

“When I thought about it, I realized it’s not that hard to get other datasets (for training). I just had to be inventive about how to make other datasets work for us,” he said.

He used a process called transfer learning, in which a neural network takes knowledge gained in one domain and applies it to another.

Using four TITAN X GPUs and the CUDA parallel computing platform, he trained his neural network on more than a million color images from the ImageNet public database.

He then retrained the network on a subset of those images that he’d converted to grayscale — pictures of fungi, geological formations, plants and other images that more closely resembled X-rays. Finally, he refined the neural network on actual chest X-rays.

Researchers used pictures of fungi and other natural elements that more closely resembled X-rays to refine their algorithm. Image credit: H. Krisp, Wikimedia Commons.
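
This three-stage recipe is straightforward to reproduce with standard tools. Below is a minimal sketch of such staged transfer learning in PyTorch; the network choice, dataset paths and hyperparameters are illustrative assumptions, not details from Rajkomar’s paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def fine_tune(model, data_dir, epochs=3, lr=1e-4):
    """Swap in a fresh classifier head and fine-tune on an image-folder dataset."""
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        # Grayscale with 3 channels keeps the ImageNet-trained filters usable
        # while making natural photos look more like X-ray intensity maps.
        transforms.Grayscale(num_output_channels=3),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    dataset = datasets.ImageFolder(data_dir, transform=preprocess)
    loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

    model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Stage 1: start from a network already trained on ImageNet's million-plus color
# photos (here, torchvision's pretrained weights stand in for that training run).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Stage 2: retrain on grayscale natural images (fungi, rocks, plants, ...)
# so the learned filters adapt to X-ray-like textures.
model = fine_tune(model, "data/grayscale_natural_images")

# Stage 3: refine on labeled chest X-rays, e.g. frontal vs. lateral views
# stored in data/chest_xrays/frontal and data/chest_xrays/lateral.
model = fine_tune(model, "data/chest_xrays")
```

The key design point is that each stage reuses the weights from the previous one, so the scarce chest X-rays only have to nudge an already well-initialized network rather than teach it from scratch.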

Easy for People, Hard for Machines

Rajkomar tested his model on a simple but important task needed to automate chest X-ray analysis. It had to distinguish X-rays of the front of the chest from those picturing its side, something that’s obvious to a radiologist, but hard for a machine to determine.

It was able to do this successfully in every case. He also showed that his algorithm could automate metadata annotation, which is essential for computers to automatically analyze chest X-rays. (To learn more about Rajkomar’s research, see his recent paper in the Journal of Digital Imaging, co-authored with four other UCSF researchers.)
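
As a rough illustration of how such a model could slot into an alerting pipeline, the hypothetical snippet below (continuing the sketch above, and reusing its `model` and `device`) classifies a new image as a frontal or lateral view and emits that label as metadata; the file path and class names are assumptions.

```python
from PIL import Image
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model.eval()
with torch.no_grad():
    image = Image.open("incoming/xray_001.png").convert("RGB")
    batch = preprocess(image).unsqueeze(0).to(device)   # shape: (1, 3, 224, 224)
    label = ["frontal", "lateral"][model(batch).argmax(1).item()]

print(f"view_position: {label}")  # metadata a downstream analysis pipeline can rely on
```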

Rajkomar is also working on clinically significant radiology algorithms, as well as deep learning-based natural language processing to understand clinical notes.

“We must find a way to use technology to make doctors’ work more efficient,” he said. “The stakes can literally be life and death.”

To find out more about deep learning, listen to our AI Podcast on iTunes or Google Play Music or read our blog explaining this fast-growing branch of machine learning.
