How an AI App Can Translate a Photo into a Skin Cancer Diagnosis

by Tony Kontzer

Getting to a doctor is rarely easy. Patients in remote locations may have to travel hundreds of miles, incurring lodging and transit costs along the way.

For patients with skin cancer — or with a suspicious skin lesion — GPU deep learning may be the key to replacing that time, effort and expense with something as simple as a picture taken with a smartphone.

That’s the scenario painted by a Stanford University collaboration between AI researchers and the school’s dermatology department. They created a deep learning algorithm that classifies photos of skin lesions as benign or malignant as accurately as a dermatologist.

“For a long time we’ve been excited by the notion of early-stage diagnostics and disease detection,” said Andre Esteva, an electrical engineering Ph.D. student in the university’s Artificial Intelligence Lab.

More than 5 million new cases of skin cancer are diagnosed every year in the U.S. alone. It’s especially prevalent in sunny rural areas where many light-skinned people tend to live, Esteva said.

Thus far, the technology has been demonstrated in a desktop app, but it could easily be applied to a mobile one. The team’s work was supervised by Sebastian Thrun, an AI pioneer and adjunct professor at the Stanford Artificial Intelligence Laboratory (SAIL), and was the subject of a paper published in a recent issue of Nature.

Using GPUs to Train on 130,000 Images

Esteva said GPU technology was indispensable to achieving these results. The team trained its models on a dozen NVIDIA TITAN X GPUs, with CUDA and cuDNN powering its deep learning algorithms.

“GPUs are orders of magnitude faster and more effective than CPUs for training these networks,” he said.

The team worked from a training dataset of about 130,000 skin disease images, spanning more than 2,000 disease categories. They relied on open-source medical repositories and a local hospital for images.
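The article doesn’t detail the team’s network architecture, but work like this typically fine-tunes a pretrained convolutional network and attaches a classification head that maps image features to a benign/malignant decision. As a minimal, purely illustrative sketch of that final step — with random synthetic vectors standing in for CNN features, since the Stanford dataset isn’t public — a logistic-regression head might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for CNN feature vectors: 200 lesions, 64 features each.
# The malignant cluster is shifted so the toy problem is learnable.
n, d = 200, 64
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.5).astype(float)  # 0 = benign, 1 = malignant
X[y == 1] += 1.5                         # offset malignant features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic-regression head with plain gradient descent.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)           # predicted probability of malignancy
    w -= lr * (X.T @ (p - y) / n)    # gradient of mean cross-entropy loss
    b -= lr * np.mean(p - y)

preds = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = float(np.mean(preds == y))
print(f"training accuracy: {accuracy:.2f}")
```

In practice the feature extractor itself would be a deep CNN trained end to end on the GPUs described above; this sketch only shows how a binary benign/malignant decision falls out of learned features.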

“There’s no huge dataset of skin cancer that we can just train our algorithms on, so we had to make our own,” said Brett Kuprel, co-lead author of the paper and a graduate student under Thrun. “We created a nice taxonomy out of data that was very messy — the labels alone were in several languages, including German, Arabic and Latin.”
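A first step in building such a taxonomy from messy, multilingual labels is normalizing raw strings to canonical disease categories. The mapping below is a hypothetical stand-in — the actual Stanford label lists and taxonomy are not given in this article — but it illustrates the kind of cleanup Kuprel describes:

```python
# Illustrative synonym table: messy source labels (German, Latin, etc.)
# mapped to canonical English category names. These entries are examples,
# not the Stanford team's actual taxonomy.
SYNONYMS = {
    "melanom": "melanoma",               # German
    "naevus": "nevus (benign mole)",     # Latin
    "basaliom": "basal cell carcinoma",  # German clinical shorthand
}

def normalize_label(raw: str) -> str:
    """Map a messy, possibly non-English label to a canonical category."""
    key = raw.strip().lower()
    return SYNONYMS.get(key, key)  # fall back to the cleaned raw label

labels = ["Melanom ", "NAEVUS", "melanoma"]
print([normalize_label(lbl) for lbl in labels])
```

Real cleanup would also need medical review of ambiguous terms and a hierarchy linking fine-grained diagnoses to the top-level benign/malignant split.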

More Mobile Diagnostics Coming

As groundbreaking as the effort has been, it may only be the tip of the iceberg for AI-powered health diagnostics. Esteva said he expects AI to steadily work its way into medical practices, pushed by the consumerization of healthcare, with great potential for mobile health diagnostics.

“We envision a future for healthcare where technology and AI provide universal access, and really extend the reach of providers outside of the clinic,” he said. “Mobile devices have enough compute power to not only run AI algorithms but to connect individuals to automated care and disease screening.”

As smartphones continue their march to ubiquity, he said, research teams will be able to tap that mobile compute power to use AI and image recognition to do everything from diagnosing cardiac and skin conditions to managing blood tests and psychiatric screenings.

In the meantime, the team intends to continue testing its solution in real-world clinical settings before attempting to establish the algorithm as the basis for a mobile application.