Healthcare is a multitrillion-dollar global industry that grows each year as average life expectancy rises, and it spans nearly unlimited facets and sub-specialties.
For medical professionals, new technologies can change the way they work, enable more accurate diagnoses and improve care. For patients, healthcare innovations lessen suffering and save lives.
Deep learning can be implemented at every stage of healthcare, creating tools that doctors and patients can take advantage of to raise the standard of care and quality of life.
How AI Is Changing Patient Care
Providing patient care is a series of critical choices, from decisions made on a 911 call to the recommendations a primary care physician makes at an annual physical. The challenge is getting the right treatments to patients as fast and efficiently as possible.
Nearly half the world's countries and territories have fewer than one physician per 1,000 people, a third of the threshold needed to deliver quality healthcare, according to a 2018 study in The Lancet. Meanwhile, as healthcare data goes digital, the amount of information medical providers collect and refer to keeps growing.
In intensive care units, these factors come together in a perfect storm — patients who need round-the-clock attention; large, continuous data feeds to interpret; and a crucial need for fast, accurate decisions.
Researchers at MIT’s Computer Science and Artificial Intelligence Lab developed a deep learning tool called ICU Intervene, which uses hourly vital sign measurements to predict eight hours in advance whether patients will need treatments to help them breathe, require blood transfusions or need interventions to improve heart function.
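To make the approach concrete, here is a minimal sketch of this kind of model: a recurrent network that reads a sequence of hourly vital-sign vectors and outputs per-intervention probabilities. The layer sizes, intervention count and names are illustrative assumptions, not ICU Intervene's published architecture.

```python
import torch
import torch.nn as nn

class VitalsInterventionModel(nn.Module):
    """Toy recurrent model in the spirit of ICU Intervene (not the
    authors' actual architecture): reads hourly vital-sign vectors
    and predicts, hours ahead, the probability that each of several
    interventions will be needed."""

    def __init__(self, n_vitals=12, hidden=64, n_interventions=3):
        super().__init__()
        self.lstm = nn.LSTM(n_vitals, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_interventions)

    def forward(self, vitals):                   # vitals: (batch, hours, n_vitals)
        _, (h, _) = self.lstm(vitals)            # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))   # per-intervention probabilities

# Hypothetical usage: 24 hours of 12 vital signs for a batch of 8 patients.
model = VitalsInterventionModel()
probs = model(torch.randn(8, 24, 12))            # (8, 3) probabilities
```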
Corti, a Denmark-based startup, is stepping in at another time-sensitive interaction: phone calls with emergency services. The company is using an NVIDIA Jetson TX2 module to analyze emergency call audio and help dispatchers identify cardiac arrest cases in under a minute.
LexiconAI, a member of the NVIDIA Inception program, is helping doctors spend more time with their patients every day. The startup built a mobile app that uses speech recognition to capture medical information from doctor-patient conversations — making it possible to automatically fill in electronic health records.
How AI Is Changing Pathology
Just as millions of medical scans are taken each year, so too are hundreds of millions of tissue biopsies. While pathologists have long used physical slides to analyze specimens and make diagnoses, these slides are increasingly being scanned to create digital pathology datasets.
Inception startup Proscia uses deep learning to analyze these digital slides, scoring over 99 percent accuracy for classifying three common skin pathologies. AI can also help standardize diagnoses: depending on the type and stage of disease, two pathologists looking at the same tissue may disagree on a diagnosis more than half the time.
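As a rough illustration of how such a slide classifier can be built, the sketch below fine-tunes a pretrained convolutional network on three placeholder pathology classes. It shows the general transfer-learning technique only; it is not Proscia's model, and the class names and sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Placeholder labels; the three pathologies in the study are not named here.
CLASSES = ["pathology_a", "pathology_b", "pathology_c"]

# Start from an ImageNet-pretrained backbone and swap in a 3-class head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, len(CLASSES))

# Fine-tune only the new classification head at first.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("fc.")

patches = torch.randn(4, 3, 224, 224)   # a batch of digitized slide patches
logits = backbone(patches)              # (4, 3) class scores
```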
SigTuple, another Inception startup, developed an AI microscope to analyze blood and bodily fluids. The microscope scans physical slides under a lens and uses GPU-accelerated deep learning to analyze the digital images either on SigTuple’s AI platform in the cloud or on the microscope itself.
SigTuple’s microscope performs the same job as dedicated scanners that convert glass slides to digital images and interpret the results, but at a fraction of the cost. The company hopes its tool will help address the global pathologist shortage, a crucial problem in many countries.
How AI Is Changing Predictive Health
A host of AI tools are being developed to detect risk factors for diseases months before symptoms appear. These will help doctors make earlier diagnoses, conduct longevity studies or take preventative action. Taking advantage of the ability of deep learning models to spot patterns in large datasets, these tools may extract insights from electronic health records, physical features or genetic information.
One mobile app, Face2Gene, uses facial recognition and AI to identify about 50 known genetic syndromes from photos of patients’ faces. It’s used by around 70 percent of geneticists worldwide and could help cut down the time it takes to get an accurate diagnosis.
Another deep learning tool, developed by researchers at NYU, analyzes lab tests, X-rays and doctors’ notes to predict ailments like heart failure, severe kidney disease and liver problems three months faster than traditional methods.
Using AI and a wide range of electronic health records helped the researchers draw new connections among hundreds of health measurements that could predict diseases like diabetes.
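One way to picture a model that draws on such varied records is as separate encoders for each data type feeding a shared risk head, as in the sketch below. The encoder shapes, vocabulary size and three risk outputs are illustrative assumptions, not the NYU architecture.

```python
import torch
import torch.nn as nn

# Hypothetical multimodal fusion: labs (tabular), X-rays (image) and
# clinical notes (token bags) are each encoded, then concatenated
# into one head that scores several disease risks.
labs_enc  = nn.Sequential(nn.Linear(40, 32), nn.ReLU())           # 40 lab values
xray_enc  = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())  # grayscale X-ray
notes_enc = nn.Sequential(nn.EmbeddingBag(10_000, 32), nn.ReLU()) # bag of note tokens

risk_head = nn.Linear(32 + 8 + 32, 3)   # heart failure, kidney, liver risks

labs  = torch.randn(2, 40)
xray  = torch.randn(2, 1, 128, 128)
notes = torch.randint(0, 10_000, (2, 50))
features = torch.cat([labs_enc(labs), xray_enc(xray), notes_enc(notes)], dim=1)
risk = torch.sigmoid(risk_head(features))   # (2, 3) risk scores
```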
How AI Is Enabling Healthcare Apps
Healthcare doesn’t start and end at the doctor’s office. And with wearables, smartphones and IoT devices, there’s no shortage of ways to monitor health from anywhere.
A service called SpiroCall, for example, makes it possible for patients to check lung function by breathing into a smartphone, either by dialing a toll-free number or recording a sound file on an app. The data is sent to a central server, which uses a deep learning model to assess lung health.
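A server-side pipeline of the kind SpiroCall describes might look roughly like the sketch below: convert the recorded exhalation into a spectrogram and regress a lung function measure from it. The architecture and the target value (FEV1, a standard spirometry measure) are illustrative assumptions, not the published system.

```python
import torch
import torch.nn as nn
import torchaudio

# Turn a phone-quality recording into a mel spectrogram (8 kHz is
# typical telephone audio; an assumption, like everything below).
to_spec = torchaudio.transforms.MelSpectrogram(sample_rate=8000, n_mels=64)

regressor = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),                     # predicted FEV1 in liters
)

waveform = torch.randn(1, 8000 * 3)       # 3 seconds of exhalation audio
spec = to_spec(waveform).unsqueeze(0)     # (1, 1, 64, time)
fev1 = regressor(spec)                    # (1, 1) lung function estimate
```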
For athletes at risk of suffering concussions on the playing field, an AI-powered app is using a smartphone camera to analyze how an athlete’s pupils respond to light, a metric medical professionals use to diagnose brain injury.
And in the realm of mental health, Canadian startup Aifred Health is using GPU-accelerated deep learning to better tailor depression treatments to individual patients. Using data on a patient’s symptoms, demographics and medical test results, the neural network helps doctors as they prescribe treatments.
How AI Is Enabling Devices for People with Disabilities
A billion people around the world experience some form of disability. AI-powered technology can provide some of them with a greater level of independence, making it easier to perform daily tasks or get around.
Aira, a member of the Inception program, has created an AI platform that connects to smart glasses, helping people with impaired vision with tasks like reading labels on medication bottles. And a professor at Ohio State University is using GPUs and deep learning to create a hearing aid that can bump the volume of speech while filtering out background noise.
Researchers at OSU and Battelle, a nonprofit research organization, are developing a brain-computer interface powered by neural networks that can read thoughts and restore movement to paralyzed limbs.
And a team at Georgia Tech developed an AI prosthetic hand that helped jazz musician Jason Barnes play piano for the first time in five years. The prosthesis uses electromyogram sensors to recognize muscle movement and allows for individual finger control.
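For a flavor of how EMG signals can drive individual fingers, the toy sketch below maps windows of multi-channel electromyogram readings to intended finger movements. The channel count, window length and gesture set are assumptions, not the Georgia Tech team's model.

```python
import torch
import torch.nn as nn

# Hypothetical gesture set; a real prosthesis would use its own classes.
FINGERS = ["thumb", "index", "middle", "ring", "pinky", "rest"]

classifier = nn.Sequential(
    nn.Conv1d(8, 32, kernel_size=5), nn.ReLU(),   # 8 EMG channels
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(32, len(FINGERS)),
)

window = torch.randn(1, 8, 200)             # ~100 ms of EMG at 2 kHz (assumed)
intent = classifier(window).argmax(dim=1)   # predicted finger movement
```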
See the NVIDIA healthcare page for more.