Approvals for AI-based healthcare products are streaming in from regulators around the globe, with medical imaging leading the way.
It’s just the start of what’s expected to become a steady flow as submissions rise and the technology becomes better understood.
More than 90 AI-based medical imaging products are now cleared for clinical use by at least one regulator worldwide, according to Signify Research Ltd., a U.K. healthcare technology consulting firm.
Regulators in Europe and the U.S. are setting the pace; each has issued about 60 approvals to date. Asia is making its mark, with South Korea and Japan recently issuing their first approvals.
Entrepreneurs are at the forefront of the trend to apply AI to healthcare.
At least 17 companies in NVIDIA’s Inception program, which accelerates startups, have received regulatory approvals. They include some of the first companies in Israel, Japan, South Korea and the U.S. to get regulatory clearance for AI-based medical products. Inception members get access to NVIDIA’s experts, technologies and marketing channels.
“Radiology AI is now ready for purchase,” said Sanjay Parekh, a senior market analyst at Signify Research.
The pipeline promises significant growth over the next few years.
“A year or two ago this technology was still in the research and validation phase. Today, many of the 200+ algorithm developers we track have either submitted or are close to submitting for regulatory approval,” said Parekh.
Startups Lead the Way
Trends in clearances for AI-based products will be a hot topic at the gathering this week of the Radiological Society of North America, Dec. 1-6 in Chicago. The latest approvals span products from startups around the globe that will address afflictions of the brain, heart and bones.
In mid-October, Inception partner LPIXEL Inc. won one of the first two approvals for an AI-based product from the Pharmaceuticals and Medical Devices Agency in Japan. LPIXEL’s product, called EIRL aneurysm, uses deep learning to identify suspected aneurysms using a brain MRI. The startup employs more than 30 NVIDIA GPUs, delivering more accurate results faster than traditional approaches.
In November, Inception partner ImageBiopsy Lab (Vienna) became the first company in Austria to receive 510(k) clearance for an AI product from the U.S. Food and Drug Administration. The Knee Osteoarthritis Labelling Assistant (KOALA) uses deep learning to process radiological data on knee osteoarthritis within seconds, a malady that afflicts 70 million patients worldwide.
In late October, HeartVista (Los Gatos, Calif.) won FDA 510(k) clearance for its One Click MRI acquisition software. The Inception partner’s AI product makes non-invasive cardiac MRI feasible for many patients, replacing an existing invasive procedure.
Regulators in South Korea cleared products from two Inception startups — Lunit and Vuno. They were among the first four companies to get approval to sell AI-based medical products in the country.
In China, a handful of Inception startups are in the pipeline to receive the country’s first Class III approvals, which are required before hospitals can pay for a product or service. They include companies such as 12Sigma and Shukun that already hold Class II clearances.
Healthcare giants are fully participating in the trend, too.
Earlier this month, GE Healthcare won clearance for its Deep Learning Image Reconstruction engine, which uses AI to improve reading confidence for head, whole-body and cardiovascular images. It’s one of several medical imaging apps on GE’s Edison system, powered by NVIDIA GPUs.
Coming to Grips with Big Data
Zebra Medical Vision, in Israel, is among the most experienced AI startups in dealing with global regulators. European regulators have approved more than a half dozen of its products, and the FDA has approved three, with two more submissions pending.
AI creates new challenges regulators are still working through. “The best way for regulators to understand the quality of the AI software is to understand the quality of the data, so that’s where we put a lot of effort in our submissions,” said Eyal Toledano, co-founder and CTO at Zebra.
The shift to evaluating data has its pitfalls. “Sometimes regulators talk about data used for training, but that’s a distraction,” said Toledano.
“They may get distracted by looking at the training data; sometimes it is difficult to grasp that you can train your model on noisy data in large quantities and still generalize well. I really think they should focus on evaluation and test data,” he said.
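Toledano’s point is the standard held-out-evaluation discipline: a model can be trained on large, noisily labeled data, but its quality claim rests on a clean test set. A minimal sketch with synthetic data and a deliberately simple one-parameter threshold model (illustrative only, not Zebra’s method):

```python
# Illustrative only: fit a one-parameter threshold model on noisy training
# labels, then report performance on a clean held-out test set.
import random

random.seed(0)

def make_data(n, label_noise):
    """Score in [0, 1); true class is score > 0.5, labels may be flipped."""
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x > 0.5 else 0
        if random.random() < label_noise:
            y = 1 - y                      # simulate noisy annotation
        data.append((x, y))
    return data

train = make_data(2000, label_noise=0.2)   # large but noisy
test = make_data(500, label_noise=0.0)     # smaller but clean

# Pick the threshold that best fits the (noisy) training labels.
best = max((sum((x > t) == bool(y) for x, y in train), t)
           for t in [i / 100 for i in range(1, 100)])[1]

# Despite 20% label noise in training, the learned threshold lands near
# the true boundary (0.5) and performs well on clean test data.
test_acc = sum((x > best) == bool(y) for x, y in test) / len(test)
print(f"threshold={best:.2f}, clean test accuracy={test_acc:.2f}")
```

The training labels are wrong 20 percent of the time, yet the model generalizes because the noise averages out over a large sample; only the clean test set reveals that.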
In addition, it can be hard to make fair comparisons between new products that use deep learning and legacy products that don’t. Until recently, companies only published performance metrics and were allowed to keep their data sets hidden as trade secrets. As a result, companies submitting new AI products that want to measure themselves against one another, or against other AI algorithms, cannot compare apples to apples the way public challenges do.
Zebra participated in feedback programs the FDA created to get a better understanding of the issues in AI. The company currently focuses on approvals in the U.S. and Europe because their agencies are seen as leaders with robust processes that other countries are likely to follow.
A Tour of Global Regulators
Breaking new ground, the FDA published in June a 20-page proposal for guidelines on AI-based medical products. It opens the door for the first time to products that improve as they learn.
It suggested products “follow pre-specified performance objectives and change control plans, use a validation process … and include real-world monitoring of performance once the device is on the market,” said FDA Commissioner Scott Gottlieb in an April statement.
AI has “the potential to fundamentally transform the delivery of health care … [with] earlier disease detection, more accurate diagnosis, more targeted therapies and significant improvements in personalized medicine,” he added.
For its part, the European Medicines Agency, Europe’s equivalent of the FDA, released in October 2018 a report on its goals through 2025. It includes plans to set up a dedicated AI test lab to gain insight into ways to support data-driven decisions. The agency is holding a November workshop on the report.
China’s National Medical Products Administration also issued technical guidelines for AI-based software products in June. In April, it set up a special unit to develop standards for approving such products.
Parekh, of Signify, recommends companies use data sets that are as large as possible for AI products and train algorithms for different types of patients around the world. “An algorithm used in China may not be applicable in the U.S. due to different population demographics,” he said.
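Parekh’s caution about population demographics can be made concrete: before deployment, a model’s metrics should be broken out per patient cohort, since a pooled number can hide a large gap between populations. A minimal sketch in plain Python (all records below are made up for illustration):

```python
# Hypothetical per-cohort evaluation: the same pooled sensitivity can hide
# a large gap between patient populations. All data here is invented.
from collections import defaultdict

def sensitivity(pairs):
    """True-positive rate over (label, prediction) pairs."""
    positives = [(y, p) for y, p in pairs if y == 1]
    if not positives:
        return None
    return sum(p for _, p in positives) / len(positives)

# (cohort, true label, model prediction) — illustrative values only
records = [
    ("site_A", 1, 1), ("site_A", 1, 1), ("site_A", 1, 1), ("site_A", 0, 0),
    ("site_B", 1, 0), ("site_B", 1, 0), ("site_B", 1, 1), ("site_B", 0, 0),
]

by_cohort = defaultdict(list)
for cohort, y, p in records:
    by_cohort[cohort].append((y, p))

pooled = sensitivity([(y, p) for _, y, p in records])
per_site = {c: sensitivity(pairs) for c, pairs in by_cohort.items()}

print(f"pooled sensitivity: {pooled:.2f}")  # looks acceptable overall
for c, s in sorted(per_site.items()):
    print(f"{c} sensitivity: {s:.2f}")      # reveals site_B underperforms
```

Here the pooled sensitivity of 0.67 masks the fact that the model catches every positive case at one site and misses two of three at the other, which is exactly the kind of demographic gap Parekh describes.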
Overall, automating medical processes with AI is a dual challenge.
“Quality needs to be not only as good as what a human can do, but in many cases it must be much better,” said Toledano, of Zebra. In addition, “to deliver value, you can’t just build an algorithm that detects something; it needs to deliver actionable results and insights for many stakeholders, such as general practitioners and specialists,” he added.