AI just as effective as clinicians in diagnostics, study suggests

Rebecca Pifer for Healthcare Dive

Dive Brief:

  • Artificial intelligence can detect diseases from medical imaging with the same accuracy as healthcare professionals, according to a systematic review of 82 studies published Tuesday in The Lancet Digital Health.
  • However, AI didn’t outperform human diagnosis. In the small subset of studies that compared AI and clinician diagnostic accuracy head to head, deep learning algorithms correctly detected disease in 87% of cases, versus 86% for clinicians. AI and healthcare professionals were also similarly good at correctly clearing healthy medical images, at 93% and 91%, respectively. (In effect, these are measures of sensitivity and specificity; a short sketch of both follows these bullets.)
  • But due to a lack of comprehensive studies directly comparing the performance of humans and machines or reviewing AI in a real clinical environment, the diagnostic potential of deep learning — the use of algorithms trained to detect patterns in unstructured data such as medical images — remains uncertain, researchers determined.
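For readers unfamiliar with the terms, the 87%/86% figures describe sensitivity (the share of diseased cases a reader correctly flags) and the 93%/91% figures describe specificity (the share of healthy images correctly cleared). The sketch below shows how the two are computed; the counts are made up for illustration and are not taken from the Lancet review.

```python
# Minimal sketch of how sensitivity and specificity are computed.
# The counts below are illustrative only, not the study's data.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Share of diseased cases the reader correctly flags."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    """Share of healthy images the reader correctly clears."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical results for an algorithm reading 1,000 scans:
# 500 scans show disease, 500 are healthy.
tp, fn = 435, 65    # diseased scans: 435 caught, 65 missed
tn, fp = 465, 35    # healthy scans: 465 cleared, 35 flagged in error

print(f"Sensitivity: {sensitivity(tp, fn):.0%}")  # 87%
print(f"Specificity: {specificity(tn, fp):.0%}")  # 93%
```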

Dive Insight:

Artificial intelligence has the potential to be deeply disruptive across the healthcare sector, especially in cutting down administrative waste, streamlining billing, and improving patient matching and population health management. Tech giants like Amazon, Google and Intel are leveraging their hefty AI capabilities as they move into healthcare, while providers and payers grow more open to the technology.

However, AI’s value-add in diagnostics, a realm dogged by variability, remains unproven, though much hyped among investors and the public.

University Hospitals Birmingham NHS researchers vetted more than 20,500 articles published between 2012 and 2019, but ended up including fewer than 1% of them in their meta-analysis. The included studies spanned breast cancer, orthopaedic trauma, respiratory disease, cardiology, facial surgery and more.

Of the 82 articles researchers looked at, only 25 validated the AI models externally, using medical images from a different population, and just 14 directly compared clinician and AI diagnostic performance on the same data.
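External validation means testing a model on images from a population it never saw during development, rather than on a held-out slice of the same dataset. A minimal sketch of that distinction is below; it uses scikit-learn and synthetic feature vectors in place of real medical images, so the names and numbers are assumptions for illustration, not the study's method.

```python
# Illustrative sketch of internal vs. external validation, with synthetic
# feature vectors standing in for medical images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_population(n: int, shift: float):
    """Synthetic 'population' whose feature distribution is offset by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > shift * 1.5).astype(int)
    return X, y

# Internal data: the population the model was developed on.
X_int, y_int = make_population(2000, shift=0.0)
X_train, X_test, y_train, y_test = train_test_split(
    X_int, y_int, test_size=0.25, random_state=0)

# External data: a different population with a shifted case mix.
X_ext, y_ext = make_population(1000, shift=0.7)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Internal validation: a held-out slice of the same population.
print("internal AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
# External validation: images from a population the model never saw.
print("external AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```

Performance typically drops on the external set, which is why the review treated external validation as a marker of study quality.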

“Within those handful of high-quality studies, we found that deep learning could indeed detect diseases ranging from cancers to eye diseases as accurately as health professionals,” said Alastair Denniston, a professor at University Hospitals Birmingham NHS Foundation Trust, who led the research. The Foundation Trust runs four hospitals in and around Birmingham, England.

The Food and Drug Administration has approved more than 30 AI algorithms for use in healthcare to date. They run the gamut from Imagen OsteoDetect, which identifies wrist fractures in bone images, to IDx-DR, which detects diabetic retinopathy in eye scans, to Viz.AI Contact, which discerns signs of stroke in CT scans.

Radiology and image analysis are the most logical areas of application for diagnostic AI, experts say, as machine learning algorithms can readily be trained on the reams of available data. Of the more than 100 medical imaging AI startups in 2018, the majority focused on image analysis, according to Frost & Sullivan, a healthcare consultancy.

The need is there. The volume of medical images generated globally is beginning to outpace the capacity of available specialists to read them, especially in low- and middle-income countries. The AI-based medical imaging market is on track to reach $2 billion worldwide by 2023 as companies invest to keep pace with that growth.

But experts maintain AI should be thought of as a tool for doctors and not a doctor itself.

“I think really where this is all going is that this is a tool that levels up and lifts the decision support that’s already in systems for clinicians and the radiologist,” Intel’s general manager of health and life sciences David Ryan recently told Healthcare Dive. “It’s really an assistive device.”

And, given a number of limitations in the meta-analysis published Tuesday, researchers cautioned against drawing strong conclusions about AI’s usefulness as an autonomous diagnostic tool.

“Perhaps the better conclusion is that, in the narrow public body of work comparing AI to human physicians, AI is no worse than humans,” University of Pennsylvania radiology professor Tessa Cook wrote in a comment on the study. “But the data are sparse and it may be too soon to tell.”