AI risks to patients prompt researchers to urge medical caution

AI-driven healthcare prediction models risk creating harmful “self-fulfilling prophecies” when accuracy is prioritized over patient outcomes, according to new research from the Netherlands. The study reveals that even highly accurate AI systems can inadvertently worsen health disparities if they’re trained on data reflecting historical treatment biases, potentially leading to reduced care for already marginalized patients. This warning comes at a critical time as the NHS increasingly adopts AI for diagnostics and as the UK government pursues its “AI superpower” ambitions.

The big picture: Researchers demonstrate that AI outcome prediction models (OPMs) can lead to patient harm even when they achieve high accuracy scores after deployment.

  • OPMs use patient health histories and lifestyle information to help clinicians evaluate treatment options, but mathematical modeling shows they can reinforce existing healthcare disparities.
  • The study, published in data-science journal Patterns, calls for a fundamental shift in AI healthcare development priorities, moving away from predictive performance toward improvements in treatment approaches and patient outcomes.

Real-world implications: Professor Ewen Harrison illustrated the potential harms with a practical example of how prediction can become self-fulfilling.

  • An AI system predicting poor recovery prospects for certain patients might lead clinicians to provide less rehabilitation support, ultimately causing “a slower recovery, more pain and reduced mobility.”
  • This feedback loop could particularly impact patients from groups that have historically received inequitable healthcare based on race, gender, or socioeconomic factors.
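This feedback loop can be sketched in a toy simulation. The numbers below are entirely hypothetical, chosen only to illustrate the mechanism the researchers describe: if a model's "poor recovery" prediction leads clinicians to allocate less rehabilitation, recovery rates fall for the flagged group, which makes the prediction look accurate after the fact.

```python
import random

random.seed(0)

def simulate(opm_deployed: bool, n: int = 10_000) -> float:
    """Toy model of a self-fulfilling outcome prediction model (OPM).

    Hypothetical setup: 30% of patients belong to a historically
    under-treated group with the SAME baseline recovery potential.
    A biased OPM predicts poor recovery for that group only.
    """
    recoveries = 0
    for _ in range(n):
        disadvantaged = random.random() < 0.3
        # The (biased) model flags only the disadvantaged group.
        predicted_poor = disadvantaged if opm_deployed else False
        # Clinicians allocate less rehab when poor recovery is predicted.
        rehab_intensity = 0.4 if predicted_poor else 1.0
        # True recovery depends on rehab received, not group membership.
        if random.random() < 0.5 + 0.4 * rehab_intensity:
            recoveries += 1
    return recoveries / n

baseline = simulate(opm_deployed=False)
with_opm = simulate(opm_deployed=True)
print(f"recovery rate without OPM: {baseline:.2f}, with OPM: {with_opm:.2f}")
```

In this sketch the flagged group's lower recovery rate is caused entirely by the reduced care the prediction triggered, yet a post-deployment accuracy audit would score the model well, which is the self-fulfilling prophecy the study warns about.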

Why human oversight matters: The research emphasizes that human clinical judgment remains essential when implementing AI-driven healthcare systems.

  • Researchers highlighted the “inherent importance” of applying “human reasoning” to algorithmic predictions to prevent reinforcing biases.
  • Dr. Catherine Menon warned that without proper oversight, these models risk “worsening outcomes for patients who have typically been historically discriminated against in medical settings.”

Current applications: AI is already being used throughout England’s National Health Service for various diagnostic functions.

  • The technology currently assists clinicians in reading X-rays and CT scans and helps accelerate stroke diagnoses.
  • Prime Minister Sir Keir Starmer has positioned AI as a potential solution to NHS waiting lists as part of his broader vision to establish the UK as an “AI superpower.”
