Health care leaders gathered at a recent Newsweek virtual event to address the critical challenge of implementing AI in medicine while maintaining patient safety. As artificial intelligence rapidly advances in health care settings, industry experts emphasized the need for robust governance frameworks, transparent oversight, and proactive quality assurance measures to prevent bias and errors in AI systems that could affect patient outcomes. This evolving landscape requires health organizations to balance innovation with careful risk management as they navigate the complexities of AI deployment in clinical environments.
The big picture: Health care organizations are simultaneously adopting AI tools while creating governance frameworks to ensure these technologies remain safe and effective over time.
- Dr. Brian Anderson, CEO of the Coalition for Health AI, highlighted that unlike traditional medical tools, AI systems change over time, potentially degrading in performance or experiencing drift.
- “We’re building this plane as we’re flying, so there is a real urgency to make sure these models and these tools are safe and that we’re managing them robustly and appropriately,” Anderson explained.
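The drift Anderson describes can be made concrete with a minimal monitoring sketch: compare a deployed model's recent performance against its baseline at validation and flag it for review once degradation exceeds a tolerance. The numbers and thresholds below are illustrative assumptions, not a clinical standard.

```python
# Minimal drift-monitoring sketch. BASELINE_AUROC and TOLERANCE are
# hypothetical values chosen for illustration.
BASELINE_AUROC = 0.87   # performance measured at deployment
TOLERANCE = 0.05        # allowed degradation before human review

def drift_alert(recent_auroc: float) -> bool:
    """True when recent performance has drifted below the tolerance band."""
    return recent_auroc < BASELINE_AUROC - TOLERANCE

print(drift_alert(0.85))  # within tolerance
print(drift_alert(0.79))  # degraded: trigger review
```

Real deployments would track many metrics across patient subgroups, but the core idea is the same: a model validated once is not validated forever.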
Governance approaches: Health care experts advocate for centralized oversight combined with implementation-specific monitoring that leverages local expertise.
- Dr. Michael Pencina of Duke Health emphasized the importance of an “umbrella oversight that sets the standards” alongside implementation-specific monitoring for individual algorithms.
- Dr. Andreea Bodnari, founder of Alignmt.AI, noted that the health care industry has an opportunity to introduce “proactive quality assurance for care delivery” through AI governance.
Transparency initiatives: The Coalition for Health AI promotes “model cards” that function like nutrition labels for AI tools, detailing their composition and performance characteristics.
- These transparency measures aim to build trust with physicians and patients by clearly documenting how AI systems operate and what their limitations might be.
- The approach parallels food product labeling, providing users with standardized information about what’s “inside” the AI tools they’re using.
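As a rough sketch of the "nutrition label" idea, a model card can be represented as a small structured record that renders to a human-readable label. The field names and the example model below are illustrative assumptions, not CHAI's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical model card structure; fields are illustrative only.
@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data_summary: str
    performance: dict                 # metric name -> value on held-out data
    limitations: list = field(default_factory=list)

    def label(self) -> str:
        """Render the card as a short, standardized label."""
        return "\n".join([
            f"Model: {self.name}",
            f"Intended use: {self.intended_use}",
            f"Training data: {self.training_data_summary}",
            "Performance: " + ", ".join(f"{k}={v}" for k, v in self.performance.items()),
            "Limitations: " + "; ".join(self.limitations),
        ])

card = ModelCard(
    name="sepsis-risk-v2",
    intended_use="Early warning for adult inpatient sepsis risk",
    training_data_summary="De-identified EHR records, 2018-2022, single health system",
    performance={"AUROC": 0.87},
    limitations=["Not validated on pediatric patients",
                 "Performance may drift as care patterns change"],
)
print(card.label())
```

The value is less in any particular format than in the standardization: a clinician comparing two tools sees the same fields, in the same order, every time.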
Legal landscape: The regulatory and liability framework for health care AI remains in development, creating uncertainty for organizations implementing these technologies.
- Dr. Danny Tobey, chair of DLA Piper’s AI & Data Analytics Practice, noted that legal answers will emerge through “litigation and regulation and legislation,” creating a period of uncertainty.
- Organizations must navigate this evolving landscape while still moving forward with beneficial AI implementations.
Practical recommendations: Experts offered straightforward advice for health systems beginning their AI governance journey.
- Organizations should start by taking inventory of existing AI solutions already in use across their systems.
- Governance approaches should be appropriately scaled to available resources and risk levels, with more scrutiny allocated to high-risk applications.
- Health systems should avoid overcomplicated governance structures that might impede beneficial innovation.
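The inventory-then-scale advice above can be sketched as a simple data exercise: list the tools in use, assign each a risk tier, and let the tier drive how often it gets re-reviewed. The tool names, tiers, and cadences here are hypothetical placeholders, not a recommended policy.

```python
# Hypothetical AI inventory; entries are illustrative assumptions.
AI_INVENTORY = [
    {"tool": "radiology-triage",      "use": "flags suspected hemorrhage", "risk": "high"},
    {"tool": "note-summarizer",       "use": "drafts visit summaries",     "risk": "medium"},
    {"tool": "scheduling-optimizer",  "use": "fills open clinic slots",    "risk": "low"},
]

# More scrutiny for higher-risk applications: re-review cadence in days.
REVIEW_CADENCE_DAYS = {"high": 30, "medium": 90, "low": 365}

def review_plan(inventory):
    """Map each inventoried tool to how often it should be re-validated."""
    return {item["tool"]: REVIEW_CADENCE_DAYS[item["risk"]]
            for item in inventory}

print(review_plan(AI_INVENTORY))
```

Keeping the structure this simple is itself aligned with the experts' advice: a spreadsheet-grade inventory with risk tiers beats an elaborate governance apparatus that no one maintains.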