Emergency medical services face a pivotal moment as artificial intelligence transforms everything from patient care to administrative workflows. At the California Ambulance Association Annual Conference in Monterey, six industry experts gathered for an unconventional panel discussion that revealed both the promise and perils of AI adoption in emergency healthcare.
The session, dubbed “Six Experts – One Weird AI Showdown,” featured a unique format: no sales pitches or product demonstrations, just rapid-fire insights delivered in two-minute bursts after panelists buzzed in to speak. The diverse group included Brendan Cameron from ABC, Christian Carrasquillo from Fast Medical AI, Dave O’Rielly from Traumasoft, Nidhish Dhru from Huly, Jonathan Feit from Beyond Lucid Technologies, and Mike Taigman from FirstWatch, a healthcare analytics company.
Despite the playful format, serious themes emerged about how emergency medical services—the ambulance crews, paramedics, and emergency medical technicians who provide pre-hospital care—should navigate AI adoption responsibly.
The readiness imperative
The panel’s most urgent message centered on organizational preparedness. Emergency medical service agencies must develop AI capabilities through internal expertise, skilled hiring, or trusted consultants who can evaluate solutions for their specific operational needs.
Without these safeguards, some organizations risk being left behind as competitors turn AI into an operational advantage. Jonathan Feit emphasized this point by noting the recent creation of a federal Chief AI Officer role, highlighting how even government agencies recognize the need for specialized AI oversight.
The challenge extends beyond basic technical understanding. Risk and compliance managers in many EMS organizations lack the specialized knowledge to properly evaluate AI adoption strategies. As one panelist noted, this responsibility shouldn’t fall to “the best video game player who happens to be a medic.”
Job displacement fears and workforce reality
The most heated exchanges focused on whether AI would eliminate jobs in emergency medical services. Panelists offered contrasting perspectives on this fundamental concern.
Nidhish Dhru delivered a stark warning about manual tasks: “Anything you do manually today—scanning, attaching, pushing paper—that job is gone. Not today, maybe tomorrow, but gone.” This applies particularly to administrative roles involving billing, data entry, and document processing that characterize much of EMS operations.
However, other experts argued that chronic understaffing in emergency medical services means AI will likely reallocate human resources rather than eliminate positions entirely. The technology could free up personnel to focus on clinical care and patient interaction—areas where human judgment and empathy remain irreplaceable.
Brendan Cameron reframed the job threat: “You won’t lose your job to AI. You’ll lose it to someone who knows how to use AI.” This perspective suggests that AI literacy, rather than AI itself, represents the real competitive differentiator for individual careers and organizations.
Implementation timeline and realistic expectations
When pressed for specific timelines, the experts agreed on a measured rollout across different timeframes. Minimal changes are expected within the next year as organizations focus on foundational planning and pilot programs.
The three-year horizon shows more promise for meaningful transformation. Administrative processes and continuous quality improvement—the systematic approach healthcare organizations use to enhance patient care and operational efficiency—may see significant AI augmentation during this period.
By the five-year mark, panelists predicted broader systemic changes throughout emergency medical services, though none forecast the emergence of "superintelligence" or fully autonomous systems in emergency care.
Jonathan Feit offered a contrarian perspective on AI’s impact, suggesting that misuse of AI tools might actually increase EMS workload: “People are already ingesting things because ‘ChatGPT said so.’ That’s job security.” This refers to situations where patients follow AI-generated medical advice that leads to emergency situations requiring professional intervention.
Six practical AI applications for emergency services
The discussion identified specific areas where AI can provide immediate value to emergency medical services:
Clinical decision support for differential diagnosis involves AI systems that help emergency responders identify potential medical conditions by analyzing patient symptoms, vital signs, and medical history. This technology can suggest possible diagnoses for paramedics to consider, a capability particularly valuable in complex cases or when treating unfamiliar conditions.
Revenue capture through intelligent billing uses AI to automatically identify billable services and ensure proper coding for insurance reimbursement. Emergency medical services often lose significant revenue due to incomplete or incorrect billing documentation, making this a high-impact application.
Data quality monitoring and gap identification employs AI to review patient care reports and operational data, flagging inconsistencies, missing information, or documentation errors that could impact patient care or regulatory compliance.
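The gap-identification idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's product: the field names (`incident_id`, `chief_complaint`, and so on) and the plausibility range for heart rate are hypothetical stand-ins, not drawn from a real ePCR standard.

```python
# Minimal sketch of automated gap-flagging for a patient care report.
# Field names and thresholds are illustrative assumptions only.

REQUIRED_FIELDS = ["incident_id", "chief_complaint", "vitals", "disposition"]

def flag_gaps(report: dict) -> list[str]:
    """Return human-readable flags for missing or implausible entries."""
    flags = [f"missing field: {f}" for f in REQUIRED_FIELDS if not report.get(f)]
    vitals = report.get("vitals") or {}
    hr = vitals.get("heart_rate")
    if hr is not None and not 20 <= hr <= 250:
        flags.append(f"implausible heart rate: {hr}")
    return flags

# An incomplete report gets flagged for human review rather than auto-corrected.
report = {"incident_id": "2024-0042", "vitals": {"heart_rate": 300}}
print(flag_gaps(report))
```

The point of a tool like this is triage, not judgment: the AI surfaces the gap, and a quality-improvement reviewer decides what it means.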
Process automation for repetitive administrative tasks eliminates manual work like scheduling, inventory management, and routine reporting, allowing staff to focus on patient care rather than paperwork.
Organizational AI governance councils provide structured oversight for AI adoption, ensuring implementations align with clinical standards, regulatory requirements, and organizational goals while managing associated risks.
Real-time patient-specific insights surface critical information like advance directives—legal documents specifying a patient’s healthcare preferences—or special needs alerts such as autism spectrum disorder considerations that help responders provide more appropriate care.
Security vulnerabilities and compliance challenges
The panel identified cybersecurity as a more immediate threat than AI autonomy concerns. Mike Taigman dismissed fears about AI systems escaping human control, emphasizing that malicious actors using AI to breach healthcare data systems pose the real danger.
Christian Carrasquillo delivered a sobering warning about current practices: “HIPAA violations from AI aren’t an ‘if.’ They’re a ‘when’—and it’s probably already happened.” HIPAA, the Health Insurance Portability and Accountability Act, establishes strict privacy protections for patient health information.
Many healthcare providers are unknowingly copying patient data into free AI tools without realizing this information remains stored on external servers, potentially violating federal privacy laws. This practice creates significant legal and financial risks for emergency medical service organizations.
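One mitigation the practice above points toward is scrubbing obvious identifiers before any text leaves the organization. The sketch below is illustrative only and assumes a simple regex pass; real HIPAA de-identification covers eighteen identifier categories and requires far more than pattern matching, so this shows the shape of a pre-submission scrub step, not a compliant solution.

```python
import re

# Illustrative only: masks a few obvious identifiers (SSNs, phone numbers,
# dates) before text is sent to any external tool. Not HIPAA-compliant
# de-identification by itself.

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt DOB 04/12/1957, callback 555-867-5309, SSN 123-45-6789."
print(scrub(note))
# -> Pt DOB [DATE], callback [PHONE], SSN [SSN].
```

Even a crude filter like this makes the compliance question visible to staff; the safer organizational answer remains keeping patient data out of free consumer AI tools entirely.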
The legal framework surrounding AI liability remains underdeveloped, leaving it unclear whether responsibility for errors rests with technology vendors, individual medical professionals, or healthcare organizations. This uncertainty complicates risk management and insurance considerations for EMS agencies considering AI adoption.
Accuracy requirements and human oversight
The discussion revealed nuanced thinking about AI accuracy standards in emergency medical care. While billing mistakes can be corrected retroactively, clinical decisions involving life-and-death situations demand much higher reliability thresholds.
Jonathan Feit emphasized this critical distinction: “There are parts of our profession that have a zero margin for error. There is no option. You can’t kill them again.” This reality requires context-specific accuracy standards and mandatory human oversight for high-stakes clinical applications.
Interestingly, several panelists suggested that AI-generated patient care documentation might actually improve upon current standards. Mike Taigman noted that AI narratives are “unlikely to be worse” than existing EMS documentation, which often suffers from inconsistency and incomplete information.
Expert concerns and industry challenges
The panel concluded by sharing their deepest concerns about AI adoption in emergency medical services. These worries extend beyond technical implementation to fundamental industry dynamics.
Several experts expressed concern about power concentration among major technology companies that control AI development and data access. This consolidation could limit healthcare organizations’ autonomy and increase dependence on external providers for critical operational functions.
Accountability dilution represents another significant risk, as individuals and organizations might defer responsibility to AI systems with phrases like “it was the AI.” This attitude could undermine the professional judgment and personal responsibility that define quality emergency medical care.
Premature adoption without proper expertise poses immediate dangers. Some EMS agencies are already using AI models to guide staffing decisions without understanding the technology’s limitations or potential biases, potentially compromising patient care and operational effectiveness.
The industry’s tendency to chase emerging technologies before mastering existing tools also drew criticism. As Brendan Cameron observed: “We all want the slam dunk with AI, but in EMS, we haven’t even learned to dribble, pass or make the layup with the tech we already have.”
Strategic recommendations for EMS organizations
The expert panel delivered clear guidance for emergency medical service organizations considering AI adoption. Rather than waiting for technology vendors to define possibilities, agencies should proactively develop internal capabilities and governance structures.
Organizations should invest in training quality-assurance and continuous-improvement staff to understand AI capabilities and limitations. Designating dedicated AI leadership ensures adoption proceeds safely, compliantly, and beneficially rather than haphazardly.
Policy development, staff training, validation procedures, and compliance frameworks cannot wait for AI technology to mature. These foundational elements require immediate attention as AI tools are already available and evolving rapidly, often outpacing regulatory frameworks and organizational comfort levels.
The panel’s consensus was clear: artificial intelligence will transform emergency medical services, but success depends on whether individual organizations and the broader industry steer that transformation proactively or simply react to changes imposed by external forces. The time for preparation is now, before AI adoption becomes a competitive necessity rather than a strategic choice.