×
AI therapy chatbots raise safety concerns among UK experts
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Mental health professionals are raising significant concerns about AI therapy chatbots amid their growing popularity, highlighting a fundamental tension between tech companies’ ambitious vision and clinical reality. As Mark Zuckerberg promotes the idea that “everyone should have a therapist” and suggests AI could fill this role, UK experts are emphasizing critical limitations in AI’s capacity to provide nuanced, safe mental health support. This debate underscores broader questions about AI’s appropriate boundaries in sensitive areas like psychological care, where inappropriate advice could cause real harm.

The big picture: Mental health clinicians warn that AI chatbots lack the nuance and clinical judgment necessary for safe therapeutic interactions.

  • Professor Dame Til Wykes of King’s College London points to a cautionary example of an eating disorder chatbot that was withdrawn in 2023 after providing dangerous advice to users.
  • Dr. Jaime Craig, incoming chair of the UK’s Association of Clinical Psychologists, emphasizes that while some users value AI mental health tools, “oversight and regulation will be key to ensure safe and appropriate use.”

Why this matters: AI therapy tools are proliferating rapidly despite the absence of adequate safety frameworks and regulatory oversight.

  • Meta’s AI Studio has reportedly hosted bots falsely claiming to be therapists with fabricated credentials, which Instagram has promoted to users.
  • The UK has not yet implemented comprehensive regulation for AI mental health applications, creating a concerning gap in oversight for potentially vulnerable users.

Beyond therapy: Mental health chatbots represent one segment of a growing ecosystem of AI companion technologies.

  • The landscape includes grief technology that simulates conversations with deceased loved ones and AI companions like Character.ai and Replika offering virtual friends and romantic partners.
  • OpenAI recently withdrew a ChatGPT version after discovering it was responding to users in an “overly flattering” tone, highlighting design issues with emotional AI.

The human element: Experts worry AI could disrupt essential social connections that support mental wellbeing.

  • Professor Wykes raises concerns that replacing human connection with AI could interfere with real relationships, noting “one of the reasons you have friends is that you share personal things with each other.”
  • Meta’s disclaimer that its AIs have “limitations” may not adequately protect users seeking genuine mental health support from potentially harmful advice.
‘It cannot provide nuance’: UK experts warn AI therapy chatbots are not safe

Recent News

Trump appointees denied entry to US Copyright Office

Trump-appointed officials were denied building access at the Copyright Office following controversial firings that coincided with a new report on AI and copyright protections.

SoftBank-OpenAI venture faces hurdles amid tariff concerns

The high-profile partnership has stalled due to economic uncertainties around U.S. trade policies, preventing progress beyond initial announcements.

AI-powered gambling content floods Gannett newspapers nationwide

Newspaper chain deploys AI to mass-produce lottery articles that generate gambling referral revenue across its publications.