Some therapists are secretly using ChatGPT and other AI tools during sessions and in client communications, often without disclosure or consent. Multiple clients have discovered their therapists using AI through technical mishaps or telltale signs in communications, leading to feelings of betrayal and damaged trust in relationships where authenticity is paramount.

What you should know: Several clients have caught their therapists using AI tools in real time during sessions or in email responses.

  • Declan, 31, watched his therapist input his statements into ChatGPT during a video session when screen sharing was accidentally enabled, with the AI providing real-time analysis and suggested responses.
  • Hope, 25, received what appeared to be a thoughtful message about her dog’s death until she noticed the accidentally preserved AI prompt: “Here’s a more human, heartfelt version with a gentle, conversational tone.”
  • Another client suspected AI use in their therapist’s email due to formatting changes, American punctuation style, and line-by-line responses to their original message.

Why this matters: The practice raises serious concerns about patient privacy, trust, and therapeutic effectiveness in a profession built on authentic human connection.

  • Studies show that while people may rate AI-generated therapeutic responses positively when unaware of their origin, suspicion of AI use quickly erodes trust and therapeutic rapport.
  • General-purpose AI tools like ChatGPT are not HIPAA compliant, meaning they do not meet federal health privacy regulations, and they pose significant privacy risks when sensitive patient information is shared with these platforms.
  • “People value authenticity, particularly in psychotherapy,” says Adrian Aguilera, a clinical psychologist at UC Berkeley. “I think [using AI] can feel like, ‘You’re not taking my relationship seriously.’”

The privacy problem: Therapists using mainstream AI tools may be unknowingly violating patient confidentiality and federal health privacy regulations.

  • ChatGPT and similar tools are neither FDA approved nor HIPAA compliant, creating legal and ethical risks when patient information is shared with them.
  • “Sensitive information can often be inferred from seemingly nonsensitive details,” warns Pardis Emami-Naeini, a Duke University computer science professor who studies AI privacy implications.
  • A 2020 hack of Vastaamo, a Finnish mental health company, exposed tens of thousands of therapy records and led to blackmail attempts, demonstrating the catastrophic potential of mental health data breaches.

Professional guidance emerging: Mental health organizations are beginning to address AI use, though clear standards remain limited.

  • The American Counseling Association currently recommends against using AI for mental health diagnosis.
  • Specialized, HIPAA-compliant tools for therapists are emerging from companies like Heidi Health, Upheal, and Lyssn, offering features like AI-assisted note-taking and transcription.
  • Experts emphasize that transparency and patient consent are essential when therapists choose to use AI tools.

What the research shows: Studies reveal mixed results about AI’s effectiveness in therapeutic contexts and the importance of disclosure.

  • A 2025 study found that participants couldn’t distinguish between human and AI therapeutic responses, with AI responses sometimes rated as conforming better to best practices—but only when participants didn’t know AI was involved.
  • Stanford research found that chatbots can fuel delusions and offer harmful validation rather than appropriate therapeutic challenging.
  • Research also indicates AI tools may give overly vague responses and skew toward suggesting cognitive behavioral therapy regardless of individual patient needs.

The burnout context: High levels of therapist burnout may be driving some practitioners toward AI assistance despite the risks.

  • Research by the American Psychological Association in 2023 found elevated burnout levels in the psychology profession, making AI’s efficiency promises particularly appealing.
  • However, experts question whether time savings justify potential harm to the therapeutic relationship and patient trust.
  • “Maybe you’re saving yourself a couple of minutes. But what are you giving away?” asks clinical psychologist Margaret Morris.
