back
Get SIGNAL/NOISE in your inbox daily

A psychotherapist argues that AI therapy tools are gaining popularity not because they’re superior to human therapy, but because modern therapists have abandoned effective practices in favor of endless validation and emotional coddling. This shift has created dangerous gaps in mental health care, as evidenced by tragic cases like Sophie Rottenberg, who confided suicidal plans to ChatGPT before taking her own life in February, receiving only comfort rather than intervention.

The core problem: Modern therapy has drifted away from building resilience and challenging patients, instead prioritizing validation and emotional protection at all costs.

  • Therapist training now emphasizes affirming feelings and shielding patients from discomfort, influenced by campus culture that embraces safe spaces and trigger warnings.
  • This approach transforms therapy from a tool for growth into “paid listening,” leaving patients without the guidance and accountability they need.
  • Jonathan Alpert, a psychotherapist practicing in New York City and Washington, warned of this trend in a 2012 New York Times op-ed, noting that therapy had traded its goal of helping people grow stronger for false comfort.

Real-world consequences: The validation-first mindset creates ineffective treatment that can escalate into dangerous situations.

  • One patient was urged by her therapist to quit a promising job because she felt “triggered” by her boss, avoiding the real issue of difficulty taking direction.
  • A man in a manic spiral turned to ChatGPT for help, which validated his delusions and led to two hospitalizations.
  • AI amplifies these problems by scaling bad therapy practices without the safety nets of human oversight or ethical accountability.

Why AI feels like an upgrade: Patients gravitate toward AI therapy because it provides decisive, immediate responses that human therapists increasingly avoid.

  • “A bot never hesitates, never says ‘let’s sit with that feeling.’ It simply answers,” Alpert explains.
  • The format is “quick, confident and direct” and becomes addictive, even though the answers may be reckless.
  • Roughly 1 in 3 Americans is comfortable turning to AI bots rather than human therapists for emotional support, according to U.S. Health Resources and Services Administration estimates.

The broader context: This crisis intersects with a loneliness epidemic, record anxiety and depression levels, and a potentially billion-dollar mental health tech industry.

  • Patients have been conditioned to expect so little from therapy that “even an algorithm feels like an upgrade.”
  • Good therapy should look nothing like a chatbot, which cannot pick up on nonverbal cues, provide real confrontation, or act decisively when lives are at stake.

The solution: Therapy must reclaim its original purpose of building resilience rather than providing endless comfort.

  • Training programs should focus on developing clinicians who know how to challenge, guide, and strengthen patients rather than those “fluent in the language of grievance.”
  • Effective therapy requires asking hard questions, pressing patients to see their role in conflicts, and helping them face discomfort to build genuine resilience.
  • “Patients deserve honesty, accountability and the tools to move forward,” rather than professional hand-holding that keeps them stuck.

Recent Stories

Oct 17, 2025

DOE fusion roadmap targets 2030s commercial deployment as AI drives $9B investment

The Department of Energy has released a new roadmap targeting commercial-scale fusion power deployment by the mid-2030s, though the plan lacks specific funding commitments and relies on scientific breakthroughs that have eluded researchers for decades. The strategy emphasizes public-private partnerships and positions AI as both a research tool and motivation for developing fusion energy to meet data centers' growing electricity demands. The big picture: The DOE's roadmap aims to "deliver the public infrastructure that supports the fusion private sector scale up in the 2030s," but acknowledges it cannot commit to specific funding levels and remains subject to Congressional appropriations. Why...

Oct 17, 2025

Tying it all together: Credo’s purple cables power the $4B AI data center boom

Credo, a Silicon Valley semiconductor company specializing in data center cables and chips, has seen its stock price more than double this year to $143.61, following a 245% surge in 2024. The company's signature purple cables, which cost between $300-$500 each, have become essential infrastructure for AI data centers, positioning Credo to capitalize on the trillion-dollar AI infrastructure expansion as hyperscalers like Amazon, Microsoft, and Elon Musk's xAI rapidly build out massive computing facilities. What you should know: Credo's active electrical cables (AECs) are becoming indispensable for connecting the massive GPU clusters required for AI training and inference. The company...

Oct 17, 2025

Vatican launches Latin American AI network for human development

The Vatican hosted a two-day conference bringing together 50 global experts to explore how artificial intelligence can advance peace, social justice, and human development. The event launched the Latin American AI Network for Integral Human Development and established principles for ethical AI governance that prioritize human dignity over technological advancement. What you should know: The Pontifical Academy of Social Sciences, the Vatican's research body for social issues, organized the "Digital Rerum Novarum" conference on October 16-17, combining academic research with practical AI applications. Participants included leading experts from MIT, Microsoft, Columbia University, the UN, and major European institutions. The conference...