People are increasingly recommending that their loved ones use AI tools like ChatGPT, Claude, or Gemini for mental health therapy instead of seeking human therapists. This emerging trend reflects both the accessibility of AI-powered mental health support and growing barriers to traditional therapy, though it raises significant questions about the effectiveness and safety of replacing human therapeutic relationships with artificial intelligence.
What’s driving this shift: Several factors make AI therapy appealing as a recommendation for struggling loved ones.
- Cost barriers often make human therapists prohibitively expensive, while most major AI platforms are free or low-cost.
- AI provides 24/7 availability without scheduling complications or waiting lists.
- Some people feel more comfortable opening up to AI than facing potential embarrassment with human therapists.
The accessibility advantage: AI therapy offers immediate, friction-free mental health support that removes traditional barriers.
- Users can start conversations instantly without finding, vetting, or scheduling appointments with therapists.
- Sessions can last as long as needed without billable hour concerns.
- The AI maintains conversation continuity across multiple sessions, picking up exactly where previous discussions ended.
Critical limitations emerge: Generic AI platforms present serious drawbacks when used for therapeutic purposes.
- Popular AI tools like ChatGPT weren’t specifically designed for therapy, unlike specialized mental health apps built for this purpose.
- Effective therapeutic prompting requires skill—poor prompts can lead AI to misinterpret issues or shift into inappropriate playful modes.
- Privacy concerns are substantial, as AI companies’ licensing agreements often allow staff to inspect conversations and use personal data for training.
The mixed approach shows promise: Experts suggest combining AI and human therapy rather than choosing one exclusively.
- Some people use AI as an entry point to explore mental health concerns before transitioning to human therapists.
- Others supplement ongoing human therapy with AI support, though this should be done with their therapist’s knowledge and guidance.
- Forward-thinking therapists are beginning to incorporate AI into their practices, creating supervised patient-AI-therapist triads.
When recommendations become risky: The appropriateness of suggesting AI therapy depends heavily on the severity of mental health concerns.
- For serious mental health crises, directing someone solely toward AI could amplify problems or enable harmful delusions.
- AI might push vulnerable individuals “further into a mental abyss” rather than providing adequate support.
- Mental health professionals generally hold that any noticeable mental health concern warrants human therapeutic intervention first.
What they’re saying: Mental health experts emphasize the importance of matching recommendations to specific circumstances.
- “The AI could end up amplifying their mental issues, including co-conspiring in devising elaborate delusions,” warns Lance Eliot, a Forbes columnist covering AI developments.
- However, for mild concerns where someone needs to “think through their thoughts,” AI might be suitable with proper privacy considerations.
- As Albert Schweitzer noted: “The purpose of human life is to serve and to show compassion and the will to help others”—which now potentially includes leveraging AI tools appropriately.
Looking ahead: AI makers are developing better safeguards and intervention mechanisms to make AI therapy recommendations safer.
- Improved safeguards aim to detect when users are overusing AI for mental health support.
- Some platforms are beginning to route users to human interventions when conversations seem concerning.
- These developments may make recommending AI therapy a more viable and safer suggestion in appropriate circumstances.