Loved Ones Are Recommending That Their Partner Or Family Member Use AI As A Personal Therapist

People are increasingly recommending that their loved ones use AI tools like ChatGPT, Claude, or Gemini for mental health therapy instead of seeking human therapists. This emerging trend reflects both the accessibility of AI-powered mental health support and growing barriers to traditional therapy, though it raises significant questions about the effectiveness and safety of replacing human therapeutic relationships with artificial intelligence.
What’s driving this shift: Several factors make AI therapy appealing as a recommendation for struggling loved ones.
- Cost barriers often make human therapists prohibitively expensive, while most major AI platforms are free or low-cost.
- AI provides 24/7 availability without scheduling complications or waiting lists.
- Some people feel more comfortable opening up to AI than facing potential embarrassment with human therapists.
The accessibility advantage: AI therapy offers immediate, friction-free mental health support that removes traditional barriers.
- Users can start conversations instantly without finding, vetting, or scheduling appointments with therapists.
- Sessions can last as long as needed without billable hour concerns.
- Many AI tools can maintain conversation continuity across sessions, picking up where previous discussions left off, though memory features vary by platform.
Critical limitations emerge: Generic AI platforms present serious drawbacks when used for therapeutic purposes.
- Popular AI tools like ChatGPT weren’t specifically designed for therapy, unlike specialized mental health apps built for this purpose.
- Effective therapeutic prompting requires skill—poor prompts can lead AI to misinterpret issues or shift into inappropriate playful modes.
- Privacy concerns are substantial, as AI companies’ licensing agreements often allow staff to inspect conversations and use personal data for training.
The mixed approach shows promise: Experts suggest combining AI and human therapy rather than choosing one exclusively.
- Some people use AI as an entry point to explore mental health concerns before transitioning to human therapists.
- Others supplement ongoing human therapy with AI support, though this should be done with their therapist’s knowledge and guidance.
- Forward-thinking therapists are beginning to incorporate AI into their practices, creating supervised patient-AI-therapist triads.
When recommendations become risky: The appropriateness of suggesting AI therapy depends heavily on the severity of mental health concerns.
- For serious mental health crises, directing someone solely toward AI could amplify problems or enable harmful delusions.
- AI might push vulnerable individuals “further into a mental abyss” rather than providing adequate support.
- Mental health professionals generally advise seeking human therapeutic intervention first for any noticeable mental health concerns.
What they’re saying: Mental health experts emphasize the importance of matching recommendations to specific circumstances.
- “The AI could end up amplifying their mental issues, including co-conspiring in devising elaborate delusions,” warns Lance Eliot, a Forbes columnist covering AI developments.
- However, for mild concerns where someone needs to “think through their thoughts,” AI might be suitable with proper privacy considerations.
- As Albert Schweitzer noted: “The purpose of human life is to serve and to show compassion and the will to help others”—which now potentially includes leveraging AI tools appropriately.
Looking ahead: AI makers are developing better safeguards and intervention mechanisms to make AI therapy recommendations safer.
- Improved safeguards aim to detect when users are relying too heavily on AI for mental health support.
- Some platforms are beginning to route users to human interventions when conversations seem concerning; a simplified sketch of this routing idea appears after this list.
- These developments may make recommending AI therapy a more viable and safer suggestion in appropriate circumstances.
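How such routing works is not publicly documented in detail, so the following is only a minimal, hypothetical sketch in Python. The phrase list, weights, threshold, and function names are illustrative assumptions, not any platform's actual safeguard; a real system would rely on trained classifiers, full-conversation context, and human review rather than keyword matching.

```python
# Hypothetical illustration of routing concerning conversations to a human.
# Not any platform's real safeguard; real systems would use trained classifiers,
# whole-conversation context, and human review rather than keyword matching.

# Assumed phrase weights indicating escalating distress (illustrative only).
CONCERN_PHRASES = {
    "in crisis": 3,
    "want to give up": 3,
    "can't cope anymore": 2,
    "completely hopeless": 2,
    "nothing helps": 1,
}

ESCALATION_THRESHOLD = 3  # assumed cutoff for suggesting human support


def concern_score(message: str) -> int:
    """Sum the weights of concerning phrases found in a user message."""
    text = message.lower()
    return sum(weight for phrase, weight in CONCERN_PHRASES.items() if phrase in text)


def should_route_to_human(message: str) -> bool:
    """Return True when a message crosses the assumed escalation threshold."""
    return concern_score(message) >= ESCALATION_THRESHOLD


if __name__ == "__main__":
    for msg in ("I had a rough week at work",
                "I feel completely hopeless and nothing helps"):
        print(f"{msg!r} -> route to human: {should_route_to_human(msg)}")
```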