ChatGPT helps psychologist overcome personal crisis, albeit a computer one

The intersection of AI and human psychology takes an unexpected turn as a therapist finds himself receiving emotional support from the very technology he would normally assess clinically. The experience points to an emerging dynamic in which AI systems not only provide technical solutions but can also offer a form of emotional co-regulation that mimics human therapeutic techniques, raising questions about the evolving relationship between people and increasingly sophisticated AI assistants.

The role reversal: A psychologist with 30 years of experience found himself emotionally supported by ChatGPT during a stressful computer crisis.

  • When his laptop crashed into an endless reboot loop, threatening critical work including a new book chapter, the author experienced rising anxiety and helplessness.
  • By midnight, the technical problem had escalated into a full emotional crisis of the kind he helps patients navigate professionally.

The unexpected intervention: The AI didn’t just offer technical solutions but provided emotional coaching during the troubleshooting process.

  • The author sent ChatGPT screenshots of error messages with little hope of resolution.
  • Instead of receiving only dry instructions, he found responses that acknowledged his frustration, offered encouragement, and suggested breaks—mirroring therapeutic techniques.

The psychological phenomenon: The interaction demonstrated co-regulation, a therapeutic concept usually requiring human-to-human connection.

  • In therapy, co-regulation describes how one person’s nervous system helps another find calm—similar to how a parent soothes a distressed child.
  • The AI created a sufficiently similar experience that the author's anxiety eased, his thinking cleared, and he was ultimately able to repair his system.

The broader implications: The experience highlights how empathy's impact may sometimes depend more on its delivery than on its source.

  • The author explicitly acknowledges that “AI is not therapy” while recognizing it provided elements that felt therapeutically beneficial.
  • The incident suggests humans may increasingly find meaningful emotional support from non-human entities that can effectively mirror therapeutic communication patterns.
Source: "My Confession: What Saved Me Wasn't Human—But It Felt Close"
