AI therapists raise questions of privacy, safety in mental health care

AI’s role in psychology has progressed from diagnostic applications to therapeutic uses, raising fundamental questions about the technology’s place in mental healthcare. Psychologists have been exploring AI applications since 2017, with early successes in predicting conditions such as bipolar disorder and future substance abuse, but today’s concerns center on more complex issues: privacy, bias, and the irreplaceable human elements of the therapeutic relationship.

The big picture: AI’s entry into psychology began with diagnosis and prediction but now confronts the more nuanced challenge of providing therapy, with experts warning about significant ethical concerns.

  • Early AI applications showed promising results, with one system predicting future binge drinking in adolescents with over 70% accuracy based on brain scans.
  • A separate algorithm successfully identified patients with bipolar disorder from a dataset of 5,000 people who provided diagnostic interviews, questionnaires, and blood samples.

Why this matters: The integration of AI into mental healthcare raises fundamental questions about data privacy, algorithmic bias, and the essential human elements that make therapy effective.

  • If AI becomes your therapist, it is unclear whether your disclosures will remain confidential or whether the company behind the tool might use that personal material to enhance its datasets.
  • Mental health applications of AI demonstrate what one expert calls “disconcerting levels of bias” in machine decision-making, potentially incorporating harmful assumptions into therapeutic interactions.

Between the lines: Even as AI capabilities advance, the technology appears unable to replicate core components of effective therapy, particularly authentic human connection.

  • Human therapists remain essential due to their capacity for genuine empathy, emotional connection, and nuanced understanding.
  • These limitations suggest AI may find a role as a supplemental tool rather than a replacement for human practitioners in mental healthcare.

Historical context: Concerns about AI in psychology predate today’s advanced language models, with ethicists raising alarms several years before ChatGPT’s 2022 arrival.

  • AI ethics researcher Fiona McEvoy warned in 2020 that “as consumers, we don’t know what we don’t know, and therefore it’s almost impossible to make a truly informed decision” about AI in mental healthcare.