Stanford, CMU and Georgia Tech Develop AI Model for Mental Health Support

AI-powered peer counselor training: A collaborative effort between Stanford, Carnegie Mellon, and Georgia Tech has developed an AI model to provide feedback and improve the skills of novice peer counselors in emotional support conversations.

  • The project, described in a paper accepted to the 2024 Association for Computational Linguistics (ACL) conference, aims to address the growing demand for mental health support and the challenge of preparing peer counselors for their roles.
  • Interdisciplinary collaboration between computer scientists and psychologists was crucial in developing this AI-assisted training model, combining expertise in both AI and counseling intervention skills.

Developing a feedback framework: The research team worked with psychotherapists to create a practical blueprint for providing helpful feedback to peer counselors.

  • The framework includes three key components: assessing the counselor’s understanding of the conversation, suggesting improvements to their response, and providing a specific suggested response aligned with the conversation’s goals.
  • The model can also offer positive reinforcement for good responses, ensuring a balanced approach to feedback.
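The three-part framework described above (plus the optional reinforcement component) can be pictured as a simple structured record. This is a hedged sketch; the field names and types are assumptions for illustration, not the authors' actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CounselorFeedback:
    """Hypothetical container for the framework's three feedback components."""
    understanding: str          # assessment of the counselor's grasp of the conversation
    improvement: str            # how the counselor's response could be improved
    suggested_response: str     # a concrete alternative aligned with the conversation's goals
    reinforcement: Optional[str] = None  # optional praise when the response is already good

# Example feedback a trained model might produce (illustrative content only)
fb = CounselorFeedback(
    understanding="You correctly identified the seeker's anxiety about exams.",
    improvement="Try reflecting the feeling before offering advice.",
    suggested_response="It sounds like the upcoming exam is weighing on you a lot.",
)
print(fb.suggested_response)
```

Structuring feedback this way keeps each of the framework's components separately inspectable, which matters for the self-checking step discussed later in the article.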

Creating a high-quality dataset: Researchers collected and annotated a dataset of feedback from 400 emotional support conversations to train the AI model.

  • The dataset was co-annotated using GPT-4 for initial drafts and domain experts for final edits, ensuring high-quality ground truth for fine-tuning the model.
  • This approach aimed to minimize poor feedback and create a reliable foundation for the AI model’s training.
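The two-stage co-annotation workflow (LLM draft, then expert revision) can be sketched as a small pipeline. The functions below are hypothetical placeholders, not the researchers' code: in the real pipeline the drafting step would call GPT-4 and the editing step would be done by human domain experts.

```python
def draft_with_llm(conversation: str) -> str:
    """Placeholder for the GPT-4 drafting step (assumed interface)."""
    return f"DRAFT feedback for: {conversation}"

def expert_edit(draft: str) -> str:
    """Placeholder for the domain-expert revision that produces ground truth."""
    return draft.replace("DRAFT", "FINAL")

def build_dataset(conversations: list[str]) -> list[str]:
    # Each conversation yields one expert-verified feedback annotation.
    return [expert_edit(draft_with_llm(c)) for c in conversations]

dataset = build_dataset(["seeker worried about job loss"])
print(dataset[0])  # → "FINAL feedback for: seeker worried about job loss"
```

The design choice here is that the LLM only accelerates annotation; a human expert always has the final word, which is what gives the fine-tuning data its reliability.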

Innovative self-checking process: The researchers implemented a unique self-checking mechanism to ensure the quality of the AI-generated feedback.

  • The model feeds its proposed feedback into its own framework to confirm alignment with conversation goals, effectively double-checking itself to mitigate the risk of poor advice.
  • Human experts reviewed the model’s output and confirmed its value as a tool for coaching peer counselors with limited formal training.
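A generate-then-verify loop of this kind can be sketched as follows. This is an assumed control flow, not the paper's implementation: `generate` and `check_alignment` stand in for the model producing feedback and then re-evaluating that feedback against the conversation's goals, with regeneration on failure.

```python
from typing import Optional

def generate(conversation: str, attempt: int) -> str:
    """Stand-in for the model proposing feedback (attempt-numbered for the demo)."""
    return f"feedback v{attempt} for {conversation}"

def check_alignment(conversation: str, feedback: str) -> bool:
    """Stand-in for the model re-scoring its own feedback against the goals.
    Here we pretend only the second attempt passes the check."""
    return "v2" in feedback

def self_checked_feedback(conversation: str, max_attempts: int = 3) -> Optional[str]:
    for attempt in range(1, max_attempts + 1):
        fb = generate(conversation, attempt)
        if check_alignment(conversation, fb):
            return fb
    return None  # no attempt passed; defer to human review

print(self_checked_feedback("exam stress"))  # → "feedback v2 for exam stress"
```

Returning `None` when every attempt fails the check reflects the mitigation described above: rather than surface unvetted advice, the system can withhold feedback and escalate to a human.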

Potential applications: The AI feedback model shows promise in various training environments, particularly where direct supervision may be limited.

  • In educational settings, the model could complement instructor feedback by offering detailed reminders of conversation goals and suggestions for improvement.
  • The researchers envision a “safe sandbox” environment where novice counselors can practice with AI patients and receive feedback without privacy concerns.
  • This approach allows counselors to experiment, make mistakes, and gain valuable experience before working with real people in need.

Scaling up support: The ultimate goal is to make this AI tool widely available as an additional learning resource for peer counselor training.

  • The researchers emphasize that the model is not intended to replace clinical supervision but rather to complement existing training processes.
  • By providing both a pedagogical and practical tool, the AI model can support organizations that lack sufficient instructors to offer comprehensive feedback to their counselors.

Broader implications: The development of AI-assisted peer counselor training represents a significant step forward in addressing mental health support challenges.

  • This innovative approach has the potential to improve the quality and accessibility of emotional support services by enhancing the skills of peer counselors.
  • As the demand for mental health support continues to grow, AI-powered training tools may play an increasingly important role in preparing volunteers to provide effective assistance to those in need.
