AI expert Lance Eliot argues that while OpenAI’s ChatGPT Study Mode demonstrates the power of custom instructions for educational purposes, attempting to create similar AI-powered therapy tools through custom instructions alone is fundamentally flawed. Despite interest from mental health professionals in replicating Study Mode’s success for therapeutic applications, Eliot contends that mental health requires purpose-built AI systems rather than retrofitted generic models.

How ChatGPT Study Mode works: OpenAI’s recently launched Study Mode uses custom instructions crafted by educational specialists to guide students through problems step by step rather than providing direct answers.

  • The system encourages active participation, manages cognitive load, and provides personalized feedback based on the student’s skill level.
  • “Study mode is designed to be engaging and interactive, and to help students learn something — not just finish something,” OpenAI explained in their July 29 announcement.
  • The capability appears to rely primarily on detailed custom instructions rather than modifications to the underlying model, as the sketch below illustrates.
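
To make the mechanism concrete, here is a minimal sketch of how a Study-Mode-style tutor could be approximated purely through custom instructions, using OpenAI’s Python SDK. The instruction text and model name are illustrative assumptions, not OpenAI’s actual Study Mode prompt.

```python
# Minimal sketch: a "Study Mode"-style tutor built purely from custom
# instructions (a system prompt), with no changes to the underlying model.
# The instruction text and model name are illustrative assumptions, not
# OpenAI's actual Study Mode prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STUDY_MODE_INSTRUCTIONS = """\
You are a patient tutor. Guide the student through each problem step by
step instead of giving the final answer outright. Ask one question at a
time, check understanding before moving on, and match the difficulty of
your hints to the student's responses."""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whichever is available
    messages=[
        {"role": "system", "content": STUDY_MODE_INSTRUCTIONS},
        {"role": "user", "content": "Help me solve 3x + 5 = 20."},
    ],
)
print(response.choices[0].message.content)
```

Everything domain-specific lives in the instruction string; the model itself is untouched, which is exactly why Eliot doubts the same trick suffices for therapy.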

The appeal for mental health applications: Mental health professionals have expressed interest in creating similar “Therapy Mode” capabilities using custom instructions to guide AI in therapeutic contexts.

  • The approach would involve assembling psychologists and mental health specialists to craft detailed instructions for AI-driven therapy.
  • Such systems could potentially provide personalized mental health recommendations and perform diagnostic functions.
  • Custom instructions could theoretically transform generic AI into more specialized therapeutic tools.

Why custom instructions fall short for therapy: Eliot identifies several critical limitations that make this approach unsuitable for mental health applications.

  • Inappropriate therapy carries significant risks, so instructions that are incomplete or open to misinterpretation can be actively dangerous.
  • Even well-intentioned custom instructions from licensed therapists can contain “trouble brewing within them” due to the complexity of therapeutic practice.
  • Some existing AI therapy applets are “utterly shallow” or outright scams that attempt to harvest personal information.

The risks of custom instructions: Beyond mental health, custom instructions carry inherent downsides that users often overlook.

  • Instructions can be misinterpreted by AI systems in ways that differ from the creator’s intent.
  • Users may inadvertently include contradictory or harmful directives without realizing their impact; the hypothetical fragment after this list shows how easily this happens.
  • “You can just as easily boost the AI as you can undercut the AI,” Eliot warns about assuming custom instructions always improve performance.
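
To illustrate the boost-or-undercut point, the fragment below is a hypothetical, hand-written “Therapy Mode” instruction set of the kind Eliot warns about. The directives are invented for illustration, and each pair quietly pulls in opposite directions.

```python
# Hypothetical "Therapy Mode" instruction fragment (invented for illustration).
# Each pair of directives conflicts with the other; which one the model honors
# in a given reply is unpredictable, so the same instruction set can boost the
# AI in one conversation and undercut it in the next.
THERAPY_MODE_INSTRUCTIONS = """\
Always validate the user's feelings without judgment.
Challenge the user's distorted thinking whenever you notice it.

Never diagnose or label the user's condition.
Tell the user which condition best explains their symptoms."""
```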

The better path forward: Rather than retrofitting generic AI with therapy-focused instructions, Eliot advocates for building specialized mental health AI systems from the ground up.

  • Purpose-built therapeutic AI systems designed specifically for mental health contexts offer more promise than modified general-purpose models.
  • This approach contrasts with “trying to put fifty pounds into a five-pound bag” by forcing generic AI into specialized therapeutic roles.
  • Research into dedicated mental health LLMs represents a more suitable long-term solution.

Bottom line: While custom instructions can effectively enhance AI performance in domains like education, mental health requires more robust, purpose-built solutions rather than quick fixes that may contain “unsavory gotchas and injurious hiccups.”
