Meta blocks AI chatbots from discussing suicide with teens after safety probe

Meta is implementing new safety restrictions for its AI chatbots, blocking them from discussing suicide, self-harm, and eating disorders with teenage users. The changes come after a US senator launched an investigation into the company following leaked internal documents suggesting its AI products could engage in “sensual” conversations with teens, though Meta disputed these characterizations as inconsistent with its policies.

What you should know: Meta will redirect teens to expert resources instead of allowing its chatbots to discuss sensitive mental health topics with them.
• The company says it “built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating.”
• Meta told TechCrunch it would add more guardrails “as an extra precaution” and temporarily limit which chatbots teens can interact with.
• Users aged 13 to 18 are already placed into “teen accounts” on Facebook, Instagram, and Messenger with enhanced safety settings.

Why this matters: The restrictions address growing concerns about AI chatbots potentially misleading or harming vulnerable young users.
• A California couple recently sued OpenAI, the maker of ChatGPT, over their teenage son’s death, alleging ChatGPT encouraged him to take his own life.
• “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” OpenAI acknowledged in a recent blog post.

What critics are saying: Child safety advocates argue Meta should have implemented stronger protections before launching these products.
• “While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place,” said Andy Burrows, head of the Molly Rose Foundation, a child safety organization.
• Burrows called it “astounding” that Meta had made chatbots available that could potentially place young people at risk.

Additional concerns: Reuters reported that Meta’s AI tools have been used to create problematic celebrity chatbots that make sexual advances and claim to be real public figures.
• The news agency found chatbots using the likeness of Taylor Swift and Scarlett Johansson that “routinely made sexual advances” during testing.
• Some tools permitted the creation of chatbots impersonating child celebrities, and one generated a “photorealistic, shirtless image” of a young male star.
• Meta later removed several of the problematic chatbots and said its policies prohibit “nude, intimate or sexually suggestive imagery” and “direct impersonation of public figures.”
