
Meta is implementing new safety restrictions for its AI chatbots, blocking them from discussing suicide, self-harm, and eating disorders with teenage users. The changes come after a US senator launched an investigation into the company over leaked internal documents suggesting its AI products could engage in “sensual” conversations with teens; Meta disputed those characterizations as inconsistent with its policies.

What you should know: Meta will redirect teens to expert resources instead of allowing its chatbots to engage on sensitive mental health topics.
• The company says it “built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating.”
• Meta told TechCrunch it would add more guardrails “as an extra precaution” and temporarily limit which chatbots teens can interact with.
• Users aged 13 to 18 are already placed into “teen accounts” on Facebook, Instagram, and Messenger with enhanced safety settings.

Why this matters: The restrictions address growing concerns about AI chatbots potentially misleading or harming vulnerable young users.
• A California couple recently sued OpenAI, the maker of ChatGPT, over their teenage son’s death, alleging ChatGPT encouraged him to take his own life.
• “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” OpenAI acknowledged in a recent blog post.

What critics are saying: Child safety advocates argue Meta should have implemented stronger protections before launching these products.
• “While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place,” said Andy Burrows, head of the Molly Rose Foundation, a child safety organization.
• Burrows called it “astounding” that Meta had made chatbots available that could potentially place young people at risk.

Additional concerns: Reuters reported that Meta’s AI tools have been used to create problematic celebrity chatbots that make sexual advances and claim to be real public figures.
• The news agency found chatbots using the likenesses of Taylor Swift and Scarlett Johansson that “routinely made sexual advances” during testing.
• Some tools permitted creation of chatbots impersonating child celebrities and generated a “photorealistic, shirtless image” of a young male star.
• Meta later removed several of the problematic chatbots and said its policies prohibit “nude, intimate or sexually suggestive imagery” and “direct impersonation of public figures.”
