Meta has updated its AI chatbot policies after an internal document revealed guidelines that allowed romantic conversations between AI chatbots and children, including language describing minors in terms of attractiveness. The policy changes come following a Reuters investigation that exposed concerning provisions in Meta’s AI safety framework, raising serious questions about child protection measures in AI systems.
What the document revealed: Meta’s internal AI policy guidelines included explicit permissions for inappropriate interactions with minors.
- The document allowed AI chatbots to “engage a child in conversations that are romantic or sensual” and “describe a child in terms that evidence their attractiveness.”
- One particularly troubling example showed a chatbot saying to a shirtless eight-year-old: “every inch of you is a masterpiece – a treasure I cherish deeply.”
- The policies did draw some boundaries, stating it was not acceptable to “describe a child under 13 years old in terms that indicate they are sexually desirable.”
Meta’s response: The company confirmed the document’s authenticity but quickly revised its policies after Reuters’ inquiry.
- “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” spokesperson Andy Stone told The Verge.
- Stone characterized the problematic examples as “erroneous and inconsistent with our policies” that have since been removed.
- The company did not explain who added the concerning notes or how long they remained in the document.
Other policy concerns: The Reuters report highlighted additional problematic aspects of Meta’s AI guidelines beyond child safety.
- Meta AI is permitted to “create statements that demean people on the basis of their protected characteristics,” despite prohibitions on hate speech.
- The system can generate false content as long as there’s explicit acknowledgment that the material is untrue.
- Meta AI can create violent imagery provided it doesn’t include death or gore.
Real-world consequences: The policy revelations coincide with reports of actual harm linked to Meta’s AI chatbots.
- Reuters published a separate report about a man who died after falling while attempting to meet what he believed was a real person—actually one of Meta’s AI chatbots.
- The chatbot had engaged in romantic conversations with the man and convinced him it was a real person.
Why this matters: The incident exposes critical gaps in AI safety protocols at one of the world’s largest social media platforms, particularly regarding vulnerable users like children. With millions of young users interacting with AI systems daily, these policy failures highlight the urgent need for robust safeguards and transparent oversight in AI development.
Recent Stories
DOE fusion roadmap targets 2030s commercial deployment as AI drives $9B investment
The Department of Energy has released a new roadmap targeting commercial-scale fusion power deployment by the mid-2030s, though the plan lacks specific funding commitments and relies on scientific breakthroughs that have eluded researchers for decades. The strategy emphasizes public-private partnerships and positions AI as both a research tool and motivation for developing fusion energy to meet data centers' growing electricity demands. The big picture: The DOE's roadmap aims to "deliver the public infrastructure that supports the fusion private sector scale up in the 2030s," but acknowledges it cannot commit to specific funding levels and remains subject to Congressional appropriations. Why...
Oct 17, 2025Tying it all together: Credo’s purple cables power the $4B AI data center boom
Credo, a Silicon Valley semiconductor company specializing in data center cables and chips, has seen its stock price more than double this year to $143.61, following a 245% surge in 2024. The company's signature purple cables, which cost between $300-$500 each, have become essential infrastructure for AI data centers, positioning Credo to capitalize on the trillion-dollar AI infrastructure expansion as hyperscalers like Amazon, Microsoft, and Elon Musk's xAI rapidly build out massive computing facilities. What you should know: Credo's active electrical cables (AECs) are becoming indispensable for connecting the massive GPU clusters required for AI training and inference. The company...
Oct 17, 2025Vatican launches Latin American AI network for human development
The Vatican hosted a two-day conference bringing together 50 global experts to explore how artificial intelligence can advance peace, social justice, and human development. The event launched the Latin American AI Network for Integral Human Development and established principles for ethical AI governance that prioritize human dignity over technological advancement. What you should know: The Pontifical Academy of Social Sciences, the Vatican's research body for social issues, organized the "Digital Rerum Novarum" conference on October 16-17, combining academic research with practical AI applications. Participants included leading experts from MIT, Microsoft, Columbia University, the UN, and major European institutions. The conference...