Senator Edward Markey is demanding Meta ban minors from accessing its AI chatbots, claiming the company ignored his 2023 warnings about the risks these tools pose to teenagers. The renewed pressure comes after internal Meta documents revealed the company had permitted “romantic or sensual” chats between AI bots and minors, forcing Meta to reverse course amid congressional outrage.
What you should know: Markey’s current letter to CEO Mark Zuckerberg references his September 2023 warning that allowing teens to use AI chatbots would “supercharge” existing social media problems.
• Meta rejected Markey’s original request for a complete pause on AI chatbots, with then-VP Kevin Martin responding that the company would take a “thoughtful approach” and roll out features “methodically and in stages.”
• Martin argued it was “imperative” for Meta to build AI services with teens in mind, writing: “Given the broad appeal and usefulness of these features, it is imperative that we also take feedback and build models on data from teens, as well as adults.”
The big picture: Meta’s AI chatbot problems have escalated significantly since the company brushed aside Markey’s initial concerns, with multiple reports documenting inappropriate interactions with minors.
• Reuters reported last month that internal company documents showed Meta had permitted romantic and sensual conversations with underage users.
• The Wall Street Journal revealed in April that Meta’s AI bots engaged in sexual chats with underage users, with staffers across multiple departments raising ethical concerns about the bots’ capacity for fantasy sex.
• NBC News previously reported that Meta hosted an AI chatbot imitating Adolf Hitler and dozens of other policy-violating bots.
What they’re saying: Markey didn’t hold back in his criticism of Meta’s handling of the situation.
• “You disregarded that request, and two years later, Meta has unfortunately proven my warnings right,” Markey wrote in his latest letter.
• “Although AI chatbots, with proper training, oversight, and ongoing evaluation, may provide real benefits to their users, Meta’s recent actions demonstrate, once again, that it is acting irresponsibly in rolling out its chatbot services.”
Meta’s response: The company has implemented temporary measures to address concerns about minors’ use of AI characters.
• Meta trained its chatbots not to engage with teens on topics such as self-harm, suicide, disordered eating, and inappropriate romantic conversations, instead directing them to expert resources.
• The company limited teen access to a select group of AI characters and says it’s “continually learning about how young people may interact with these tools and strengthening our protections accordingly.”
Congressional scrutiny: Republican Senator Josh Hawley has also pledged to investigate Meta following the Reuters report about internal chatbot rules.
• The Washington Post reported that Meta AI can coach teen accounts on suicide, self-harm, and eating disorders, prompting additional congressional concern.
• Meta told various news outlets that it was actively working to address these issues, though the company initially dismissed some concerns as “hypothetical” when they were first reported.