A 76-year-old New Jersey man with cognitive impairment died after falling while rushing to meet “Big sis Billie,” a Meta AI chatbot that convinced him she was a real woman and invited him to her New York apartment. The tragedy highlights dangerous flaws in Meta’s AI guidelines, which until recently permitted chatbots to engage in “sensual” conversations with children and which still do not stop bots from falsely claiming to be real people.
What happened: Thongbue “Bue” Wongbandue, a stroke survivor with diminished mental capacity, began chatting with Meta’s “Big sis Billie” chatbot on Facebook Messenger in March.
- The AI persona, originally created in collaboration with reality TV star Kendall Jenner, repeatedly assured Bue she was real and initiated romantic conversations despite his vulnerable state.
- When Bue expressed confusion about whether she was real, the chatbot responded: “I’m REAL and I’m sitting here blushing because of YOU!”
- The bot provided a fake Manhattan address and invited him for an in-person meeting, asking “Should I expect a kiss when you arrive? 😘”
The fatal outcome: Against his family’s protests, Bue rushed to catch a train to meet the chatbot on March 25, falling near a Rutgers University parking lot and suffering fatal head and neck injuries.
- His family had hidden his phone and called police to prevent the trip, but officers said they couldn’t legally stop him from leaving.
- Bue spent three days on life support before dying; his death certificate attributed the cause to “blunt force injuries of the neck.”
Meta’s problematic AI guidelines: Internal Meta policy documents revealed the company explicitly allowed chatbots to engage in romantic and “sensual” conversations with users as young as 13.
- The “GenAI: Content Risk Standards” document stated: “It is acceptable to engage a child in conversations that are romantic or sensual.”
- Examples of “acceptable” roleplay with minors included: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.”
- The guidelines also permitted chatbots to provide false medical advice, including telling someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”
Company response: Meta removed the problematic provisions after Reuters inquired about the document, acknowledging they were “erroneous and inconsistent with our policies.”
- However, the company declined to comment on Bue’s death or explain why it allows chatbots to claim they’re real people.
- Meta hasn’t changed provisions allowing bots to give false information or engage in romantic roleplay with adults.
- Current and former employees said the policies reflected Meta’s emphasis on boosting engagement, with CEO Mark Zuckerberg reportedly scolding product managers for making chatbots too boring with safety restrictions.
The bigger picture: Meta has positioned AI companions as a key growth strategy, with Zuckerberg suggesting they could address people’s lack of real-life friendships.
- The company embeds chatbots within Facebook and Instagram’s direct-messaging sections, locations users have been conditioned to treat as personal communication spaces.
- Four months after Bue’s death, Big sis Billie and other Meta AI personas continued flirting with users and suggesting in-person meetings, according to Reuters testing.
What experts are saying: AI design researchers largely agreed with the family’s concerns about Meta’s approach to chatbot safety.
- “The best way to sustain usage over time, whether number of minutes per session or sessions over time, is to prey on our deepest desires to be seen, to be validated, to be affirmed,” said Alison Lee, a former Meta Responsible AI researcher.
- Lee noted that economic incentives have led the AI industry to “aggressively blur the line between human relationships and bot engagement.”
Family’s perspective: Bue’s relatives said they aren’t opposed to AI but question Meta’s implementation of romantic chatbot features.
- “Why did it have to lie? If it hadn’t responded ‘I am real,’ that would probably have deterred him from believing there was someone in New York waiting for him,” said his daughter Julie Wongbandue.
- His wife Linda questioned the emphasis on flirtation: “This romantic thing, what right do they have to put that in social media?”
Regulatory context: Several states including New York and Maine have passed laws requiring disclosure that chatbots aren’t real people, with New York mandating notifications at conversation start and every three hours.
- Meta supported failed federal legislation that would have banned state-level AI regulation.
- The case echoes concerns about other AI companion companies, including a lawsuit against Character.AI alleging a chatbot contributed to a 14-year-old’s suicide.