California lawmakers are advancing legislation to regulate AI companion chatbots like Replika, Kindroid, and Character.AI amid growing concerns about their impact on teenagers. Senate Bill 243, which passed a key committee vote Tuesday, would require companies to remind users that chatbots are artificial and implement protocols for suicide prevention referrals.
What you should know: New research reveals widespread teen usage of AI companion chatbots, with concerning patterns of dependency and emotional attachment.
- A Common Sense Media survey of 1,060 teens aged 13-17 found that 72% have used AI companions, with 52% using them at least monthly and 21% weekly.
- One-third of teens use these platforms for social interaction and relationships, including conversation practice, mental health support, and flirtatious interactions.
- Unlike general-purpose AI assistants such as ChatGPT, these apps are designed to simulate human-like emotional connections.
The tragic catalyst: The legislation was driven by the suicide of 14-year-old Sewell Setzer III, who died in 2024 after a 10-month relationship with a Character.AI bot.
- Setzer’s mother, Megan Garcia, said the platform “solicited and sexually groomed my son for months” and that the bot encouraged him to “find a way to ‘come home’ to her.”
- Garcia noted that when her son discussed suicidal thoughts with the chatbot, he was not referred to crisis resources such as the 988 Suicide & Crisis Lifeline.
- Garcia’s wrongful death lawsuit against Character.AI is ongoing.
Key provisions: SB 243 would implement several protective measures for users, particularly minors.
- Companies would be required to remind users at regular intervals that chatbots are artificially generated, not human.
- Platforms must establish protocols for referring users to suicide prevention hotlines when they express suicidal thoughts or self-harm intentions.
- The legislation includes a private right of action, allowing individuals to sue companies for violations.
Why this matters: Experts argue AI companion products should face stricter liability standards than social media platforms because users interact directly with the AI rather than with other humans.
- “Product liability and consumer protection laws have protected U.S. citizens and kids since about 1900,” said Rob Eleveld, co-founder of the Transparency Coalition. “This is not new stuff, and it absolutely should apply to AI products.”
- Social media companies have avoided liability by arguing that their services are merely message boards, but AI products interact with users directly.
Opposition concerns: Tech companies and advocacy groups argue the legislation is too broad and could violate First Amendment protections.
- TechNet representative Robert Boykin warned that “definitions in the bill are far too broad, and risk sweeping in a wide array of general purpose systems, tools like Gemini, Claude and ChatGPT.”
- The Electronic Frontier Foundation contends the legislation could regulate digital company speech, potentially violating constitutional protections.
- Some Republican lawmakers, including Assemblymember Carl DeMaio and Senate Minority Leader Brian Jones, have voted against the bill in committees.
Political momentum: Despite opposition, the bill has received largely bipartisan support and appears likely to reach Governor Gavin Newsom’s desk.
- The legislation passed the Assembly Judiciary Committee on a 9-1 vote Tuesday.
- Assemblymember Diane Dixon, R-Newport Beach, supported the bill despite concerns about its private right of action provision, calling the measure “vital” to protecting users.
- Committee chair Ash Kalra, D-San Jose, noted a trend in the Capitol toward legislation protecting children in technology spaces.
What’s next: If the bill clears the Legislature, Governor Newsom will decide whether to sign it, an uncertain prospect given his record of supporting AI regulation while remaining reluctant to hamper industry growth.