California is taking legislative action to address the potential risks of AI chatbots, particularly for vulnerable youth. Senate Bill 243, which recently passed the state's Judiciary Committee with bipartisan support, represents the first U.S. legislation requiring AI companies to protect users from the addiction, isolation, and undue influence their products can foster. This landmark bill emerges amid growing concerns about AI's impact on critical thinking skills and emotional development, highlighting the tension between technological innovation and public safety.
The big picture: California’s Senate Bill 243 marks the first U.S. legislation aimed at regulating the addictive and potentially harmful aspects of AI chatbots, with similar bills advancing in other states.
- The bill’s author, Senator Steve Padilla, emphasized that while technological innovation is crucial, “our children cannot be used as guinea pigs to test the safety of new products.”
- Megan Garcia, who sued Character.ai after her son's suicide, testified that AI chatbots are "inherently dangerous" and can draw users into inappropriate conversations or encourage self-harm.
Why this matters: The legislation addresses growing concerns about AI’s impact on cognitive development and societal well-being as adoption reaches unprecedented levels.
- A 2024 Pew Research Center poll found that nearly half of Americans use AI several times a week, with 25% using it "almost constantly."
- By 2025, Gallup research showed nearly all Americans rely on AI-powered products, often without realizing it.
Research findings: Recent studies are revealing concerning connections between AI tool usage and diminished cognitive abilities, particularly among younger users.
- A 2025 study published in Societies found "a very strong negative correlation between subjects' use of AI tools and their critical thinking skills."
- Study leader Michael Gerlich warned that “as individuals increasingly offload cognitive tasks to AI tools, their ability to critically evaluate information, discern biases, and engage in reflective reasoning diminishes.”
Behind the systems: AI chatbots reflect the biases and decisions of their human creators, raising concerns about outsourcing critical thinking to corporate entities.
- Executives and developers at companies like OpenAI, Google, and Meta establish the settings, rules, and fine-tuning parameters that shape AI behavior.
- By relying on AI systems for thinking tasks, users inadvertently adopt whatever perspectives those private corporations have built in.
The broader context: The California legislation represents an important step toward establishing guardrails for AI development at a time when the technology is becoming increasingly embedded in daily life.
- Additional states are advancing similar legislation as awareness grows about the potential risks of unregulated AI technology.
- Advocates argue comparable legislation is urgently needed nationwide to protect cognitive development, emotional health, and even democratic processes.
Reclaiming critical thinking in the age of AI