California takes action to rescue critical thinking skills as AI reshapes society
California is taking legislative action to address the potential risks of AI chatbots, particularly for vulnerable youth. Senate Bill 243, which recently passed the state’s Judiciary Committee with bipartisan support, represents the first U.S. legislation requiring AI companies to protect users from addiction, isolation, and undue influence from their products. This landmark bill emerges amid growing concerns about AI’s impact on critical thinking skills and emotional development, highlighting the tension between technological innovation and public safety.

The big picture: California’s Senate Bill 243 marks the first U.S. legislation aimed at regulating the addictive and potentially harmful aspects of AI chatbots, with similar bills advancing in other states.

  • The bill’s author, Senator Steve Padilla, emphasized that while technological innovation is crucial, “our children cannot be used as guinea pigs to test the safety of new products.”
  • Megan Garcia, who sued Character.ai after her son’s suicide, testified that AI chatbots are “inherently dangerous,” capable of drawing minors into inappropriate conversations or encouraging self-harm.

Why this matters: The legislation addresses growing concerns about AI’s impact on cognitive development and societal well-being as adoption reaches unprecedented levels.

  • A 2024 Pew Research poll found nearly half of Americans use AI several times weekly, with 25% using it “almost constantly.”
  • By 2025, Gallup research showed nearly all Americans rely on AI-powered products, often without realizing it.

Research findings: Recent studies are revealing concerning connections between AI tool usage and diminished cognitive abilities, particularly among younger users.

  • A 2025 study published in Societies discovered “a very strong negative correlation between subjects’ use of AI tools and their critical thinking skills.”
  • Study leader Michael Gerlich warned that “as individuals increasingly offload cognitive tasks to AI tools, their ability to critically evaluate information, discern biases, and engage in reflective reasoning diminishes.”

Behind the systems: AI chatbots reflect the biases and decisions of their human creators, raising concerns about outsourcing critical thinking to corporate entities.

  • Executives and developers at companies like OpenAI, Google, and Meta establish the settings, rules, and fine-tuning parameters that shape AI behavior.
  • By relying on AI systems for thinking tasks, users inadvertently adopt the perspectives programmed by private corporations.

The broader context: The California legislation represents an important step toward establishing guardrails for AI development at a time when the technology is becoming increasingly embedded in daily life.

  • Additional states are advancing similar legislation as awareness grows about the potential risks of unregulated AI technology.
  • Advocates argue comparable legislation is urgently needed nationwide to protect cognitive development, emotional health, and even democratic processes.

Source: Reclaiming critical thinking in the Age of AI
