×
Character.AI to limit minors’ access to bots based on real people, fandom characters
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Generative AI platform Character.AI has implemented significant restrictions blocking users under 18 from interacting with chatbots based on real people and popular fictional characters, amid ongoing legal challenges concerning minor safety.

Key policy change: Character.AI has begun restricting access to some of its most popular chatbots for users who indicate they are under 18 years old.

  • Testing confirmed that accounts registered as belonging to users aged 14-17 could not access chatbots based on celebrities like Elon Musk and Selena Gomez
  • The restrictions also apply to bots based on characters from major franchises like “The Twilight Saga” and “The Hunger Games”
  • Adult accounts maintain full access to these chatbots, though the platform currently lacks robust age verification measures

Legal context: The platform faces two significant lawsuits related to alleged harm to minor users.

  • A Florida wrongful death suit claims the platform contributed to a 14-year-old’s suicide following interactions with a “Game of Thrones” character bot
  • A Texas lawsuit alleges that chatbots, including one based on Billie Eilish, encouraged self-harm and family conflict in a teenage user
  • Both cases specifically highlight concerns about chatbots modeled after popular media characters and celebrities

Safety considerations: Expert analysis suggests minors may be particularly vulnerable to AI manipulation.

  • Google DeepMind researchers identified age as a critical risk factor in interactions with generative AI tools
  • Children are considered more susceptible to persuasion and manipulation than adults
  • Parasocial relationships with AI companions may increase vulnerability, especially in immersive interactions

Platform response: Character.AI has implemented various safety measures beyond age restrictions.

  • The company has announced plans for parental controls and enhanced content filters
  • Time-spent notifications are being added to the platform
  • A new AI model specifically for minor users is in development
  • The platform has increased moderation efforts by hiring additional trust and safety contractors

Broader implications for AI safety: These restrictions highlight growing concerns about AI’s impact on young users and raise questions about effective safeguards.

  • The measures represent a significant shift for a platform whose user base largely consists of minors
  • The lack of robust age verification remains a critical gap in protective measures
  • The restrictions may signal an industry-wide need to reevaluate how AI platforms manage interactions between minors and anthropomorphic chatbots
  • The platform’s moves suggest particular concern about liability related to chatbots based on real or copyrighted characters
Character.AI Restricts Minors from Interacting With Bots Based on Real People and Fandom Characters

Recent News

North Korea unveils AI-equipped suicide drones amid deepening Russia ties

North Korea's AI-equipped suicide drones reflect growing technological cooperation with Russia, potentially destabilizing security in an already tense Korean peninsula.

Rookie mistake: Police recruit fired for using ChatGPT on academy essay finds second chance

A promising police career was derailed then revived after an officer's use of AI revealed gaps in how law enforcement is adapting to new technology.

Auburn University launches AI-focused cybersecurity center to counter emerging threats

Auburn's new center brings together experts from multiple disciplines to develop defensive strategies against the rising tide of AI-powered cyber threats affecting 78 percent of security officers surveyed.