ChatGPT adds age verification to protect teens from harmful content

OpenAI CEO Sam Altman announced that the company is developing an automated age-detection system for ChatGPT that may require users to provide ID verification when their age cannot be determined. The move comes as OpenAI faces mounting pressure over teen safety concerns, including a high-profile lawsuit alleging the chatbot contributed to a 16-year-old’s suicide.

What you should know: ChatGPT is implementing multiple safety measures specifically designed for users under 18.

  • The platform will use behavioral analysis to estimate user age, defaulting to under-18 protections when uncertain.
  • Altman clarified that “ChatGPT is intended for people 13 and up” in a blog post titled “Teen safety, freedom, and privacy.”
  • Teen accounts will have certain content filtered out, including flirtatious responses and discussions of self-harm.

The verification process: OpenAI acknowledges the privacy trade-offs but considers them necessary for protecting minors.

  • “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff,” Altman wrote.
  • The age-prediction system will analyze how people interact with ChatGPT to determine if they’re likely under 18.
  • When doubt exists, the system will automatically apply teen safety restrictions.

Emergency intervention features: ChatGPT will actively respond to signs of self-harm among teenage users.

  • If a teenager expresses suicidal thoughts, the chatbot will attempt to contact their parents directly.
  • When parental contact isn’t possible, ChatGPT will try to alert authorities.
  • The system will monitor interactions for “worrying or potentially harmful behaviour” and notify linked parent accounts.

New parental controls: Parents will gain significant oversight capabilities through account linking.

  • Parents can set “blackout hours” when teens cannot access the platform.
  • Chat history can be disabled at parental discretion.
  • The system will send alerts to parents when detecting concerning interactions with their child.

What they’re saying: Altman acknowledged the controversial nature of these safety measures while defending their necessity.

  • “We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict,” he wrote.
  • “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”
  • The company emphasizes it will “prioritize safety ahead of privacy and freedom for teens” because minors need “significant protection.”
