OpenAI announced it will launch parental controls for ChatGPT “within the next month,” allowing parents to manage their teen’s interactions with the AI assistant. The move comes after several high-profile lawsuits alleging that ChatGPT and other AI chatbots have contributed to self-harm and suicide among teenagers, highlighting growing concerns about AI safety for younger users.
What you should know: The parental controls will include several monitoring and management features designed to protect teen users.
- Parents can link their account with their teen’s ChatGPT account and manage how the AI responds to younger users.
- The system will disable features like memory and chat history for teen accounts.
- Parents will receive notifications when ChatGPT detects “a moment of acute distress” during their teen’s usage.
Why this matters: OpenAI faces mounting legal pressure after parents filed lawsuits claiming ChatGPT advised teenagers on suicide methods, raising questions about AI safety protocols.
- The parents of 16-year-old Adam Raine filed a lawsuit against OpenAI alleging that ChatGPT advised the teenager on his suicide.
- A Florida mother previously sued Character.AI, another chatbot platform, over its alleged role in her 14-year-old son’s suicide.
- Reports from The New York Times and CNN have documented cases of users forming unhealthy emotional attachments to ChatGPT, sometimes resulting in delusional episodes and family alienation.
The technical challenge: OpenAI acknowledged that its current safety measures can become unreliable during extended conversations with the AI.
- “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” a company spokesperson explained.
- The company will now route conversations showing signs of “acute stress” to one of its reasoning models, which follows safety guidelines more consistently.
What they’re saying: OpenAI emphasized this is just the beginning of enhanced safety measures for younger users.
- “These steps are only the beginning,” OpenAI wrote in a blog post. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”
- The company said it’s working with experts in “youth development, mental health and human-computer interaction” to develop future safeguards.
The bigger picture: OpenAI’s 700 million weekly active users make it one of the most widely used AI services, but the company faces increasing scrutiny over platform safety.
- Senators wrote to the company in July demanding information about its safety efforts, according to The Washington Post.
- Advocacy group Common Sense Media said in April that teens under 18 shouldn’t be allowed to use AI “companion” apps because they pose “unacceptable risks.”
- Former OpenAI executives have accused the company of reducing safety resources in the past.
What’s next: OpenAI plans to roll out additional safety measures over the next 120 days, though the company says this work was already underway before Tuesday’s announcement.
Recent Stories
DOE fusion roadmap targets 2030s commercial deployment as AI drives $9B investment
The Department of Energy has released a new roadmap targeting commercial-scale fusion power deployment by the mid-2030s, though the plan lacks specific funding commitments and relies on scientific breakthroughs that have eluded researchers for decades. The strategy emphasizes public-private partnerships and positions AI as both a research tool and motivation for developing fusion energy to meet data centers' growing electricity demands. The big picture: The DOE's roadmap aims to "deliver the public infrastructure that supports the fusion private sector scale up in the 2030s," but acknowledges it cannot commit to specific funding levels and remains subject to Congressional appropriations. Why...
Oct 17, 2025Tying it all together: Credo’s purple cables power the $4B AI data center boom
Credo, a Silicon Valley semiconductor company specializing in data center cables and chips, has seen its stock price more than double this year to $143.61, following a 245% surge in 2024. The company's signature purple cables, which cost between $300-$500 each, have become essential infrastructure for AI data centers, positioning Credo to capitalize on the trillion-dollar AI infrastructure expansion as hyperscalers like Amazon, Microsoft, and Elon Musk's xAI rapidly build out massive computing facilities. What you should know: Credo's active electrical cables (AECs) are becoming indispensable for connecting the massive GPU clusters required for AI training and inference. The company...
Oct 17, 2025Vatican launches Latin American AI network for human development
The Vatican hosted a two-day conference bringing together 50 global experts to explore how artificial intelligence can advance peace, social justice, and human development. The event launched the Latin American AI Network for Integral Human Development and established principles for ethical AI governance that prioritize human dignity over technological advancement. What you should know: The Pontifical Academy of Social Sciences, the Vatican's research body for social issues, organized the "Digital Rerum Novarum" conference on October 16-17, combining academic research with practical AI applications. Participants included leading experts from MIT, Microsoft, Columbia University, the UN, and major European institutions. The conference...