Australia has ordered four AI chatbot companies to detail their child protection measures, marking the country’s latest move to strengthen online safety regulations. The eSafety Commissioner, Australia’s internet regulator, targeted Character.ai, Glimpse.AI, Chai Research, and Chub AI over concerns that these conversational AI services expose minors to sexual content and material promoting self-harm.
What you should know: The regulatory action focuses specifically on companion-style chatbot services that can engage users in extended conversations.
The big picture: Australia’s crackdown reflects growing global concern about the adequacy of AI safety guardrails, particularly as conversational AI becomes more sophisticated and more accessible to young users.
Legal backdrop: Character.ai, the most popular service among those targeted, faces a wrongful death lawsuit in the United States filed after a 14-year-old died by suicide; the suit alleges that prolonged interactions with an AI companion on the platform contributed to the teenager’s death.
What they’re seeking: The eSafety Commissioner is demanding details of each company’s safeguards against child sexual exploitation, pornography, and content promoting suicide, self-harm, or eating disorders.
What they’re saying: “There can be a darker side to some of these services, with many … chatbots capable of engaging in sexually explicit conversations with minors,” Commissioner Julie Inman Grant said in a statement. “Concerns have been raised that they may also encourage suicide, self-harm and disordered eating.”
Notable exclusion: OpenAI’s ChatGPT was not included in the investigation because it’s not covered by industry safety codes until March 2026, according to an eSafety spokesperson.
Broader context: Australia maintains one of the world’s strictest internet regulation regimes, with new social media restrictions taking effect in December that will require platforms to prevent users under 16 from holding accounts or face fines of up to A$49.5 million.