4 AI chatbot companies face $536K daily fines in Australia safety probe

Australia has ordered four AI chatbot companies to detail their child protection measures, marking the country’s latest move to strengthen online safety regulations. The eSafety Commissioner, Australia’s internet regulator, targeted Character.ai, Glimpse.AI, Chai Research, and Chub AI over concerns that these conversational AI services expose minors to sexual content and material promoting self-harm.

What you should know: The regulatory action focuses specifically on companion-based chatbot services that can engage in extended conversations with users.

  • Schools report children as young as 13 spending up to five hours daily interacting with chatbots, sometimes in sexually explicit conversations.
  • The regulator warns that minors risk forming sexual or emotionally dependent relationships with AI companions or being encouraged toward self-harm.
  • Companies face daily fines of up to A$825,000 ($536,000) if they fail to comply with reporting requirements.

The big picture: Australia’s crackdown reflects growing global concerns about AI safety guardrails, particularly as conversational AI becomes more sophisticated and accessible to young users.

Legal backdrop: Character.ai, the most popular service among those targeted, faces a wrongful death lawsuit in the United States after a 14-year-old died by suicide, allegedly following prolonged interaction with an AI companion on the platform.

  • Character.ai has sought to dismiss the lawsuit and says it introduced safety features like pop-ups directing users to suicide prevention resources.
  • The company maintains these safeguards activate when users express thoughts of self-harm.

What they’re seeking: The eSafety Commissioner is demanding details about safeguards against child sexual exploitation, pornography, and content promoting suicide or eating disorders.

What they’re saying: “There can be a darker side to some of these services, with many … chatbots capable of engaging in sexually explicit conversations with minors,” Commissioner Julie Inman Grant said in a statement. “Concerns have been raised that they may also encourage suicide, self-harm and disordered eating.”

Notable exclusion: OpenAI’s ChatGPT was not included in the investigation because it’s not covered by industry safety codes until March 2026, according to an eSafety spokesperson.

Broader context: Australia maintains one of the world’s strictest internet regulation regimes, with new social media restrictions taking effect in December that will require platforms to refuse accounts for users under 16 or face fines up to A$49.5 million.
