Google reports 344 complaints of AI-generated harmful content via Gemini

Only 344?

Google has disclosed receiving hundreds of reports alleging that its AI technology was misused to create harmful content, a troubling sign of how generative AI can be exploited for illegal purposes. This first-of-its-kind disclosure offers real-world insight into the risks posed by generative AI tools and underscores the importance of building effective safeguards into them.

The big picture: Google reported receiving 258 complaints that its Gemini AI was used to generate deepfake terrorism or violent extremist content, along with 86 reports of alleged AI-generated child exploitation material.

Key details: The disclosure was made to Australia's eSafety Commission as part of compliance with Australian law requiring tech companies to report on harm minimization efforts.

  • The reporting period covered April 2023 to February 2024, capturing almost a year of user complaints about harmful content generation.
  • Google did not specify how many of these complaints were verified, according to the regulator.

Safety measures: Google employs hash-matching technology to automatically identify and remove child exploitation material created with Gemini (a simplified sketch of how hash matching works follows this list).

  • However, the company does not use the same technological approach to filter out terrorist or violent extremist content generated by its AI, the regulator noted.
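
Google has not published the details of its system, but hash matching in general works by comparing a fingerprint of a file against a database of fingerprints of known illegal material. The minimal Python sketch below shows the exact-match variant using cryptographic hashes; production systems typically use perceptual hashes (such as PhotoDNA) that tolerate resizing and re-encoding. The hash list and function names here are hypothetical, purely for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical database of SHA-256 digests of known harmful files. In real
# deployments such lists come from vetted clearinghouses, not from code.
KNOWN_BAD_HASHES = {
    # Placeholder entry (this is the SHA-256 of empty input, not real data).
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_harmful(path: Path) -> bool:
    """Flag a file whose digest matches a known-bad entry."""
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```

The exact-match approach only catches byte-identical copies of previously flagged material, which is one reason real systems favor perceptual hashing: a slightly cropped or re-compressed image still lands near the same fingerprint.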

Why this matters: The Australian eSafety Commission called this a “world-first insight” into how users may be exploiting AI technology to produce illegal and harmful content.

  • eSafety Commissioner Julie Inman Grant emphasized the critical importance of building and testing safeguards in AI products to prevent generation of harmful materials.

Regulatory context: Since ChatGPT's emergence in late 2022, regulators worldwide have called for stronger guardrails around AI to prevent its misuse for terrorism, fraud, deepfake pornography and other harmful purposes.

  • The Australian regulator has previously fined platforms including Telegram and Twitter (now X) for what it deemed inadequate reporting on harm reduction measures.
  • Both companies are challenging their fines, with X having already lost one appeal regarding its A$610,500 ($382,000) penalty.
