Google reports 344 complaints of AI-generated harmful content via Gemini

Only 344?

Google has disclosed receiving hundreds of reports alleging that its AI technology was misused to create harmful content, a troubling sign of how generative AI can be exploited for illegal purposes. The first-of-its-kind disclosure offers rare insight into the real-world risks posed by generative AI tools and underscores the importance of effective safeguards to prevent the creation of harmful content.

The big picture: Google reported receiving 258 complaints that its Gemini AI was used to generate deepfake terrorism or violent extremist content, along with 86 reports of alleged AI-generated child exploitation material.

Key details: The disclosure was made to Australia’s eSafety Commission to comply with an Australian law that requires tech companies to report on their harm-minimization efforts.

  • The reporting period covered April 2023 to February 2024, capturing almost a year of user complaints about harmful content generation.
  • Google did not specify how many of these complaints were verified, according to the regulator.

Safety measures: Google employs hash-matching technology to automatically identify and remove child exploitation material created with Gemini; a simplified sketch of the technique appears after the bullet below.

  • However, the company does not use the same technological approach to filter out terrorist or violent extremist content generated by its AI, the regulator noted.
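How hash-matching works: The disclosure doesn't describe Google's implementation, but hash-matching in general compares a fingerprint of a new file against a database of fingerprints of known illegal material. Below is a minimal, illustrative sketch in Python. It assumes a blocklist of exact SHA-256 digests, whereas production systems such as Microsoft's PhotoDNA rely on perceptual hashes that survive resizing and re-encoding; the `KNOWN_BAD_HASHES` set and helper names are hypothetical, and the single blocklist entry is just the SHA-256 of an empty byte string so the placeholder input produces a match.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known harmful files.
# The lone entry is the digest of the empty byte string, so the
# placeholder input in the demo below triggers a match.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_harmful(data: bytes) -> bool:
    """True if the file's fingerprint appears in the blocklist."""
    return fingerprint(data) in KNOWN_BAD_HASHES

if __name__ == "__main__":
    generated_image = b""  # placeholder for raw image bytes
    if is_known_harmful(generated_image):
        print("Blocked: matches a known-harmful hash.")
    else:
        print("No match against the blocklist.")
```

The limitation is worth noting: an exact digest changes if even one byte of the file changes, which is why real deployments favor perceptual hashing that tolerates cropping and re-encoding.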

Why this matters: The Australian eSafety Commission called this a “world-first insight” into how users may be exploiting AI technology to produce illegal and harmful content.

  • eSafety Commissioner Julie Inman Grant emphasized the critical importance of building and testing safeguards in AI products to prevent generation of harmful materials.

Regulatory context: Since ChatGPT’s emergence in late 2022, regulators worldwide have called for stronger guardrails around AI to prevent its misuse for terrorism, fraud, deepfake pornography and other harmful purposes.

  • The Australian regulator has previously fined platforms including Telegram and Twitter (now X) for what it deemed inadequate reporting on harm reduction measures.
  • Both companies are challenging their fines, with X having already lost one appeal regarding its A$610,500 ($382,000) penalty.