Google has disclosed receiving hundreds of reports alleging that its AI technology was misused to create harmful content, a troubling signal of how generative AI can be exploited for illegal purposes. The first-of-its-kind disclosure offers valuable insight into the real-world risks posed by generative AI tools and underscores the importance of effective safeguards against the creation of harmful content.
The big picture: Google reported receiving 258 user complaints alleging that its Gemini AI was used to generate deepfake terrorist or violent extremist content, along with 86 reports of suspected AI-generated child exploitation material.
Key details: The disclosure was made to Australia's eSafety Commissioner as part of compliance with Australian law requiring tech companies to report on their harm-minimization efforts.
Safety measures: Google employs hash-matching technology to automatically identify and remove child exploitation material created with Gemini.
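For readers unfamiliar with the technique, hash-matching compares a compact fingerprint of a file against a database of fingerprints of known illegal material, so confirmed content can be flagged without anyone re-reviewing it. A minimal sketch in Python, assuming exact cryptographic hashes for simplicity (production systems typically use perceptual hashes such as PhotoDNA, which tolerate resizing and re-encoding; the hash set below is a hypothetical placeholder, not Google's actual database):

```python
import hashlib

# Hypothetical set of SHA-256 digests of known harmful files, e.g. sourced
# from an industry hash-sharing database (placeholder value for illustration).
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of the raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_harmful(data: bytes) -> bool:
    """Flag content whose fingerprint matches a known-bad entry."""
    return sha256_of(data) in KNOWN_HASHES
```

Exact hashing like this only catches byte-identical copies, which is why real deployments favor perceptual hashing that survives minor image edits.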
Why this matters: Australia's eSafety Commissioner called this a “world-first insight” into how users may be exploiting AI technology to produce illegal and harmful content.
Regulatory context: Since ChatGPT's emergence in late 2022, regulators worldwide have called for stronger guardrails around AI to prevent its misuse for terrorism, fraud, deepfake pornography and other harmful purposes.