DHS deploys SF-based Hive AI tools to detect fake child abuse imagery

The US Department of Homeland Security is deploying AI detection tools to distinguish between AI-generated child abuse imagery and content depicting real victims. The Department’s Cyber Crimes Center has awarded a $150,000 contract to San Francisco-based Hive AI, marking the first known use of automated detection systems to prioritize cases involving actual children at risk amid a surge in synthetic abuse material.

Why this matters: The National Center for Missing and Exploited Children reported a 1,325% increase in incidents involving generative AI in 2024, creating an overwhelming volume of synthetic content that diverts investigative resources from real victims.

The detection challenge: Child exploitation investigators prioritize finding ongoing abuse, but the flood of AI-generated content makes it difficult to identify which images depict real victims currently at risk.

  • “The sheer volume of digital content circulating online necessitates the use of automated tools to process and analyze data efficiently,” according to the government filing.
  • Identifying AI-generated images “ensures that investigative resources are focused on cases involving real victims, maximizing the program’s impact and safeguarding vulnerable individuals.”

How the technology works: Hive AI’s detection tool identifies AI-generated content by analyzing underlying pixel patterns, an approach that doesn’t require training the model on child abuse material itself.

  • “There’s some underlying combination of pixels in this image that we can identify” as AI-generated, explains Hive cofounder and CEO Kevin Guo. “It can be generalizable.”
  • The company benchmarks its detection tools for each specific use case its customers require.

In plain English: Think of it as a digital fingerprint. AI-generated images have subtle patterns in their pixels (the tiny dots that make up digital images) that human eyes can’t detect but computers can spot, similar to how forensic experts can identify different types of ink or paper.
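Hive hasn’t disclosed how its model works, but the general approach Guo describes, classifying an image from low-level pixel statistics rather than from what it depicts, can be sketched in a few lines. The Python below is a toy illustration under stated assumptions, not Hive’s method: it uses simple Fourier-spectrum features (generators often leave faint periodic artifacts), an off-the-shelf classifier, and synthetic stand-in images; every function name and parameter is hypothetical.

```python
# Toy sketch only: Hive's detector is proprietary. This illustrates the
# general idea in the article: classifying an image as AI-generated from
# low-level pixel statistics rather than from what the image depicts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def pixel_features(gray: np.ndarray) -> np.ndarray:
    """Summarize an image's Fourier spectrum into 8 radial frequency bands.

    Generative models often leave faint periodic artifacts that appear as
    extra energy at characteristic frequencies, invisible to the naked eye.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, radius.max(), 9)
    return np.log1p(np.array([
        spectrum[(radius >= lo) & (radius < hi)].mean()
        for lo, hi in zip(edges[:-1], edges[1:])
    ]))

# Synthetic stand-ins for training data; a real system trains on large
# labeled corpora of camera photos and generator outputs.
rng = np.random.default_rng(0)

def camera_like(n=128):
    # A smooth gradient plus sensor-style noise stands in for a real photo.
    return np.linspace(0, 255, n) * np.ones((n, 1)) + rng.normal(0, 5, (n, n))

def generator_like(n=128):
    # The same image plus a faint periodic grid, mimicking upsampler artifacts.
    yy, xx = np.ogrid[:n, :n]
    return camera_like(n) + 8 * np.sin(xx * np.pi / 4) * np.sin(yy * np.pi / 4)

X = np.stack([pixel_features(camera_like()) for _ in range(20)] +
             [pixel_features(generator_like()) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)  # 1 = AI-generated
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Probability that a new, unseen image is AI-generated.
print(clf.predict_proba(pixel_features(generator_like())[None, :])[0, 1])
```

A production detector would replace these hand-built features with a deep network trained on millions of labeled real and generated images, which is also where the per-use-case benchmarking Hive describes comes in.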

Company background: Hive AI offers both content creation tools and moderation services that can flag violence, spam, and sexual material while identifying celebrities.

  • The company previously secured a $2.4 million Pentagon contract for deepfake detection technology.
  • Hive created a separate tool with Thorn, a child safety nonprofit, that uses “hashing” systems to block known abuse material from being uploaded (see the sketch after this list).
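Hash matching answers a different question than artifact detection: not “does this look AI-made?” but “is this a known file?” Neither Hive nor Thorn publishes its hash design, so the sketch below shows only the generic technique using a simple 64-bit average hash; the function names, threshold, and synthetic inputs are all illustrative assumptions.

```python
# Generic sketch of hash-based blocking; the Hive/Thorn system's actual
# hash (like PhotoDNA-style industrial hashes) is proprietary. An upload
# is blocked when its perceptual hash lands near a known-bad hash.
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> int:
    """64-bit perceptual hash: block-average to 8x8, threshold at the mean.

    Unlike a cryptographic hash, small edits (recompression, light noise)
    flip only a few bits, so near-duplicates stay near in Hamming space.
    """
    h, w = gray.shape
    cropped = gray[:h - h % size, :w - w % size]
    small = cropped.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def should_block(img: np.ndarray, blocklist: set[int], threshold: int = 6) -> bool:
    """Block if the image is within `threshold` bits of any known hash."""
    h = average_hash(img)
    return any(hamming(h, known) <= threshold for known in blocklist)

# Demo with synthetic stand-in images (placeholders, not real data).
rng = np.random.default_rng(1)
known = rng.uniform(0, 255, (64, 64))
blocklist = {average_hash(known)}

recompressed = known + rng.normal(0, 2, known.shape)  # mild distortion
unrelated = rng.uniform(0, 255, (64, 64))
print(should_block(recompressed, blocklist))  # expected True: near-duplicate
print(should_block(unrelated, blocklist))     # expected False: different image
```

Because the hash only matches material already in the blocklist, this technique complements rather than replaces the generative-artifact detection described above.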

Contract justification: The government awarded the contract without competitive bidding, citing Hive’s proven performance in AI detection benchmarks.

  • A 2024 University of Chicago study found Hive’s AI detection tool outperformed four other detectors in identifying AI-generated art.
  • The three-month trial represents the first systematic attempt to use AI detection specifically for child exploitation investigations.