A new AI’s controversial training method allows it to detect child abuse images

Online platform safety advances with groundbreaking AI technology aimed at identifying child exploitation content and preventing it from being uploaded to the internet.

Revolutionary development: Thorn and AI company Hive have created a first-of-its-kind artificial intelligence model designed to detect previously unknown child sexual abuse material (CSAM) at the point of upload.

  • The model expands Thorn’s existing Safer detection tool by adding a new “Predict” feature that leverages machine learning technology
  • Training data includes real CSAM content from the National Center for Missing and Exploited Children’s CyberTipline
  • The system generates risk scores that help human content moderators make faster decisions about flagged content (see the sketch after this list)
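
To make the risk-score idea concrete, here is a minimal, hypothetical sketch of how a platform might route a classifier's score into moderation queues. The function names, thresholds, and 0-to-1 score scale are illustrative assumptions, not Thorn's or Hive's actual API.

```python
# Hypothetical sketch only: names, thresholds, and the 0-1 score scale
# are assumptions for illustration, not Thorn's or Hive's actual API.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.5  # assumed: scores at or above this go to human review
HOLD_THRESHOLD = 0.9    # assumed: scores at or above this hold the upload

@dataclass
class ModerationDecision:
    action: str        # "allow", "human_review", or "hold"
    risk_score: float  # classifier output, assumed to lie in [0, 1]

def route_upload(risk_score: float) -> ModerationDecision:
    """Map a model risk score to a moderation action.

    The score never removes content on its own; it prioritizes the
    human moderator queue, matching the human-in-the-loop design
    described above.
    """
    if risk_score >= HOLD_THRESHOLD:
        return ModerationDecision("hold", risk_score)
    if risk_score >= REVIEW_THRESHOLD:
        return ModerationDecision("human_review", risk_score)
    return ModerationDecision("allow", risk_score)

# A borderline score lands in the human-review queue rather than being
# silently allowed or automatically removed.
print(route_upload(0.72))  # ModerationDecision(action='human_review', risk_score=0.72)
```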

Technical implementation: The model uses machine-learning classification to identify potential CSAM across various online platforms; a sketch of how such detection stages might fit together follows the list below.

  • The technology can be integrated into social media, e-commerce platforms, and dating applications
  • Since 2019, the Safer tool has identified more than 6 million potential CSAM files
  • The system is designed to improve its accuracy through continued use and exposure to more content across the internet
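
Tools in this space commonly pair hash matching against databases of verified files, which catches re-uploads of known material, with a classifier that scores content no list has seen, which is what the new "Predict" capability adds for previously unknown CSAM. The sketch below illustrates that two-stage pattern; the function, the hash set, the classifier stub, and the threshold are assumptions for illustration, not the Safer implementation.

```python
# Illustrative two-stage pipeline: hash matching for known files, a
# classifier score for previously unseen ones. All names, the hash set,
# and the classifier stub are hypothetical, not the Safer implementation.
import hashlib
from typing import Callable, Set

def check_upload(
    data: bytes,
    known_hashes: Set[str],              # assumed: digests of verified files
    classify: Callable[[bytes], float],  # assumed: returns a risk score in [0, 1]
) -> str:
    """Return 'known_match', 'predicted_risk', or 'clear' for an upload."""
    # Stage 1: a hash lookup catches exact re-uploads of known material.
    digest = hashlib.sha256(data).hexdigest()
    if digest in known_hashes:
        return "known_match"
    # Stage 2: the classifier scores content absent from every hash list,
    # the gap a predictive model is meant to close.
    if classify(data) >= 0.5:  # assumed threshold
        return "predicted_risk"
    return "clear"

# Usage with stand-in inputs: an empty hash set forces stage 2.
print(check_upload(b"example-bytes", set(), classify=lambda _: 0.12))  # clear
```

Production systems generally rely on perceptual hashes that survive resizing and re-encoding rather than exact digests like SHA-256, which is used here only to keep the sketch self-contained.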

Future enhancements: Thorn plans to expand the system’s capabilities to provide more comprehensive protection against child exploitation.

  • Development is underway for an AI text classifier to identify conversations indicating potential child exploitation
  • The system is not currently designed to detect AI-generated CSAM, though future updates may address this emerging threat
  • The technology is part of a broader strategy combining detection with preventative measures

Looking ahead: As online platforms face increasing pressure to protect vulnerable users, this AI-powered approach represents a significant step forward in content moderation technology. Questions remain, however, about its effectiveness against evolving threats such as AI-generated content, and about the right balance between automation and human review in moderation decisions.

Source: AI trained on real child sex abuse images to detect new CSAM
