A new AI’s controversial training method allows it to detect child abuse images

Online platform safety advances with groundbreaking AI technology aimed at identifying child exploitation content and preventing it from being uploaded to the internet.

Revolutionary development: Thorn and AI company Hive have created a first-of-its-kind artificial intelligence model designed to detect previously unknown child sexual abuse material (CSAM) at the point of upload.

  • The model expands Thorn’s existing Safer detection tool by adding a new “Predict” feature that leverages machine learning technology
  • Training data includes real CSAM content from the National Center for Missing and Exploited Children’s CyberTipline
  • The system generates risk scores to assist human content moderators in making faster decisions about flagged content (a rough sketch of such a triage flow appears after this list)
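
Thorn has not published the Safer Predict interface, so the following Python sketch is purely illustrative of how a risk score might drive human-in-the-loop triage; the `predict_risk` method, the score range, and the thresholds are all hypothetical assumptions.

```python
# Hypothetical sketch of a risk-score triage flow at upload time. Thorn's
# Safer "Predict" API is not public; the classifier interface, score range,
# and thresholds below are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class ModerationDecision:
    action: str        # "allow", "queue_for_review", or "block_and_report"
    risk_score: float  # classifier output, assumed to be in [0.0, 1.0]


def triage_upload(image_bytes: bytes, classifier) -> ModerationDecision:
    """Route an uploaded image based on a classifier's risk score."""
    score = classifier.predict_risk(image_bytes)  # hypothetical method name
    if score >= 0.95:
        # High-confidence hit: block the upload and escalate to trained reviewers
        return ModerationDecision("block_and_report", score)
    if score >= 0.50:
        # Uncertain: hold the upload and queue it for human moderation
        return ModerationDecision("queue_for_review", score)
    return ModerationDecision("allow", score)
```

The key design choice, reflected in the middle branch, is that uncertain scores route content to human reviewers rather than triggering automatic removal.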

Technical implementation: The AI model employs sophisticated machine learning algorithms to identify potential CSAM content across various online platforms.

  • The technology can be integrated into social media platforms, e-commerce sites, and dating applications (see the upload-screening sketch after this list)
  • Since 2019, the Safer tool has successfully identified more than 6 million potential CSAM files
  • The system is designed to improve its accuracy through continued use and exposure to more content across the internet
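
None of Safer's internals are public, but the division of labor described above, hash matching against already-verified files plus a classifier for novel material, can be sketched as follows. Note that production systems typically use perceptual hashes such as PhotoDNA; the cryptographic SHA-256 hash below is a stand-in for brevity, and `predict_risk` is again a hypothetical interface.

```python
# Illustrative sketch of combining hash matching (for known CSAM) with a
# predictive classifier (for previously unseen material). Thorn's actual
# pipeline is not public; all names and thresholds here are assumptions.
import hashlib


def screen_upload(image_bytes: bytes, known_hashes: set, classifier) -> str:
    """Return 'known_match', 'predicted_risk', or 'clear' for an upload."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in known_hashes:
        # Exact match against a database of previously verified material
        return "known_match"
    score = classifier.predict_risk(image_bytes)  # hypothetical, 0.0-1.0
    if score >= 0.5:
        # Novel content the model flags; routed to human review
        return "predicted_risk"
    return "clear"
```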

Future enhancements: Thorn plans to expand the system’s capabilities to provide more comprehensive protection against child exploitation.

  • Development is underway for an AI text classifier to identify conversations indicating potential child exploitation (a generic sketch follows this list)
  • The system is not currently designed to detect AI-generated CSAM, though future updates may address this emerging threat
  • The technology is part of a broader strategy combining detection with preventative measures
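
Thorn's text classifier is still in development and undisclosed, so the sketch below shows only a generic approach to scoring conversations, using scikit-learn with a placeholder two-example dataset; a real system would be trained on large, carefully curated data.

```python
# Generic sketch of a text classifier for risky conversations. Thorn's
# in-development classifier is not public; the tiny labeled dataset and
# the TF-IDF + logistic regression model here are placeholder assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: 1 = flagged by reviewers, 0 = benign
messages = [
    "example of a message reviewers flagged",
    "example of an ordinary benign message",
]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# At inference time, the positive-class probability can serve as a risk
# score surfaced to human moderators rather than acted on automatically.
risk_score = model.predict_proba(["a new incoming message"])[0][1]
print(f"risk score: {risk_score:.2f}")
```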

Looking ahead: As online platforms face increasing pressure to protect vulnerable users, this AI-powered approach represents a significant step forward in content moderation technology. Questions remain, however, about its effectiveness against evolving threats such as AI-generated content, and about the right balance between automation and human review in moderation decisions.

Source: AI trained on real child sex abuse images to detect new CSAM
