A new AI’s controversial training method allows it to detect child abuse images

Online platform safety advances with new AI technology designed to identify child exploitation content and stop it from being uploaded to the internet.

Revolutionary development: Thorn and AI company Hive have created a first-of-its-kind artificial intelligence model designed to detect previously unknown child sexual abuse material (CSAM) at the point of upload.

  • The model expands Thorn’s existing Safer detection tool by adding a new “Predict” feature that uses machine learning to score uploads
  • Training data includes real CSAM content from the National Center for Missing and Exploited Children’s CyberTipline
  • The system generates risk scores to assist human content moderators in making faster decisions about flagged content (a minimal sketch of this triage step follows the list)
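For illustration only, here is a minimal sketch of what such a score-then-review triage step could look like. The function names, threshold, and action labels are all hypothetical; this is not Thorn’s or Hive’s actual API, only the human-in-the-loop pattern the article describes.

```python
# Hypothetical triage step: a classifier's risk score decides whether an
# upload is queued for human review. Names and threshold are illustrative.
from dataclasses import dataclass

@dataclass
class UploadDecision:
    risk_score: float  # classifier output, assumed to be in [0, 1]
    action: str        # what the platform does next

def triage(risk_score: float, review_threshold: float = 0.7) -> UploadDecision:
    """Route an upload based on its CSAM risk score.

    The model does not make the final call: scores above the threshold
    are held for human moderators, mirroring the article's description
    of risk scores that speed up, rather than replace, human decisions.
    """
    if risk_score >= review_threshold:
        return UploadDecision(risk_score, "hold_for_human_review")
    return UploadDecision(risk_score, "publish")

print(triage(0.91))  # UploadDecision(risk_score=0.91, action='hold_for_human_review')
```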

Technical implementation: The AI model applies machine-learning classification to flag potential CSAM across the online platforms that deploy it.

  • The technology can be integrated into social media platforms, e-commerce sites, and dating applications (a two-stage sketch of such an integration follows the list)
  • Since 2019, the Safer tool has identified more than 6 million potential CSAM files
  • The system is designed to grow more accurate as it processes more content across the internet
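One plausible integration, sketched below under stated assumptions: Safer’s existing detection matches hashes of known material, so a combined check might consult a hash database first and fall back to the new classifier for previously unseen content. Every name here is invented, and production media matching relies on perceptual fingerprints (PhotoDNA-style hashes) rather than the cryptographic hash used below for brevity.

```python
# Illustrative two-stage upload check: hash match for known CSAM, then a
# classifier ("Predict"-style) score for content with no hash match.
# KNOWN_HASHES and check_upload are hypothetical names, not a real API.
import hashlib
from typing import Callable

KNOWN_HASHES: set = set()  # stand-in for a database of known-CSAM hashes

def check_upload(data: bytes, classify: Callable[[bytes], float]) -> dict:
    digest = hashlib.sha256(data).hexdigest()  # real systems use perceptual hashes
    if digest in KNOWN_HASHES:
        # Known material: no classifier needed, flag immediately.
        return {"match": "known", "risk_score": 1.0}
    # Previously unseen content falls through to the ML classifier.
    return {"match": None, "risk_score": classify(data)}

# Usage with a dummy classifier standing in for the trained model:
print(check_upload(b"\x89PNG...", classify=lambda b: 0.12))
```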

Future enhancements: Thorn plans to expand the system’s capabilities to provide more comprehensive protection against child exploitation.

  • Development is underway for an AI text classifier to identify conversations indicating potential child exploitation (a speculative sketch follows this list)
  • The system is not currently designed to detect AI-generated CSAM, though future updates may address this emerging threat
  • The technology is part of a broader strategy combining detection with preventative measures
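Because the text classifier is still in development and its design is not public, the following is only a speculative sketch of its shape: a per-message scoring model (injected here as a plain callable) surfaces the riskiest messages in a conversation for human review. All function names and the threshold are invented.

```python
# Speculative sketch of conversation-level text flagging. The scoring
# model is passed in as a callable; nothing here reflects Thorn's design.
from typing import Callable, List, Tuple

def flag_conversation(
    messages: List[str],
    score: Callable[[str], float],  # trained text model, returns risk in [0, 1]
    threshold: float = 0.8,
) -> List[Tuple[int, float]]:
    """Return (message_index, risk_score) pairs that cross the threshold,
    so moderators can review flagged exchanges in context."""
    flagged = []
    for i, text in enumerate(messages):
        risk = score(text)
        if risk >= threshold:
            flagged.append((i, risk))
    return flagged

# Usage with a dummy scorer (longer messages score higher, purely for demo):
print(flag_conversation(["hi", "x" * 400], score=lambda t: min(len(t) / 500, 1.0)))
```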

Looking ahead: As online platforms face increasing pressure to protect vulnerable users, this AI-powered approach represents a significant step forward in content moderation technology. Questions remain, however, about its effectiveness against evolving threats such as AI-generated content, and about the right balance between automation and human review in moderation decisions.

Source: AI trained on real child sex abuse images to detect new CSAM
