A new AI’s controversial training method allows it to detect child abuse images

Online platform safety advances with groundbreaking AI technology aimed at identifying child exploitation content and preventing it from being uploaded to the internet.

Revolutionary development: Thorn and AI company Hive have created a first-of-its-kind artificial intelligence model designed to detect previously unknown child sexual abuse material (CSAM) at the point of upload.

  • The model expands Thorn’s existing Safer detection tool by adding a new “Predict” feature that leverages machine learning technology
  • Training data includes real CSAM content from the National Center for Missing & Exploited Children’s CyberTipline
  • The system generates risk scores that help human content moderators make faster decisions about flagged content (a threshold-based sketch follows this list)
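
The article does not describe Predict’s actual interface, but a risk score that speeds up moderator decisions maps naturally onto threshold-based triage. The Python sketch below is a minimal illustration under that assumption; the `triage` function, the thresholds, and the action labels are all hypothetical, not Thorn’s real API.

```python
# A minimal, self-contained sketch of threshold-based triage. The
# thresholds, action labels, and the triage() function are illustrative
# assumptions, not Thorn's actual Safer/Predict interface.

def triage(risk_score: float,
           review_threshold: float = 0.4,
           block_threshold: float = 0.9) -> str:
    """Route an upload based on a classifier risk score in [0.0, 1.0]."""
    if risk_score >= block_threshold:
        return "block_and_report"   # high confidence: escalate immediately
    if risk_score >= review_threshold:
        return "human_review"       # uncertain: queue for a moderator
    return "allow"                  # low risk: publish normally

print(triage(0.95))  # block_and_report
print(triage(0.55))  # human_review
print(triage(0.10))  # allow
```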

Technical implementation: The AI model employs sophisticated machine learning algorithms to identify potential CSAM content across various online platforms.

  • The technology can be integrated into social media, e-commerce platforms, and dating applications (a simplified screening sketch follows this list)
  • Since 2019, the Safer tool has successfully identified more than 6 million potential CSAM files
  • The system is designed to improve its accuracy through continued use and exposure to more content across the internet
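
To see why a predictive model extends earlier detection, it helps to contrast it with hash matching, which only catches files already in a known-content database. The sketch below makes that contrast concrete; the hash set, the 0.9 threshold, and the `screen_upload` and `model_score` names are assumptions for illustration, not Safer’s real integration API.

```python
import hashlib

# Placeholder "known content" database keyed by SHA-256. Production
# systems use robust perceptual hashes that survive re-encoding; an
# exact-match set keeps the sketch short.
KNOWN_HASHES = {hashlib.sha256(b"example-known-file").hexdigest()}

def screen_upload(image_bytes: bytes, model_score: float) -> str:
    """Screen a file at the point of upload, before it is published."""
    if hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES:
        return "known_match"              # hash matching catches known files
    if model_score >= 0.9:                # assumed threshold for the model's score
        return "predicted_new_material"   # previously unseen content flagged
    return "no_detection"

print(screen_upload(b"example-known-file", 0.0))  # known_match
print(screen_upload(b"never-seen-before", 0.97))  # predicted_new_material
```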

Future enhancements: Thorn plans to expand the system’s capabilities to provide more comprehensive protection against child exploitation.

  • Development is underway for an AI text classifier that identifies conversations indicating potential child exploitation (a toy illustration follows this list)
  • The model is not currently designed to detect AI-generated CSAM, though future updates may address this emerging threat
  • The technology is part of a broader strategy combining detection with preventative measures
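
Thorn’s text classifier is still in development and its design has not been published, so the following is only a toy illustration of what a conversation-level scoring interface might look like; the phrase list and both function names are invented for the example.

```python
# A toy stand-in for conversation-level risk scoring. The phrase list
# and both functions are invented for illustration; Thorn's classifier
# is under development and its design is not public.

RISK_PHRASES = ("keep this a secret", "don't tell your parents")  # illustrative only

def score_message(text: str) -> float:
    """Return a naive per-message risk score in [0.0, 1.0]."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in RISK_PHRASES)
    return min(1.0, hits / len(RISK_PHRASES))

def score_conversation(messages: list[str]) -> float:
    """Take the max per-message score; a real model would use full context."""
    return max((score_message(m) for m in messages), default=0.0)

print(score_conversation(["hi there", "let's keep this a secret"]))  # 0.5
```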

Looking ahead: As online platforms face increasing pressure to protect vulnerable users, this AI-powered approach represents a significant step forward in content moderation technology. Questions remain, however, about its effectiveness against evolving threats such as AI-generated content, and about the right balance between automation and human review in moderation decisions.

