Meta Failed to Delete Explicit AI Deepfake, Oversight Board Rules

Meta’s Oversight Board found that Instagram failed to promptly take down an explicit AI-generated deepfake of an Indian public figure, revealing flaws in the company’s moderation practices.

Key findings and implications: The Oversight Board’s investigation reveals that Meta’s approach to moderating non-consensual deepfakes is overly reliant on media reports, potentially leaving victims who are not public figures more vulnerable:

  • Meta removed the deepfake of the Indian woman and added it to its internal database only after the board began its investigation, while a similar deepfake of an American woman was deleted promptly.
  • The board expressed concern that “many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance.”

Recommendations for clearer, expanded policies: The Oversight Board called on Meta to revise its rules around “derogatory sexualized Photoshop or drawings” to better protect victims of non-consensual deepfakes:

  • Meta should rephrase its policies to make clear that all non-consensual sexualized deepfakes are prohibited, regardless of whether the victim is a public figure.
  • The board advised replacing the term “Photoshop” with broader language encompassing various methods of altering and posting images online without consent.
  • It recommended relocating the rule from the “Bullying and Harassment” policy to the “Adult Sexual Exploitation” Community Standard for greater clarity and prominence.

Shortcomings in reporting and removal processes: The investigation also highlighted issues with Meta’s handling of user reports and content removal:

  • The deepfake of the Indian woman was removed only after the board flagged it; the original user report had been closed automatically when it went unreviewed for 48 hours.
  • The board expressed concern about the potential human rights impact of auto-closing reports and called for more information on Meta’s use of this practice.

Broader context and implications: This case underscores the growing challenges social media platforms face in combating the spread of non-consensual deepfakes and protecting victims:

  • The findings come as the U.S. Senate passes the DEFIANCE Act, which allows victims of sexually explicit deepfakes to sue their creators and distributors, and as other legislative efforts, such as the TAKE IT DOWN Act, aim to criminalize revenge porn.
  • Public comments received by the board emphasized the crucial role platforms must play as the first line of defense against deepfakes, particularly for victims who lack public prominence.

Moving forward, Meta will need to implement the board’s decision, reassess its deepfake policies and moderation practices, and explore proactive measures to protect all users from non-consensual sexual content, regardless of their public status. The case highlights the urgent need for robust, consent-focused rules and swift, consistent enforcement in the face of rapidly advancing deepfake technology.
