Meta’s Oversight Board Exposes Flaws in Instagram’s Deepfake Moderation

Meta’s Oversight Board found that Instagram failed to promptly take down an explicit AI-generated deepfake of an Indian public figure, revealing flaws in the company’s moderation practices.

Key findings and implications: The Oversight Board’s investigation reveals that Meta’s approach to moderating non-consensual deepfakes is overly reliant on media reports, potentially leaving victims who are not public figures more vulnerable:

  • Meta removed the deepfake of the Indian woman, and added it to its internal database, only after the board began investigating; a similar deepfake of an American woman had been quickly deleted.
  • The board expressed concern that “many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance.”

Recommendations for clearer, expanded policies: The Oversight Board called on Meta to revise its rules around “derogatory sexualized Photoshop or drawings” to better protect victims of non-consensual deepfakes:

  • Meta should rephrase its policies to state clearly that all non-consensual sexualized deepfakes are prohibited, regardless of whether the victim is a public figure.
  • The board advised replacing the term “Photoshop” with broader language encompassing various methods of altering and posting images online without consent.
  • It recommended moving this policy from the “Bullying and Harassment” Community Standard to the “Adult Sexual Exploitation” Community Standard for greater clarity and prominence.

Shortcomings in reporting and removal processes: The investigation also highlighted issues with Meta’s handling of user reports and content removal:

  • The deepfake of the Indian woman was removed only after the board flagged it; the original user report had been closed automatically after going 48 hours without staff review.
  • The board expressed concern about the potential human rights impact of auto-closing reports and called for more information on Meta’s use of this practice.

Broader context and implications: This case underscores the growing challenges social media platforms face in combating the spread of non-consensual deepfakes and protecting victims:

  • The findings come as the U.S. Senate passes the DEFIANCE Act, which allows victims of deepfake pornography to sue creators and distributors, and as other legislative efforts like the TAKE IT DOWN Act aim to criminalize revenge porn.
  • Public comments received by the board emphasized the crucial role platforms must play as the first line of defense against deepfakes, particularly for victims who lack public prominence.

Moving forward, Meta will need to implement the board’s decision, reassess its deepfake policies and moderation practices, and explore proactive measures to protect all users from non-consensual sexual content, regardless of their public status. The case highlights the urgent need for robust, consent-focused rules and swift, consistent enforcement in the face of rapidly advancing deepfake technology.

