Meta’s Oversight Board found that Instagram failed to promptly take down an explicit AI-generated deepfake of an Indian public figure, revealing flaws in the company’s moderation practices.

Key findings and implications: The Oversight Board’s investigation reveals that Meta’s approach to moderating non-consensual deepfakes is overly reliant on media reports, potentially leaving victims who are not public figures more vulnerable:

  • Meta only removed the deepfake of the Indian woman and added it to its internal database after the board began investigating, while a similar deepfake of an American woman was quickly deleted.
  • The board expressed concern that “many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance.”

Recommendations for clearer, expanded policies: The Oversight Board called on Meta to revise its rules around “derogatory sexualized Photoshop or drawings” to better protect victims of non-consensual deepfakes:

  • Meta should rephrase its policies to clearly emphasize that any non-consensual sexualized deepfakes are prohibited, regardless of the victim’s public status.
  • The board advised replacing the term “Photoshop” with broader language encompassing various methods of altering and posting images online without consent.
  • It recommended moving this policy from the “Bullying and Harassment” section to the “Adult Sexual Exploitation Community Standard” for greater clarity and prominence.

Shortcomings in reporting and removal processes: The investigation also highlighted issues with Meta’s handling of user reports and content removal:

  • The deepfake of the Indian woman was only removed after the board flagged it, as the original user report was automatically closed within 48 hours due to lack of staff response.
  • The board expressed concern about the potential human rights impact of auto-closing reports and called for more information on Meta’s use of this practice.

Broader context and implications: This case underscores the growing challenges social media platforms face in combating the spread of non-consensual deepfakes and protecting victims:

  • The findings come as the U.S. Senate has passed the DEFIANCE Act, which allows victims of deepfake pornography to sue creators and distributors, and as other legislative efforts, such as the TAKE IT DOWN Act, aim to criminalize revenge porn.
  • Public comments received by the board emphasized the crucial role platforms must play as the first line of defense against deepfakes, particularly for victims who lack public prominence.

Moving forward, Meta will need to implement the board’s decision, reassess its deepfake policies and moderation practices, and explore proactive measures to protect all users from non-consensual sexual content, regardless of their public status. The case highlights the urgent need for robust, consent-focused rules and swift, consistent enforcement in the face of rapidly advancing deepfake technology.

Meta Failed to Delete Explicit AI Deepfake, Oversight Board Rules
