Microsoft Removes 300,000 Non-Consensual Intimate Images from Bing

Microsoft takes action against non-consensual intimate images: The tech giant has partnered with StopNCII to combat the spread of explicit content, including AI-generated deepfakes, on its Bing image search platform.

  • Since March, Microsoft has removed nearly 300,000 intimate images posted without consent from Bing search results, a significant step in addressing the issue.
  • The company has integrated its PhotoDNA technology into StopNCII’s platform, enabling individuals over 18 to create digital fingerprints of images they wish to protect from online distribution.
  • These digital fingerprints are then shared with StopNCII’s partner platforms, including popular social media sites like Instagram, Facebook, and TikTok, to identify and remove matching content across the internet.

Pilot program success and expansion: Microsoft’s collaboration with StopNCII has shown promising results, leading to further developments in their approach to combating non-consensual intimate image abuse.

  • As of August, Microsoft had taken action on 268,899 images through its pilot program with StopNCII, highlighting both the scale of the problem and the effectiveness of the approach.
  • The company is now expanding its partnership with StopNCII to implement a “victim-centered approach to detection in Bing,” aiming to provide more comprehensive protection for individuals affected by this form of abuse.
  • Microsoft encourages adults concerned that their intimate images may be shared online to report them through the StopNCII platform, which offers a streamlined reporting process via the partnership.

Additional measures and considerations: Microsoft has implemented several other initiatives to address the broader issue of non-consensual intimate image sharing and AI-generated explicit content.

  • The company maintains its own reporting portal, allowing users to directly flag problematic content for review and potential removal.
  • In response to the growing concern over AI-generated deepfake nudes, Microsoft has stated that it will consider removing such content when reported, acknowledging the evolving nature of this issue.
  • For images involving individuals under 18, Microsoft emphasizes that these should be reported as child exploitation imagery, highlighting the distinct legal and ethical considerations for minors.

Broader implications and future directions: Microsoft’s efforts reflect the growing recognition of non-consensual intimate image sharing as a serious issue in the digital age.

  • The partnership between a major tech company and a specialized non-profit organization like StopNCII demonstrates the potential for cross-sector collaboration in addressing complex digital ethics issues.
  • As AI technology continues to advance, the challenge of combating AI-generated deepfakes is likely to become more pressing, requiring ongoing innovation and adaptation from tech companies and advocacy groups.
  • While these efforts represent significant progress, evolving technology and online behavior mean that continued vigilance and adaptation will be necessary to protect individuals’ privacy and dignity online.
