Meta Tweaks AI Labeling After Mislabeling Edited Photos as Artificial

Meta is updating its approach to labeling AI-generated content after its “Made with AI” tags confused users by incorrectly flagging some lightly edited photos as AI-made.

Key changes to Meta’s AI labeling policy: Meta is tweaking its AI labeling system in response to user feedback and guidance from its Oversight Board:

  • The “Made with AI” label will be changed to “AI info” across Meta’s apps, which users can click for more context
  • Meta is working with industry partners to improve its labeling approach so it better aligns with user expectations

The problem with the previous labeling system: Meta’s AI detection relied heavily on metadata to flag AI content, leading to issues:

  • Photos lightly edited in Photoshop were being labeled as AI-made, even though they weren’t fully generated by AI tools like DALL-E
  • Metadata indicating minor AI edits could be easily removed, allowing actual AI images to go undetected

Challenges in identifying AI content: There is currently no perfect solution for comprehensively detecting AI images online:

  • Metadata can be a flawed indicator, as it can be added to minimally edited photos or stripped from actual AI images
  • Ultimately, users still need to be vigilant and learn to spot clues that an image may be artificially generated
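The fragility of metadata as a signal is easy to demonstrate. The sketch below is a minimal illustration, not Meta's actual detection pipeline: using the Pillow imaging library and a hypothetical `ai_provenance` text key, it embeds a provenance tag in a PNG, then shows that an ordinary re-save silently drops it.

```python
import io
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated photo.
img = Image.new("RGB", (32, 32), "gray")

# Embed a provenance tag as a PNG text chunk ("ai_provenance" is a
# made-up key for illustration; real tools use standards like C2A).
meta = PngInfo()
meta.add_text("ai_provenance", "generated-by: example-model")

buf = io.BytesIO()
img.save(buf, format="PNG", pnginfo=meta)
buf.seek(0)

tagged = Image.open(buf)
print(tagged.text)  # the provenance chunk is present

# A plain re-save does not carry the text chunks over, so the
# "AI" signal vanishes while the pixels stay identical.
clean_buf = io.BytesIO()
tagged.save(clean_buf, format="PNG")
clean_buf.seek(0)

stripped = Image.open(clean_buf)
print(stripped.text)  # empty: the tag is gone
```

The converse failure mode is just as simple: an editing tool can attach such a chunk to a photo it merely retouched, which is how lightly edited images ended up flagged as AI-made.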

Balancing AI integration with transparency: As Meta pushes forward with AI tools across its platforms, it is grappling with how to responsibly label AI content:

  • Meta first announced plans to automatically detect and label AI images in February, also asking users to proactively disclose AI content
  • However, the initial labeling system led to confusion and frustration among users whose legitimately captured and edited photos were tagged as AI

Broader implications:

Meta’s challenges with accurately labeling AI content highlight the complex issues platforms face as AI-generated images become increasingly commonplace online. While Meta is taking steps to refine its approach based on user feedback, the difficulty in distinguishing lightly edited photos from wholly artificial ones underscores the need for a multi-pronged approach.

Technical solutions like metadata analysis will likely need to be combined with ongoing efforts to educate users about the hallmarks of AI imagery. Ultimately, maintaining transparency and trust as AI proliferates will require collaboration between platforms, AI companies, and users themselves.

