Meta Changes 'Made With AI' Policy After Mislabeling Images

Meta is updating its approach to labeling AI-generated content after its “Made with AI” tags confused users by incorrectly flagging some lightly edited photos as AI-generated.

Key changes to Meta’s AI labeling policy: In response to user feedback and guidance from its Oversight Board, Meta is making two adjustments:

  • The “Made with AI” label will be changed to “AI info” across Meta’s apps, which users can click for more context
  • Meta is working with industry partners to improve its labeling approach so it better aligns with user expectations

The problem with the previous labeling system: Meta’s AI detection relied heavily on embedded image metadata to flag AI content, which led to issues (a simplified sketch of this kind of metadata check follows the list below):

  • Photos that were lightly edited in Photoshop were being labeled as AI-made, even if they weren’t fully generated by AI tools like DALL-E
  • Conversely, that same metadata could be easily stripped, allowing fully AI-generated images to go undetected
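
For context, provenance signals of this kind typically live in an image’s embedded metadata, such as the IPTC “Digital Source Type” field or C2PA Content Credentials that editing tools can write. The sketch below is a deliberately naive illustration of that style of check, not Meta’s actual detection pipeline: it scans a file’s raw bytes for known AI source-type marker strings; the marker values are drawn from the IPTC vocabulary, while the function name and byte-scanning shortcut are assumptions made for illustration.

```python
# Naive illustration of metadata-based AI flagging (not Meta's real pipeline).
# The marker strings come from the IPTC "Digital Source Type" vocabulary that
# tools such as Photoshop can embed; scanning raw bytes is a crude stand-in
# for a proper XMP/C2PA parser.

AI_MARKERS = (
    b"trainedAlgorithmicMedia",               # fully AI-generated media
    b"compositeWithTrainedAlgorithmicMedia",  # captured photo with AI edits
)

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's embedded metadata mentions an AI source type."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in AI_MARKERS)

print(looks_ai_generated("photo.jpg"))  # hypothetical input file
```

A check like this fires on any image whose metadata mentions AI involvement at all, which is exactly why lightly retouched photos ended up carrying the same label as fully synthetic ones.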

Challenges in identifying AI content: There is currently no perfect solution for comprehensively detecting AI images online:

  • Metadata is a flawed indicator: it can be added to minimally edited photos or stripped from fully AI-generated images (the snippet after this list shows how easily it is removed)
  • Ultimately, users still need to be vigilant and learn to spot clues that an image may be artificially generated
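
The flip side is just as simple: because the signal lives entirely in metadata, re-encoding an image discards it. The snippet below is a hypothetical demonstration using the Pillow library; by default, a plain re-save writes fresh image bytes without carrying the EXIF/XMP blocks across, so any provenance markers disappear with them.

```python
# Hypothetical demonstration: a default re-save with Pillow does not copy the
# source image's EXIF/XMP metadata, so embedded AI-provenance markers are lost.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Re-encode the image without passing its metadata through."""
    with Image.open(src) as img:
        img.save(dst)  # no exif/xmp arguments, so provenance metadata is dropped

strip_metadata("ai_photo.jpg", "ai_photo_stripped.jpg")  # hypothetical file names
# A metadata check like the sketch above would no longer flag the output file.
```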

Balancing AI integration with transparency: As Meta pushes forward with AI tools across its platforms, it is grappling with how to responsibly label AI content:

  • Meta first announced plans to automatically detect and label AI images in February, also asking users to proactively disclose AI content
  • However, the initial labeling system led to confusion and frustration among users whose legitimately captured and edited photos were tagged as AI

Broader implications:

Meta’s challenges with accurately labeling AI content highlight the complex issues platforms face as AI-generated images become increasingly commonplace online. While Meta is taking steps to refine its approach based on user feedback, the difficulty in distinguishing lightly edited photos from wholly artificial ones underscores the need for a multi-pronged approach.

Technical solutions like metadata analysis will likely need to be combined with ongoing efforts to educate users about the hallmarks of AI imagery. Ultimately, maintaining transparency and trust as AI proliferates will require collaboration between platforms, AI companies, and users themselves.
