If You Didn’t Set Your Photos to Private, Meta Probably Trained Its AI on Them

Meta’s extensive data usage for AI training: Meta has confirmed that it has been using public posts and photos from adult Facebook and Instagram users since 2007 to train its artificial intelligence models.

  • The revelation came during an Australian government inquiry into AI adoption, where Meta’s global privacy director, Melinda Claybaugh, initially denied the practice but later confirmed its full extent.
  • This practice includes all public text and photo content posted by adult users on Facebook and Instagram over the past 17 years.
  • Users who have not explicitly set their posts to private have had their data included in Meta’s AI training datasets.

Privacy concerns and user consent: The scale of Meta’s data collection has raised significant privacy concerns, particularly regarding user awareness and consent.

  • Many users who posted content as far back as 2007 may not have been aware that their data would be used for AI training purposes.
  • The company has been vague about the specifics of its data usage, including when the scraping began and exactly what has been collected.
  • Setting posts to private now will prevent future scraping but does not remove data that has already been collected for AI training.

Age-related data usage policies: Meta claims to have safeguards in place for younger users, but questions remain about the effectiveness of these measures.

  • The company states it does not scrape data from users under 18 years old.
  • However, Meta confirmed it would scrape public photos of children from adult accounts.
  • There is no clarity on how Meta treats accounts that were created by minors whose owners have since turned 18.

Regional differences in data protection: The ability to opt out of AI training data collection varies significantly based on geographical location and local privacy regulations.

  • European users can opt out due to stringent local privacy regulations.
  • Meta has been banned from using Brazilian personal data for AI training.
  • Users in other regions, including Australia and the United States, cannot opt out if they want to keep their posts public.
  • Claybaugh was unable to confirm if users outside the EU would be given the option to opt out in the future.

Regulatory implications: The revelation has sparked discussions about the need for stronger privacy laws and regulations globally.

  • Australian Senator David Shoebridge highlighted that if Australia had privacy laws similar to those in Europe, Australian users’ data would have been protected.
  • The lack of action on privacy legislation in many countries has allowed companies like Meta to continue monetizing and exploiting user data, including content involving children.

Transparency and communication issues: Meta’s handling of this situation raises questions about corporate transparency and communication with users.

  • The company’s vague responses to previous inquiries about data usage have contributed to a lack of clarity for users.
  • Meta’s privacy center and blog posts acknowledge the use of public posts for AI training, but the full extent of this practice was not widely known until now.

Broader implications for AI development: This situation highlights the complex relationship between data collection, AI development, and user privacy in the digital age.

  • While extensive datasets are crucial for developing advanced AI models, the ethical implications of using personal data without explicit consent are significant.
  • The incident underscores the need for a global conversation about balancing technological advancement with individual privacy rights and data protection.
  • As AI continues to evolve, clearer guidelines and regulations may be necessary to ensure responsible data usage and protect user privacy across different jurisdictions.