Meta's privacy battle with Europe over AI training data has escalated as advocacy group NOYB challenges the company's data practices. The dispute centers on whether Meta can use personal data for AI training without explicit user consent, with NOYB arguing that Meta's claim of "legitimate interest" violates GDPR principles. This confrontation represents the latest chapter in ongoing tensions between European privacy regulators and major tech platforms over data protection rights.
The big picture: Privacy advocacy group NOYB has launched a new challenge against Meta’s plans to resume AI training using European user data, threatening a class action lawsuit.
- The group sent Meta a cease and desist letter after the company announced it would restart AI training using public posts, comments, and user interactions with Meta AI.
- This follows Meta’s initial pause of AI training in the EU and European Economic Area last June after concerns from the Irish Data Protection Commission.
Key details: Meta claims it has legal clearance to proceed based on a December opinion from the European Data Protection Board (EDPB).
- Meta stated: “We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations.”
- NOYB disputes this interpretation, with chair Max Schrems saying: “As far as we have heard, Meta has ‘engaged’ with the authorities, but this hasn’t led to any ‘green light’.”
The central dispute: NOYB contests Meta’s claim that it has a “legitimate interest” to use user data without explicit opt-in consent.
- Schrems argues: “The European Court of Justice has already held that Meta cannot claim a ‘legitimate interest’ in targeting users with advertising. How should it have a ‘legitimate interest’ to suck up all data for AI training?”
- The advocacy group compares this to Meta's earlier data collection practices for advertising, for which the company was required to obtain specific opt-in consent following GDPR litigation in 2023.
Additional concerns: NOYB claims Meta may be unable to comply with other GDPR requirements in its AI operations.
- The group highlights potential issues with the right to be forgotten, the right to have incorrect data rectified, and users’ access rights to their data in an AI system.
- A particular problem cited is Meta’s provision of AI models as open-source software, which NOYB argues makes it impossible to recall or update models once published.
What they’re saying: Schrems describes the conflict as fundamentally about consent versus data appropriation.
- “This fight is essentially about whether to ask people for consent or simply take their data without it,” Schrems stated.
- He further noted: "Meta's absurd claims that stealing everyone's personal data is necessary for AI training is laughable. Other AI providers do not use social network data—and generate even better models than Meta."