EU authorities open door to AI using personal data without consent

The European Data Protection Board’s latest guidance explores how companies can develop AI models while adhering to data privacy regulations, particularly focusing on the use of personal data in training processes.

Key framework developments: The EDPB’s new report outlines potential pathways for using personal data in AI training without explicit consent, marking a significant shift in regulatory thinking.

  • The guidance suggests that personal data could be used for AI training if the final application does not reveal private information about individuals
  • This interpretation acknowledges the technical distinction between training data and the information ultimately delivered to end users
  • The framework aims to balance innovation needs with privacy protection under GDPR rules

Compliance assessment criteria: The EDPB has developed specific questions to help organizations evaluate their AI systems’ compliance with data protection regulations.

  • Companies must demonstrate the point at which personal data stops being processed in their AI models
  • Organizations need to show that their AI models, in their final deployed form, do not process personal data
  • Businesses must identify and mitigate any factors that could cause their AI models to process personal data inadvertently

Data anonymization considerations: The report emphasizes the complexity of data anonymization and provides guidance for implementing appropriate safeguards.

  • Anonymization requirements must be evaluated on a case-by-case basis
  • Organizations need to consider data subjects’ reasonable expectations when handling personal information
  • Factors such as public availability of data, service nature, and data collection context play crucial roles in compliance assessment

Legitimate interest analysis: The guidance explores the concept of “legitimate interest” as a potential legal basis for data processing in AI development.

  • Organizations must evaluate whether their use of personal data serves a legitimate business purpose
  • Companies need to demonstrate they have considered less intrusive alternatives for achieving their objectives
  • The assessment should balance business needs against individual privacy rights

Industry reception: European business leaders have shown mixed reactions to the EDPB’s guidance.

  • Some industry leaders welcome the framework as providing needed clarity for AI development
  • Others have expressed concerns about specific aspects of the requirements
  • The guidance has sparked debate about the practical implementation of these requirements in real-world AI development

Future implications: This regulatory framework could shape how AI development proceeds in Europe while influencing global standards for responsible AI development, though questions remain about practical implementation and enforcement mechanisms in complex AI systems.

