EU Watchdog Probes X’s Use of User Data for AI Without Consent

The EU’s privacy watchdog is questioning X (formerly Twitter) over its use of user posts to train the xAI chatbot Grok without obtaining consent, potentially violating GDPR rules.

Key details: The EU watchdog expressed surprise and is seeking clarity on X’s data practices, which may not comply with GDPR requirements for obtaining user consent before using personal data:

  • X users were likely opted in by default to have their posts used as training data for Grok, an AI chatbot developed by Elon Musk’s AI company xAI.
  • Under GDPR, companies must obtain explicit consent from users before using their personal data for purposes like training AI models.
  • The EU watchdog told the Financial Times it was “surprised” by X’s actions and is now “seeking clarity” on the issue.

Broader implications: This development highlights the increasing scrutiny tech companies face over their AI data practices, particularly in the EU where GDPR sets strict standards for user privacy and consent:

  • As AI becomes more prevalent, the sources and methods used to train these systems are coming under greater regulatory oversight.
  • The EU has been at the forefront of efforts to protect user privacy and establish clear rules around the use of personal data, putting it on a collision course with tech giants racing to develop and deploy AI.
  • The outcome of the EU watchdog’s inquiry into X could set an important precedent for how social media data can be used to train AI models and the consent required from users.

Looking ahead: X and other tech companies will likely face growing pressure to be more transparent about their AI data practices and to give users greater control over how their information is used:

  • Regulators in the EU and elsewhere are poised to take a harder line against companies that fail to prioritize user privacy and consent in their rush to develop AI technologies.
  • As awareness of these issues grows among the public, companies that proactively address privacy concerns and give users clear choices may gain a competitive advantage in terms of trust and adoption.
  • The X case underscores the need for ongoing dialogue between regulators, tech companies, and the public to strike the right balance between innovation and privacy in the age of AI.
EU privacy watchdog questions X.
