EU Watchdog Probes X’s Use of User Data for AI Without Consent

The EU’s privacy watchdog is questioning X (formerly Twitter) over its use of user posts to train the xAI chatbot Grok without obtaining consent, potentially violating GDPR rules.

Key details: The EU watchdog expressed surprise and is seeking clarity on X’s data practices, which may not comply with GDPR requirements for obtaining user consent before using personal data:

  • X users appear to have been opted in by default to having their posts used as training data for Grok, an AI chatbot developed by Elon Musk’s AI company xAI.
  • Under GDPR, companies generally need a valid legal basis, such as explicit user consent, before processing personal data for purposes like training AI models.
  • The watchdog told the Financial Times it was “surprised” by X’s actions and is now “seeking clarity” on the issue.

Broader implications: This development highlights the increasing scrutiny tech companies face over their AI data practices, particularly in the EU where GDPR sets strict standards for user privacy and consent:

  • As AI becomes more prevalent, the sources and methods used to train these systems are coming under greater regulatory oversight.
  • The EU has been at the forefront of efforts to protect user privacy and establish clear rules around the use of personal data, putting it on a collision course with tech giants racing to develop and deploy AI.
  • The outcome of the EU watchdog’s inquiry into X could set an important precedent for how social media data can be used to train AI models and the consent required from users.

Looking ahead: X and other tech companies will likely face growing pressure to be more transparent about their AI data practices and to give users greater control over how their information is used:

  • Regulators in the EU and elsewhere are poised to take a harder line against companies that fail to prioritize user privacy and consent in their rush to develop AI technologies.
  • As awareness of these issues grows among the public, companies that proactively address privacy concerns and give users clear choices may gain a competitive advantage in terms of trust and adoption.
  • The X case underscores the need for ongoing dialogue between regulators, tech companies, and the public to strike the right balance between innovation and privacy in the age of AI.
