The EU’s privacy watchdog is questioning X (formerly Twitter) over its use of user posts to train the xAI chatbot Grok without obtaining consent, potentially violating GDPR rules.

Key details: The EU watchdog expressed surprise and is seeking clarity on X’s data practices, which may not comply with GDPR requirements for obtaining user consent before using personal data:

  • X users were likely opted in by default to having their posts used as training data for Grok, an AI chatbot developed by Elon Musk’s AI company xAI.
  • Under GDPR, companies must obtain explicit consent from users before using their personal data for purposes like training AI models.
  • The watchdog told the Financial Times it was “surprised” by X’s actions and is now “seeking clarity” on the issue.

Broader implications: This development highlights the increasing scrutiny tech companies face over their AI data practices, particularly in the EU where GDPR sets strict standards for user privacy and consent:

  • As AI becomes more prevalent, the sources and methods used to train these systems are coming under greater regulatory oversight.
  • The EU has been at the forefront of efforts to protect user privacy and establish clear rules around the use of personal data, putting it on a collision course with tech giants racing to develop and deploy AI.
  • The outcome of the EU watchdog’s inquiry into X could set an important precedent for how social media data can be used to train AI models and the consent required from users.

Looking ahead: X and other tech companies will likely face growing pressure to be more transparent about their AI data practices and to give users greater control over how their information is used:

  • Regulators in the EU and elsewhere are poised to take a harder line against companies that fail to prioritize user privacy and consent in their rush to develop AI technologies.
  • As awareness of these issues grows among the public, companies that proactively address privacy concerns and give users clear choices may gain a competitive advantage in terms of trust and adoption.
  • The X case underscores the need for ongoing dialogue between regulators, tech companies, and the public to strike the right balance between innovation and privacy in the age of AI.
