The EU’s privacy watchdog is questioning X (formerly Twitter) over its use of user posts to train the xAI chatbot Grok without obtaining consent, potentially violating GDPR rules.

Key details: The EU watchdog expressed surprise and is seeking clarity on X’s data practices, which may not comply with GDPR requirements for obtaining user consent before using personal data:

  • X users were likely opted in by default to have their posts used as training data for Grok, an AI chatbot developed by Elon Musk’s AI company xAI.
  • Under GDPR, companies must obtain explicit consent from users before using their personal data for purposes like training AI models.
  • The EU watchdog told the Financial Times it was “surprised” by X’s actions and is now “seeking clarity” on the issue.

Broader implications: This development highlights the increasing scrutiny tech companies face over their AI data practices, particularly in the EU where GDPR sets strict standards for user privacy and consent:

  • As AI becomes more prevalent, the sources and methods used to train these systems are coming under greater regulatory oversight.
  • The EU has been at the forefront of efforts to protect user privacy and establish clear rules around the use of personal data, putting it on a collision course with tech giants racing to develop and deploy AI.
  • The outcome of the EU watchdog’s inquiry into X could set an important precedent for how social media data can be used to train AI models and the consent required from users.

Looking ahead: X and other tech companies will likely face growing pressure to be more transparent about their AI data practices and to give users greater control over how their information is used:

  • Regulators in the EU and elsewhere are poised to take a harder line against companies that fail to prioritize user privacy and consent in their rush to develop AI technologies.
  • As awareness of these issues grows among the public, companies that proactively address privacy concerns and give users clear choices may gain a competitive advantage in terms of trust and adoption.
  • The X case underscores the need for ongoing dialogue between regulators, tech companies, and the public to strike the right balance between innovation and privacy in the age of AI.
