X Introduces User Opt-Out for Grok AI Training Data

The social media platform X now provides a setting that lets users prevent their posts and interactions from being used to train and fine-tune the company’s Grok AI assistant.

Key details of the opt-out feature:

  • The setting is accessible on the web and will soon be available on mobile.
  • Unchecking a single box opts users out of having their posts, interactions, inputs, and results with Grok used for training and fine-tuning.
  • Private accounts are automatically excluded from having their posts used to train Grok’s underlying model or generate responses to user queries.

Accessing the opt-out setting: On the web, navigate through the following steps:

  • Click the three dots menu
  • Select “Settings and privacy”
  • Choose “Privacy and safety”
  • Click on “Grok”

X’s communication about AI training data usage:

  • X’s privacy policy, last updated in September 2023, mentions that the company may use collected information and publicly available data to help train machine learning or artificial intelligence models.
  • The availability of the opt-out setting was not widely communicated, with some users discovering it through reshared posts and an archived version of X’s About page for Grok from May.

Potential impact on user privacy and control:

  • The introduction of the opt-out feature demonstrates X’s effort to provide users with more control over how their data is used for AI training purposes.
  • As concerns about data privacy and the use of personal information for AI development continue to grow, the setting may help ease those worries and promote transparency in how X leverages user data for its AI initiatives.
