LinkedIn Is Training Its AI Models on Your Data — Here’s How to Opt Out

LinkedIn’s AI training initiative: LinkedIn has quietly rolled out a policy that lets it use member data to train generative AI models, opting users in automatically without explicit consent.

Opting out of AI training: Users who want to keep their data out of future AI model training must take specific steps to do so.

  • To opt out, users need to navigate to the Data privacy tab in their account settings and toggle off the “Data for Generative AI Improvement” option.
  • LinkedIn clarifies that opting out will only prevent future use of personal data for AI training and does not affect any training that has already taken place.
  • The company says it uses privacy-enhancing technologies to redact or remove personal data from its training sets (the sketch below illustrates the general idea).
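
LinkedIn doesn’t say which privacy-enhancing technologies it relies on, but automated redaction of direct identifiers is a common baseline for this kind of claim. The Python sketch below is a hypothetical illustration of that general technique, not LinkedIn’s actual pipeline; the regex patterns and placeholder tokens are assumptions for demonstration only.

```python
import re

# Hypothetical illustration of PII redaction before text enters a
# training set. This is NOT LinkedIn's pipeline; it only shows the
# general idea of rule-based redaction.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")   # e.g. jane.doe@example.com
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")      # e.g. +1 (555) 123-4567

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com or +1 (555) 123-4567."
    print(redact(sample))  # -> Reach me at [EMAIL] or [PHONE].
```

Rules like these only catch obvious identifiers; production systems typically layer machine-learning-based entity recognition on top to catch names, addresses, and other contextual personal data.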

Additional opt-out requirements: LinkedIn’s data usage extends beyond generative AI models, requiring users to take extra steps to fully protect their data.

  • The platform also relies on non-generative machine learning tools for purposes such as personalization and content moderation.
  • To opt out of data usage for these non-generative AI tools, users must separately fill out the LinkedIn Data Processing Objection Form.
  • This two-step opt-out process highlights the complexity of data usage policies and the potential for user confusion.

Geographic considerations: LinkedIn’s AI training policy varies by user location, with users in certain regions excluded from the training program.

  • Users residing in the European Union, European Economic Area, or Switzerland are not included in the AI model training program.
  • This geographic distinction underscores the impact of regional data protection regulations, such as the GDPR, on corporate AI development practices.

Implications for user privacy: LinkedIn’s decision to automatically opt users into AI training raises concerns about data privacy and user consent.

  • The lack of proactive notification about this significant change in data usage has sparked criticism from privacy advocates.
  • This incident highlights the ongoing debate surrounding the balance between technological advancement and individual privacy rights in the digital age.
  • Users may be unaware of how their professional and personal information shared on the platform could be utilized in AI development.

Broader context of AI data collection: LinkedIn’s approach to AI training data collection reflects a growing trend among tech companies.

  • The revelation comes amidst increased scrutiny of how major tech platforms acquire and use user data for AI development.
  • This incident, along with Meta’s recent admission that it has trained its AI on public user posts, suggests that the practice of using user data for AI training is more widespread than previously known.
  • It raises questions about the transparency of tech companies regarding their data usage policies and the extent of user control over personal information.

Analyzing the implications: LinkedIn’s AI training policy underscores the complex relationship between user data, technological innovation, and privacy concerns in the digital age.

  • The automatic opt-in approach taken by LinkedIn may set a precedent for other platforms, potentially normalizing the use of user data for AI training without explicit consent.
  • This incident highlights the need for increased transparency from tech companies about their data usage practices and more robust regulations to protect user privacy in the era of AI development.
  • As AI continues to advance, the balance between leveraging user data for innovation and respecting individual privacy rights will likely remain a contentious issue, requiring ongoing scrutiny and dialogue.
