LinkedIn faces class-action lawsuit for training AI models on user messages

Microsoft’s professional networking platform LinkedIn faces a class-action lawsuit in California over allegations it used private messages to train AI models without proper user consent.

Key allegations: A LinkedIn Premium subscriber has filed a lawsuit claiming the platform unlawfully shared private direct messages with third parties for AI training purposes.

  • The lawsuit alleges violations of the federal Stored Communications Act and California’s Unfair Competition Law, as well as breach of contract
  • The plaintiff is seeking $1,000 in damages and additional potential relief
  • LinkedIn has denied these claims, stating they are “false claims with no merit”

Data privacy concerns: LinkedIn implemented an opt-out setting for AI training data usage, but kept it enabled by default, raising questions about user awareness and consent.

  • The platform stopped training AI models on UK-based user data following concerns from British regulators
  • Data from users based in the EU and Switzerland is also excluded from AI training
  • Users can opt out through Settings > Data Privacy > Data for Generative AI Improvement

Corporate relationships and data sharing: LinkedIn’s data-sharing practices with “affiliates” extend to Microsoft-owned companies but allegedly exclude Microsoft-backed OpenAI.

  • Microsoft has acquired over 270 companies since 1986, including five AI companies
  • The exact recipients and users of LinkedIn user data remain unclear
  • The lawsuit suggests involvement of “another provider” in AI training

Privacy implications: The lawsuit raises concerns about the permanent integration of user data in AI systems and potential unauthorized future use.

  • Private conversations could potentially appear in other Microsoft products
  • The complaint notes LinkedIn has not offered to delete data from existing AI models
  • There are no apparent plans to retrain models to remove disclosed information

Looking ahead: This case highlights the growing tension between AI development and user privacy rights, particularly regarding the practice of using personal communications for AI training without explicit consent. The outcome could set important precedents for how social media platforms handle user data in AI development.

