LinkedIn’s AI training initiative: LinkedIn has implemented a new policy that allows the company to use user data for training generative AI models, with users automatically opted in without explicit consent.
- The professional networking platform introduced a new privacy setting and opt-out form before updating its privacy policy to reflect this change.
- LinkedIn states that it uses generative AI for features such as writing assistance, but the extent of data usage and potential applications remains unclear.
- This move follows a recent admission by Meta that it has been scraping users’ public (non-private) data for AI model training since 2007.
Opting out of AI training: Users who wish to prevent their data from being used for future AI model training must take specific steps to opt out.
- To opt out, users need to navigate to the Data privacy tab in their account settings and toggle off the “Data for Generative AI Improvement” option.
- LinkedIn clarifies that opting out will only prevent future use of personal data for AI training and does not affect any training that has already taken place.
- The company claims to use privacy-enhancing technologies to redact or remove personal data from its training sets (a rough sketch of what such redaction can look like appears below).
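LinkedIn has not disclosed how this redaction works, so the snippet below is only a minimal illustrative sketch of one common approach: pattern-based scrubbing of obvious identifiers, such as email addresses and phone numbers, before text enters a training corpus. The regex patterns, placeholder tokens, and `redact` helper are assumptions for illustration, not LinkedIn’s actual pipeline.

```python
import re

# Hypothetical patterns for two common identifier types; a real
# privacy-enhancing pipeline would cover many more PII categories.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a typed placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Example: a profile snippet before it might enter a training set.
print(redact("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# -> "Reach me at [EMAIL] or [PHONE]."
```

Pattern matching alone misses names, employers, and other context-dependent identifiers, which is why production de-identification systems typically layer on named-entity recognition and related techniques.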
Additional opt-out requirements: LinkedIn’s AI training policy extends beyond generative AI models, requiring users to take extra steps to fully protect their data.
- The platform uses other machine learning tools for purposes such as personalization and moderation, which do not generate content.
- To opt out of data usage for these non-generative AI tools, users must separately fill out the LinkedIn Data Processing Objection Form.
- This two-step opt-out process highlights the complexity of data usage policies and the potential for user confusion.
Geographic considerations: LinkedIn’s AI training policy varies by user location, with users in certain regions excluded from training-data collection.
- Users residing in the European Union, European Economic Area, or Switzerland are not included in the AI model training program.
- This geographic distinction underscores the impact of regional data protection regulations on corporate AI development practices.
Implications for user privacy: LinkedIn’s decision to automatically opt users into AI training raises concerns about data privacy and user consent.
- The lack of proactive notification about this significant change in data usage has sparked criticism from privacy advocates.
- This incident highlights the ongoing debate surrounding the balance between technological advancement and individual privacy rights in the digital age.
- Users may be unaware of how their professional and personal information shared on the platform could be utilized in AI development.
Broader context of AI data collection: LinkedIn’s approach to AI training data collection reflects a growing trend among tech companies.
- The revelation comes amidst increased scrutiny of how major tech platforms acquire and use user data for AI development.
- This incident, along with Meta’s recent admission, suggests that the practice of utilizing user data for AI training may be more widespread than previously known.
- It raises questions about the transparency of tech companies regarding their data usage policies and the extent of user control over personal information.
Analyzing the implications: LinkedIn’s AI training policy underscores the complex relationship between user data, technological innovation, and privacy concerns in the digital age.
- The automatic opt-in approach taken by LinkedIn may set a precedent for other platforms, potentially normalizing the use of user data for AI training without explicit consent.
- This incident highlights the need for increased transparency from tech companies about their data usage practices and more robust regulations to protect user privacy in the era of AI development.
- As AI continues to advance, the balance between leveraging user data for innovation and respecting individual privacy rights will likely remain a contentious issue, requiring ongoing scrutiny and dialogue.