WeTransfer has confirmed it does not use files uploaded to its service to train artificial intelligence models, following significant customer backlash over confusing terms of service changes. The file-sharing company updated its language to clarify that content moderation—not AI training—was the intended purpose, highlighting how unclear AI policies can quickly erode user trust in digital platforms.
What happened: WeTransfer faced widespread criticism on social media after updating its terms of service in late June or early July with language that users interpreted as granting permission to use their files for AI training.
- The original terms stated WeTransfer could use content “including to improve performance of machine learning models that enhance our content moderation process.”
- The company also included rights to “reproduce, distribute, modify,” or “publicly display” files uploaded to the service.
- Creative professionals, including illustrators and actors who regularly use the platform to share work, expressed concerns and considered switching to alternative providers.
The clarification: A WeTransfer spokeswoman told BBC News the company does not use machine learning or AI to process shared content, nor does it sell content or data to third parties.
- The clause was initially added to “include the possibility of using AI to improve content moderation” and identify harmful content.
- WeTransfer updated the terms on Tuesday, stating it had “made the language easier to understand” to avoid confusion.
- The revised clause now says: “You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.”
Why this matters: The incident reflects growing sensitivity around AI training data and the importance of transparent communication from tech companies about how user content is handled.
- Users are increasingly vigilant about whether their creative work might be used to train AI models without explicit consent.
- The backlash demonstrates how quickly unclear terms of service can damage customer relationships, particularly among creative professionals who depend on these platforms for their work.
Similar incidents: WeTransfer joins other file-sharing and cloud storage companies that have faced similar scrutiny over AI-related terms.
- Dropbox, a cloud storage company, had to issue a similar clarification in December 2023, confirming it was not using uploaded files to train AI models after a social media outcry.
- These incidents suggest a pattern of companies struggling to communicate AI policies clearly to users.