A recent update to WeTransfer’s terms of service caused consternation after some of its customers feared that it meant content from files uploaded to the popular file-sharing service would automatically be used to train AI models.
But the Netherlands-based company insisted on Tuesday that this is not the case, saying in a statement that it “does not sell user content to third parties,” nor does it “use AI in connection with customer content.”
The updated terms of service that prompted the criticism were sent to customers earlier this month and marked as going into effect on August 8, 2025. The text stated that WeTransfer could use content shared on its service for purposes “including to improve performance of machine learning models that enhance our content moderation process.”
The new wording was widely interpreted as granting WeTransfer the right to use customer-uploaded files to train AI models. Many users reacted strongly, accusing WeTransfer of giving itself the right to share or sell customer content to AI companies hungry for fresh data to train their AI technologies.
On Tuesday, WeTransfer tried to reassure its users by saying in a statement that “your content is always your content,” and that “we don’t use machine learning or any form of AI to process content shared via WeTransfer.”
It continued: “The passage that caught most people’s eye was initially updated to include the possibility of using AI to improve content moderation and further enhance our measures to prevent the distribution of illegal or harmful content on the WeTransfer platform. Such a feature hasn’t been built or used in practice, but it was under consideration for the future.”
It said that it had removed the mention of machine learning from its terms, “as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension.”
The revised section now states: “You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.”
The controversial episode highlights the growing sensitivity among people toward having their content used for AI model training. Artists, musicians, and writers, for example, have been protesting strongly against AI companies using their work to train AI models without asking for permission or offering compensation.
The troubling episode is also a lesson for other online companies to be clearer about how they handle user data, as misunderstandings over AI can quickly escalate into a major backlash.