A recent update to WeTransfer’s terms of service caused consternation after some of its customers feared that it meant content from files uploaded to the popular file-sharing service would automatically be used to train AI models.
But the Netherlands-based company insisted on Tuesday that this is not the case, saying in a statement that it “does not sell user content to third parties,” nor does it “use AI in connection with customer content.”
The updated terms of service that prompted the criticism were sent to customers earlier this month and marked as going into effect on August 8, 2025. The text stated that WeTransfer could use content shared on its service for purposes “including to improve performance of machine learning models that enhance our content moderation process.”
The new wording was widely interpreted as granting WeTransfer the right to use customer-uploaded files to train AI models. Many users reacted strongly, accusing WeTransfer of giving itself the right to share or sell customer content to AI companies hungry for fresh data to train their AI technologies.
On Tuesday, WeTransfer tried to reassure its users by saying in a statement that “your content is always your content,” and that “we don’t use machine learning or any form of AI to process content shared via WeTransfer.”
It continued: “The passage that caught most people’s eye was initially updated to include the possibility of using AI to improve content moderation and further enhance our measures to prevent the distribution of illegal or harmful content on the WeTransfer platform. Such a feature hasn’t been built or used in practice, but it was under consideration for the future.”
It said that it had removed the mention of machine learning from its terms, “as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension.”
The revised section now states: “You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.”
The controversial episode highlights the growing sensitivity among people toward having their content used for AI model training. Artists, musicians, and writers, for example, have been protesting strongly against AI companies using their work to train AI models without asking for permission or offering compensation.
It is also a lesson for other online companies to be clearer about how they handle user data, as misunderstandings over AI can, as this case shows, quickly escalate into a major backlash.