Anthropic is changing how it uses data from Claude users, asking them to decide by September 28 whether their conversations can be included in future AI training. The move introduces new rules on data retention and consent, giving individuals the option to opt out if they do not want their chats used to train future models.
What’s Changing
Until now, Anthropic has not used consumer chat data to train its models. With the update, the company plans to train on conversations and coding sessions from users who do not opt out, and data from these accounts could be stored for up to five years. This is a sharp shift from the earlier policy, under which prompts and outputs were automatically deleted after 30 days unless flagged for policy violations or retained for legal reasons.
The update applies to Claude Free, Pro, and Max users, as well as Claude Code. However, enterprise customers using Claude Gov, Claude for Work, Claude for Education, or API access will not be affected, following a similar approach taken by OpenAI to shield business clients.
Why the Shift Matters
Anthropic says the change supports model improvements, claiming that shared user data will help build safer, more accurate systems while strengthening skills in coding, analysis, and reasoning. The company frames this as a way for users to contribute to stronger models in the future.
However, industry analysts note that the update also reflects competition across AI companies. Training advanced systems requires access to large volumes of real-world interactions, and Claude’s user data could give Anthropic an edge against rivals like OpenAI and Google.
Concerns Around Consent
The policy has raised questions about transparency. New users will make the choice at sign-up, but existing users face a pop-up with a prominent “Accept” button, while the toggle that permits training on their data appears in smaller print and is switched on by default. Critics warn that this design may lead many to agree without realising it.
Privacy experts argue that true consent is difficult to achieve when policies are buried in complex language or hidden in fine print. Regulators, including the U.S. Federal Trade Commission, have already warned AI companies against quietly changing data policies without clear disclosure.