Mirror Brief
    Technology

    Anthropic users face a new choice – opt out or share your chats for AI training

By Emma Reynolds | August 29, 2025
Anthropic says some Claude models can now end ‘harmful or abusive’ conversations

    Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked about what prompted the move, we’ve formed some theories of our own.

    But first, what’s changing: Previously, Anthropic didn’t use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations and coding sessions, and it said it’s extending data retention to five years for those who don’t opt out.

    That is a massive update. Previously, users of Anthropic’s consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic’s back end within 30 days “unless legally or policy‑required to keep them longer” or their input was flagged as violating its policies, in which case a user’s inputs and outputs might be retained for up to two years.

    By consumer, we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, mirroring how OpenAI shields its enterprise customers from data-training policies.

    So why is this happening? In that post about the update, Anthropic frames the changes around user choice, saying that by not opting out, users will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.” Users will “also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”

    In short, help us help you. But the full truth is probably a little less selfless.

    Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.


    Beyond the competitive pressures of AI development, the changes would also seem to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for instance, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.

    In June, OpenAI COO Brad Lightcap called this “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.” The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with Zero Data Retention agreements are still protected.

    What’s alarming is how much confusion all of these changing usage policies are creating for users, many of whom remain oblivious to them.

    In fairness, everything is moving quickly now, so as the tech changes, privacy policies are bound to change. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies’ other news. (You wouldn’t think Tuesday’s policy changes for Anthropic users were very big news based on where the company placed this update on its press page.)

    Image credits: Anthropic

    But many users don’t realize the guidelines to which they’ve agreed have changed because the design practically guarantees it. Most ChatGPT users keep clicking on “delete” toggles that aren’t technically deleting anything. Meanwhile, Anthropic’s implementation of its new policy follows a familiar pattern.

    How so? New users will choose their preference during signup, but existing users face a pop-up with “Updates to Consumer Terms and Policies” in large text and a prominent black “Accept” button with a much tinier toggle switch for training permissions below in smaller print — and automatically set to “On.”

    As observed earlier today by The Verge, the design raises concerns that users might quickly click “Accept” without noticing they’re agreeing to data sharing.

    Meanwhile, the stakes for user awareness couldn’t be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print.”

    Whether the commission — now operating with just three of its five commissioners — still has its eye on these practices today is an open question, one we’ve put directly to the FTC.

    Emma Reynolds is a senior journalist at Mirror Brief, covering world affairs, politics, and cultural trends for over eight years. She is passionate about unbiased reporting and delivering in-depth stories that matter.
