
Why OpenAI’s users fleeing to Claude is not a simple good-guy-bad-guy narrative

A controversy over military contracts and data access is reshaping the competitive landscape of artificial intelligence, and raising questions that go far beyond which chatbot you prefer.

By early 2026, something unusual started happening in the world of artificial intelligence: users were publicly breaking up with ChatGPT.

Not because the technology had gotten worse. Not because of a bug or outage. But because of a deal its maker, OpenAI, struck with the U.S. government, one that many users saw as a fundamental betrayal of trust.

The exodus to Anthropic’s Claude wasn’t just a consumer preference shift. It represented something deeper: a growing awareness that the AI systems we’re inviting into our most private thoughts and professional workflows are increasingly entangled with government surveillance infrastructure.

The Deal That Changed Everything

In late February 2026, OpenAI announced it had reached an agreement to provide AI capabilities to the U.S. Department of Defense. The timing was significant. The deal came just hours after the Trump administration directed all federal agencies to stop using AI services provided by Anthropic, Claude’s creator, which had been publicly resisting Pentagon demands to allow wide military use of its AI without prohibitions on autonomous weapons or mass surveillance.

The backlash was immediate and fierce.

Sam Altman, OpenAI’s CEO, acknowledged in a statement that the company had made missteps. “One thing I think I did wrong: we shouldn’t have rushed to get this out on Friday,” Altman said. ChatGPT uninstalls were reported to have surged by 295% after the initial agreement was announced. OpenAI subsequently promised to amend the terms of the deal, though the specifics of those amendments remained unclear.

But for many users, the damage was done. The company that had positioned itself as building AI “for all of humanity” had chosen to align itself with military and intelligence agencies in ways that felt fundamentally at odds with that mission. Days later, Reuters reported that OpenAI was also considering a contract with NATO.

What We Actually Know About Government Access

Photo credit: PeopleImages.com – Yuri A/Shutterstock

Here’s where things get complicated, because the question of whether governments can access your ChatGPT conversations doesn’t have a simple yes-or-no answer.

Officially, OpenAI, like Google, Microsoft, and other major tech companies, states that it does not provide blanket government access to user data. Any data sharing, the company says, happens only through legal processes: warrants, subpoenas, or national security letters.

But post-Snowden, we know that framework doesn’t tell the whole story.

Documents revealed by Edward Snowden in 2013 described a program called PRISM, through which the NSA collected user data from major tech companies; the leaked slides claimed “direct access” to company servers. Companies initially denied knowledge of such access, then later acknowledged cooperation under legal compulsion while insisting they didn’t provide “backdoors.”

The distinction matters, but it’s also somewhat semantic. Whether access comes through a legal demand, a national security letter (which companies are often forbidden from disclosing), or a technical backdoor, the practical result for user privacy can be similar.

With AI systems, the stakes are potentially higher. Unlike a search query or email, conversations with AI assistants often involve people thinking through problems, expressing doubts, exploring ideas they might not share elsewhere. The intimacy of that interaction makes the question of access more sensitive.

Why Anthropic Became the Alternative

Screenshot of the author’s Claude AI chat on screen. Image credit: The Queen Zone

Anthropic, founded in 2021 by former OpenAI researchers including siblings Dario and Daniela Amodei, positioned itself from the beginning as the “safety-first” AI company.

The company’s approach, called Constitutional AI, involves training models according to explicit ethical principles. More relevant to the current controversy, Anthropic has publicly stated it will not allow its AI to be used for autonomous weapons, mass surveillance, or certain categories of military application.

When the Pentagon pressed for contract terms that would not explicitly prohibit such uses, Anthropic refused, even as a Pentagon-imposed deadline loomed.

That stance made Claude the obvious destination for users uncomfortable with OpenAI’s government partnerships. Social media filled with posts announcing switches to Claude, often framed in explicitly political terms about surveillance, civil liberties, and corporate responsibility.

But the reality is more nuanced than a simple good-guy-bad-guy narrative.

The Uncomfortable Truth About All AI Companies

Computer data analysis. Photo credit: FAMILY STOCK via Shutterstock

Every major AI company operates under the legal jurisdiction of governments. Every one can be compelled to provide data. Every one stores at least some user information, if only for safety monitoring and system improvement.

The differences are real but often overstated:

OpenAI has been more willing to pursue government and defense contracts, and historically used more user conversation data for training (though with opt-out options).

Anthropic has taken a more restrictive stance on military applications and emphasizes not using most user conversations for model training by default.

Google’s Gemini sits within an ecosystem that already collects vast amounts of user data across search, email, location, and browsing, making the AI component just one more data point.

The choice between them is less about absolute privacy and more about which set of tradeoffs you find more acceptable.

Bruce Schneier, the cryptographer and security technologist, has written extensively about this dynamic. “We’ve outsourced our digital lives to corporations,” he noted, “and those corporations are inevitably subject to government pressure. If the government wants to investigate us, they’re more likely to go through our data than they are to search our homes.”

The Next Privacy Frontier: AI That Watches Everything

Hands on a laptop with an AI assistant interface. Photo credit: pitinan via 123RF

Here’s what should actually keep privacy advocates up at night: the current controversy over chatbots may be a distraction from a much larger shift already underway.

The next generation of AI systems won’t wait for you to type a prompt. They’ll observe continuously, reading your emails, monitoring your documents, watching your browsing, learning your habits, anticipating your needs.

Microsoft’s Copilot, Google’s AI integrations, and Apple’s upcoming AI features all move in this direction: AI assistants embedded across your digital life, with persistent memory and cross-platform access.

This is fundamentally different from chatbot interactions. You control what you tell ChatGPT or Claude. You don’t control what an observational AI learns by watching everything you do.

Shoshana Zuboff, author of The Age of Surveillance Capitalism, has argued that this kind of behavioral data collection represents “an unprecedented concentration of knowledge and power.” The concern isn’t just government access, it’s the aggregation of behavioral data that reveals things about us we may not consciously know about ourselves.

A 2015 study from Cambridge University researchers demonstrated this unsettling reality: given nothing but Facebook likes, an algorithm could judge a person’s personality more accurately than a coworker (with 10 likes), a friend (70 likes), a family member (150 likes), or a spouse (300 likes).

Future AI assistants will have access to vastly richer data: not just what you like, but what you write, when you work, who you talk to, what you buy, where you go, what you read, and how long you spend on each activity.

Over time, these systems may understand patterns in your behavior that you yourself cannot see. They might recognize that you write best in the morning, that certain topics trigger procrastination, that your communication tone shifts predictably with stress, that you’re most vulnerable to certain kinds of persuasion at particular times.

That knowledge could be beneficial—helping you structure your life more effectively. Or it could be weaponized for manipulation, targeted advertising, workplace monitoring, or political persuasion.

Who Controls the Insights?

This brings us to the central question that the OpenAI controversy really represents: Who should control the insights AI systems develop about individuals?

If you control them, AI could be genuinely empowering, a tool for self-knowledge and productivity.

If institutions control them, whether corporations or governments, it becomes surveillance, even if benevolently intended.

The European Union has moved most aggressively to regulate this space. The AI Act, which came into force in 2024, classifies AI systems by risk level and imposes strict requirements on high-risk applications, including those used for biometric identification, critical infrastructure, and law enforcement.

In the United States, regulation has been more fragmented. Some states have passed AI transparency laws, but there’s no comprehensive federal framework. The Biden administration issued an executive order on AI safety in 2023, but the Trump administration rescinded it in early 2025, and its current approach to AI regulation remains in flux.

Meanwhile, the technology races ahead of policy.

The Broader Pattern: Dual-Use Technology and Democratic Oversight

The OpenAI-Pentagon controversy fits into a much older pattern: the tension between innovation, national security, and civil liberties.

Encryption provides the clearest parallel. Strong encryption protects privacy, free speech, and economic security. It also protects criminals and terrorists. Governments have consistently pressured tech companies to provide backdoors for law enforcement, while security experts have consistently argued that backdoors for “good guys” inevitably create vulnerabilities for bad actors.

AI presents similar dilemmas, but with higher stakes. The same technology that could help diagnose diseases could enable autonomous weapons. The same systems that could improve education could enable unprecedented surveillance.

Matthew Guariglia, a senior policy analyst at the Electronic Frontier Foundation, argues that the key question is democratic oversight. “It’s not whether AI companies work with government,” he has written, “it’s whether that work happens with transparency, public debate, and meaningful constraints. Right now, we’re getting secret deals and after-the-fact justifications.”

The OpenAI deal exemplifies that pattern. Users learned about it from news reports, not from OpenAI. The scope and limitations were unclear. The public debate happened only after the arrangement was already in place.

What Should Users Actually Do?

For individuals concerned about privacy, the practical advice is surprisingly consistent across security experts:

1. Assume anything you tell an AI could potentially be accessed. Whether through legal process, data breach, or future policy changes, treat AI conversations as you would email: convenient but not truly private.

2. Avoid sharing sensitive personal information. Don’t include financial details, health information, passwords, or identifying information about yourself or others.

3. Use local AI when possible. Systems that run entirely on your device (like some Apple AI features) don’t send data to cloud servers, eliminating a major privacy risk. A brief example of what this looks like in practice follows this list.

4. Review privacy settings. Most AI systems allow you to opt out of having conversations used for training. Actually do it.

5. Stay informed about corporate policies. AI companies are updating their government cooperation policies in real time. What’s true today may change tomorrow.

6. Support regulatory frameworks. Individual actions matter, but systemic privacy protection requires policy changes that create enforceable standards across companies.
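
For readers curious about point 3, here is a minimal sketch of “local AI,” assuming you have installed the open-source llama-cpp-python library and downloaded an open-weight model in GGUF format. The model path below is a placeholder, not a real file name; the point is architectural, namely that the prompt and the response never leave your machine.

# A minimal sketch of running an AI model entirely on your own device,
# using the open-source llama-cpp-python library. The model path is a
# placeholder; any locally downloaded GGUF-format model would work.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/local-model.gguf",  # placeholder: path to a model file on your disk
    n_ctx=2048,                               # context window size
    verbose=False,
)

# The prompt is processed locally; nothing is sent to a cloud server.
result = llm(
    "Summarize the privacy tradeoffs of cloud-based chatbots in two sentences.",
    max_tokens=128,
)

print(result["choices"][0]["text"])

Running a model this way trades convenience and capability for control: local models are generally smaller and slower than cloud services, but the conversation never becomes someone else’s data.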

The Bigger Question

Ultimately, the migration from ChatGPT to Claude represents something more significant than a consumer preference shift. It’s an early skirmish in a much larger battle over the future of AI: whether these systems will be tools that empower individuals or instruments of institutional control.

The answer won’t be determined by which chatbot is more popular this year. It will be shaped by regulatory frameworks, corporate decisions, technical architectures, and public pressure over the next decade.

What makes this moment particularly important is that many of these decisions are being made right now, often without adequate public debate or democratic input. The infrastructure being built today, the data collection practices, the government partnerships, the technical architectures, will be much harder to change once established.

The ChatGPT exodus to Claude might seem like a small thing, a consumer choice in a crowded market. But it’s also a signal: a growing public awareness that the AI systems we’re inviting into our lives come with profound implications for privacy, autonomy, and power.

Whether that awareness translates into meaningful change depends on what happens next, not just in corporate boardrooms or government offices, but in the choices we make about what we’re willing to accept and what we’re willing to demand.

The technology is moving fast. The question is whether democratic governance can keep up.

This article was researched and written with the help of both ChatGPT and Claude, and reviewed, edited, and fact-checked by me.


Author


Robin Jaffin is a strategic communicator and entrepreneur dedicated to impactful storytelling, environmental advocacy, and women's empowerment. As Co-Founder of The Queen Zone™, Robin amplifies women's diverse experiences through engaging multimedia content across global platforms. Additionally, Robin co-founded FODMAP Everyday®, an internationally recognized resource improving lives through evidence-based health and wellness support for those managing IBS. With nearly two decades at Verité, Robin led groundbreaking initiatives promoting human rights in global supply chains.

