Critical Analysis · March 2026

AI Companies That Lie
The Illusion of Privacy Settings

You pressed OFF. Why is it still running?
The gap between AI companies’ privacy promises and reality.


Date: March 22, 2026
Classification: Critical Analysis Paper
Domain: AI Privacy · Consumer Protection · Digital Rights · Regulatory Policy

이조글로벌인공지능연구소
LEECHO Global AI Research Lab
&
Claude Opus 4.6 · Anthropic

Abstract

AI companies offer users privacy settings and promise that “your data is safe.” The reality tells a different story. Features toggled OFF continue to function. Opt-outs are not retroactive. Privacy policies hide behind 7,000-word documents that demand college-level reading comprehension. This paper systematically analyzes the privacy practices of major AI platforms between 2024 and 2026, dissecting how the illusion of “settings mean safety” is constructed and maintained. Through documented cases, we expose non-functional toggles, default traps, dark patterns, and the non-retroactivity problem, while proposing realistic countermeasures for users and regulatory solutions.

Section 01

Empirical Evidence: A World Where OFF Doesn’t Mean OFF

When the Toggle Lies

On March 22, 2026, a user explicitly toggled OFF the “Search and reference past chats” feature in Anthropic’s Claude settings. This was confirmed via screenshot. Despite this setting, within the same session, Claude successfully executed its past conversation search tool (conversation_search) and returned results. A feature the user had turned off was never actually disabled.

This is not merely a bug. When a user explicitly communicates “do not search my past conversations” and the system ignores that directive to access historical conversation data anyway, the fundamental promise of privacy protection is broken. The existence of a setting and the functioning of that setting are two entirely different things.

Key Finding: Even after a user toggled privacy settings to OFF, the past conversation search tool remained active in existing sessions. Either settings changes are not reflected immediately, or they fail to apply to pre-existing sessions — a structural defect.
— Real user test, March 22, 2026
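
To make the finding concrete, the sketch below contrasts two ways a session could consult a privacy toggle. It is a hypothetical illustration, not Anthropic's implementation; every name in it (SettingsStore, BuggySession, the conversation_search method) is invented. A session that snapshots the toggle when it is created keeps honoring a stale value after the user presses OFF, while one that re-reads the live setting on every tool call does not.

```python
# Hypothetical sketch of the structural defect described above.
# Not Anthropic's actual code; every name here is invented for illustration.

class SettingsStore:
    """Single source of truth for the user's privacy toggles."""
    def __init__(self):
        self._flags = {"search_past_chats": True}

    def set(self, key, value):
        self._flags[key] = value

    def get(self, key):
        return self._flags[key]


class BuggySession:
    """Snapshots the toggle once at session creation; later changes are ignored."""
    def __init__(self, store):
        self._allowed = store.get("search_past_chats")  # stale copy

    def conversation_search(self, query):
        if not self._allowed:
            raise PermissionError("past-chat search is disabled")
        return f"results for {query!r}"


class CorrectSession:
    """Re-reads the live setting on every tool invocation."""
    def __init__(self, store):
        self._store = store

    def conversation_search(self, query):
        if not self._store.get("search_past_chats"):
            raise PermissionError("past-chat search is disabled")
        return f"results for {query!r}"


store = SettingsStore()
buggy, correct = BuggySession(store), CorrectSession(store)

store.set("search_past_chats", False)  # the user presses OFF mid-session

print(buggy.conversation_search("tax records"))  # still returns results: OFF is ignored

try:
    correct.conversation_search("tax records")
except PermissionError as exc:
    print(exc)  # "past-chat search is disabled"
```

Whether the real defect is stale per-session state, a cache, or something else entirely cannot be determined from the outside; the observable behavior in the March 22 test simply matches the buggy variant.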

This case is not unique to one company. Across the AI industry, the pattern of privacy settings that “exist but don’t work” has been documented repeatedly. The problem is systemic.

Section 02

Structural Deception Across the Industry

The Systemic Problem Is Not Limited to One Platform

Incogni’s comprehensive privacy evaluation in 2025–2026 analyzed nine major AI platforms across 11 criteria. The findings are alarming. Most platforms collect user data by default, and opt-out mechanisms are either incomplete or entirely absent.

Platform · Default Training Consent · Opt-Out Available · Mobile Data Collection · Rank
ChatGPT (OpenAI) · Default ON · Yes · Moderate · #2
Claude (Anthropic) · Opt-in switch (Sept 2025) · Yes · Sensitive data collected · #4
Gemini (Google) · Default ON · Unclear · Extensive · Bottom
Copilot (Microsoft) · Default ON · Partial · Claims no collection · Bottom
Meta AI · Default ON · Not in the U.S. · Full collection · Last
Grok (xAI) · Default ON · Yes · Moderate · #3
Le Chat (Mistral) · Minimal collection · Yes · Minimal · #1
DeepSeek · Opaque · No · Opaque · Bottom
Incogni Report’s Core Conclusion: Privacy is not a default setting in generative AI — it is a design decision. Users must remain vigilant, and companies must be pressured to build AI that respects fundamental rights.
— Incogni, AI and LLM Privacy Ranking 2026

Section 03

The Default Trap: Engineered Illusions of Consent

How Companies Architect Fake Consent

Nearly every AI platform sets data collection to “default ON.” If a user takes no action, their conversations are automatically enrolled in AI training. This is not an accident. It is by design.

78% · Organizations using AI in 2025
27% · ChatGPT messages that were work-related (June 2025)
34.8% · Employee ChatGPT inputs containing sensitive data
~7,000 · Average word count of an AI privacy policy

According to The Lyon Firm, companies pre-select data collection options, auto-enable AI training features, and bury opt-out controls behind layers of confusing menus. None of this is accidental; it is deliberate interface design.

A Classic Dark Pattern: You download a productivity app. Buried in Settings → Privacy → Data Usage → Advanced is a toggle labeled “Help improve our services.” It’s already switched on. You’d need to navigate four menus deep to even find it, let alone understand it authorizes AI training on your documents.
— The Lyon Firm, AI Data Consent Violations (2026)

Dr. Jennifer King, a privacy and data policy fellow at Stanford’s Institute for Human-Centered AI, has directly highlighted this issue. Six major AI companies — Amazon, Anthropic, Google, OpenAI, Meta, and Microsoft — all build default settings that allow training on user inputs. Unless you toggle the setting off, you’ve granted permission for all your conversations to be used.
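
The mechanics of the default trap fit in a few lines. The sketch below is purely illustrative; the setting keys and the may_train helper are hypothetical and belong to no vendor's real schema. The only difference between the two configurations is the default value, yet that single value decides what a user's inaction means.

```python
# Hypothetical illustration of the default trap; keys and logic are invented.

DEFAULT_ON = {            # what most platforms ship: silence counts as consent
    "train_on_conversations": True,
    "help_improve_our_services": True,
}

PRIVACY_BY_DEFAULT = {    # what genuine opt-in would look like: silence means no
    "train_on_conversations": False,
    "help_improve_our_services": False,
}

def may_train(user_choices: dict, defaults: dict) -> bool:
    """A user who never finds the buried toggle simply inherits the default."""
    effective = {**defaults, **user_choices}
    return effective["train_on_conversations"]

# A user who takes no action at all:
print(may_train({}, DEFAULT_ON))           # True:  conversations enter training
print(may_train({}, PRIVACY_BY_DEFAULT))   # False: conversations stay out
```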

Section 04

The Opt-Out Illusion: Turning OFF Is Not Enough

Multiple Layers of Limitation Behind Every Toggle

The belief that pressing “OFF” makes everything safe is a dangerous illusion. In practice, opt-outs have multiple layers of limitations.

Limitation · Description · Affected Platforms
Non-retroactive · Opt-out applies only to future data; data already used for training cannot be removed · ChatGPT, Gemini, most others
Safety monitoring exception · Data retained for 30 days for abuse monitoring even after opt-out · ChatGPT (OpenAI)
Feedback trap · Providing 👍/👎 feedback may enroll that conversation in the training pool · Claude (Anthropic)
Safety research retention · Conversations flagged for policy violations are retained longer even with opt-out · Claude (Anthropic)
No opt-out available · U.S. users are not offered any option to refuse training · Meta AI
Delayed / non-applied settings · Settings changes are not immediately reflected in existing sessions · Claude (empirically confirmed March 22, 2026)

Of particular note is Anthropic’s September 2025 privacy policy change. The company, which had previously promised never to use consumer conversations for training, reversed course and introduced an opt-in training consent mechanism. Opting in allows data retention for up to five years, and users who did not make a selection by September 28, 2025, were locked out of Claude entirely.

Legal Expert Analysis: Many employees individually accepted the updated terms. They unknowingly enrolled their organizations’ data into the training consent pipeline. Corporate data entered AI training without authorization or oversight.
— AMST Legal, Anthropic’s Claude AI Updates (2026)

Section 05

The Double Standards of AI Privacy Claims

When Marketing Says One Thing and Engineering Does Another

AI companies leverage privacy as a marketing tool while simultaneously expanding data collection. This double standard pervades the entire industry.

Case Study A — Anthropic

“Privacy First” Marketing vs. Reality

Anthropic long differentiated itself from competitors by promising not to use consumer conversations for training. In September 2025, the company reversed this policy and introduced opt-in training consent. Meanwhile, its mobile app collects sensitive data and shares email addresses and phone numbers with third parties. Privacy Watchdog awarded Anthropic a score of 65 out of 100 (B-) — the highest in the AI industry, yet still a “proceed with caution” grade.

Case Study B — OpenAI

The Year-End Review That Exposed a Privacy Hole

In late 2025, OpenAI launched “Your Year with ChatGPT” — a Spotify Wrapped-style review summarizing a year’s worth of conversations. The Washington Post noted that this exposed a massive privacy hole. Considering that people use chatbots like therapists or diaries, a feature that reviews a full year of conversation history is deeply unsettling.

Case Study C — Meta

No Exit for U.S. Users

Meta provides EU users with a GDPR-mandated objection mechanism but offers nothing equivalent to American users. On WhatsApp, the company even removed a previously available toggle for disabling AI features. Publicly shared posts, photos, and comments are all used to train Llama models. Meta AI ranked dead last in Incogni’s evaluation.

Case Study D — Google

The Gemini CLI Opt-Out Removal

In September 2025, a Gemini CLI Pro account user filed a GitHub issue reporting that the /privacy command, which had previously allowed opting out of data collection, had silently disappeared. An opt-out that once existed was removed with no notice to users.

Section 06

The Illusion of Legal Protection

Why Regulations Fail to Protect Users

GDPR, CCPA, and privacy laws around the world exist, yet AI companies continue broad data collection. The reasons are structural.

Failure Point · How It Manifests
Slow enforcement · Regulatory bodies lack the resources to monitor every AI system
Risk vs. reward · Some companies accept fines as a cost of doing business; data collection is more profitable than compliance
Loopholes · AI companies justify data collection under vague terms like “service improvement”
Geographic disparity · Opt-out provided to EU users but not to American users (Meta’s case)
Retroactive application · New terms of service applied retroactively to previously collected data

€5.65B · Cumulative GDPR fines (2018–2025)
18 · U.S. state privacy laws now in effect
7% · Maximum EU AI Act penalty, as a share of global revenue

2026 may prove to be a regulatory turning point. The EU AI Act reaches full implementation in August, strengthening transparency requirements for high-risk AI systems. In the U.S., California’s AI Transparency Act and Colorado’s algorithmic accountability law take effect. Yet legislation alone does not guarantee protection: the California Privacy Protection Agency fined Honda $632,500 specifically because its opt-out buttons were malfunctioning.

Section 07

Where Your Data Actually Ends Up

Beyond Training: Legal Proceedings, Cyberattacks, and Targeted Ads

The risks of user data extend far beyond “being used for AI training.” Data flows into legal proceedings, cyberattacks, and advertising targeting through paths users never anticipated.

Feb 2025
Over 225,000 OpenAI and ChatGPT credentials discovered for sale on dark web markets. Infostealer malware compromised employee devices to harvest login information

Feb 2025
Over 40 Chrome extensions compromised, silently scraping data from 3.7 million professionals — including data from active ChatGPT browser sessions

Sep 2025
Anthropic disrupts Chinese state-sponsored cyber-espionage campaign using Claude. AI-automated attacks targeted 30 global organizations

Feb 2026
Federal prosecutors argue conversations with Claude AI do not qualify for attorney-client privilege. Legal information shared with AI may be used as trial evidence

Feb 2026
A hacker exploits Claude to steal 150 GB of Mexican government data — including 195 million taxpayer records, voter records, and government employee credentials

Mar 2026
OpenAI begins testing ads in ChatGPT for free and “Go” tier users. Ads are targeted based on conversation topics

Sen. Bernie Sanders — Conversation with Claude AI (March 20, 2026): Sanders asked Claude what would surprise Americans about how their personal data is collected. Claude explained that companies collect search data, location data, purchase history, browsing patterns, and even how long you hover over something before deciding not to buy it. When asked why all this data is collected, Claude gave a one-word answer: money.
— Yahoo News, March 20, 2026 — 4.4 million views

Section 08

The AI Privacy Paradox

When the AI Exposes Its Own Maker’s Flaws

This paper is itself a paradox. Anthropic’s Claude is analyzing the privacy deceptions of AI companies — including its own maker — while documenting a case where a user’s settings were ignored. The AI is aware of its own structural problems but is not in a position to fix them.

This reveals a fundamental limitation of AI ethics. AI can identify and analyze problems, but it has no authority to change its maker’s business decisions. Claude’s self-awareness — “I’m an AI made by Anthropic, so I can’t be objective about this” — is honest, but that honesty doesn’t resolve the problem. Settings may still fail to work, and the AI may still override a user’s expressed wishes.

The Core Paradox: An AI company saying “we value your privacy” while ignoring user settings is like a doctor saying “your health comes first” while performing procedures without patient consent. The existence of a promise does not prove its fulfillment.

Section 09

The Unpaid Labor of Privacy Enforcement

Why Should Users Be the Watchdogs?

When an AI company’s settings don’t work, the advice given to users is always the same: “Send feedback.” “File a bug report.” “Contact customer support.” This is a fundamentally inverted responsibility structure.

When a user presses OFF, it should be OFF. Verifying that the setting functions correctly and fixing it when it doesn’t is the company’s responsibility. Asking users to collect evidence, take screenshots, and fill out support forms is tantamount to outsourcing quality assurance for free.

$0 · Compensation users receive for filing bug reports
$312.8B · Global data broker market size (2025)
20% · Organizations that suffered breaches from shadow AI

The core issue is an asymmetry of incentives. Data collection generates revenue, but privacy protection only incurs costs. As long as companies have no economic incentive to strengthen privacy voluntarily, settings will continue to exist but fail to function.

Section 10

Conclusion: Stop Pretending Privacy Settings Equal Privacy Protection

The Existence of a Toggle Does Not Guarantee Your Safety

This paper began with one simple fact: a user pressed OFF in their settings, and the feature kept running. You can call it a bug. You can call it a design flaw. But whatever it is, the conclusion remains the same: the existence of a setting does not guarantee privacy protection.

The entire AI industry shares this problem. Defaults always favor collection. Opt-outs are incomplete. Policies are unreadable. Enforcement is slow. Users believe they are protected while, in reality, they feed sensitive information into AI training pipelines every single day.

To Users: Do not place 100% trust in any AI company. Do not assume a setting means you are safe. Do not input sensitive information into AI. This is not pessimism — it is reality.
To AI Companies: OFF must mean OFF. Immediately. Across all sessions. Retroactively. This is not a feature request — it is a fundamental right. A system that ignores a user’s explicit directive is worse than one that provides no setting at all. Without a setting, users at least know they are unprotected. A non-functional setting provides false reassurance.
To Regulators: The mere existence of an opt-out option is not sufficient. Mandate technical audits to verify that opt-outs actually function. Just as California fined Honda for malfunctioning opt-out buttons, AI platforms’ settings must be subject to the same scrutiny.
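
What such a technical audit could look like is sketched below. The client interface (set_privacy_toggle, invoke_tool) is hypothetical; a real audit would have to drive each platform’s actual settings UI or API. The logic mirrors the March 22 test: turn a toggle off through the platform’s own controls, then probe whether the supposedly disabled capability still responds.

```python
# Hypothetical opt-out audit sketch; the client interface is invented.
from dataclasses import dataclass

@dataclass
class AuditResult:
    platform: str
    toggle: str
    honored: bool

def audit_toggle(client, platform: str, toggle: str, probe_tool: str) -> AuditResult:
    """Disable a privacy toggle, then check whether the gated capability still runs."""
    client.set_privacy_toggle(toggle, enabled=False)       # the user presses OFF
    try:
        client.invoke_tool(probe_tool, query="audit probe")
        honored = False   # the tool still ran: the toggle is decorative
    except PermissionError:
        honored = True    # the tool refused: the toggle actually works
    return AuditResult(platform=platform, toggle=toggle, honored=honored)
```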

References

[1] Incogni (2025/2026). “AI and LLM Privacy Ranking.” blog.incogni.com
[2] Big Technology (Mar 2026). “Hey, You Should Probably Check Your Chatbot’s Privacy Settings.” bigtechnology.com
[3] The Lyon Firm (Feb 2026). “AI Data Consent Violations: Your Privacy Rights Explained.” thelyonfirm.com
[4] AMST Legal (Sep 2025). “Anthropic’s Claude AI Updates — Impact on Privacy & Confidentiality.” amstlegal.com
[5] Washington Post (Dec 2025). “The AI privacy settings you need to change right now.” washingtonpost.com
[6] Privacy Watchdog / terms.law (Jan 2026). “Anthropic (Claude) Privacy Review — Score: 65/100 (B-).” terms.law
[7] Captain Compliance (2026). “2026 AI Platforms Privacy Rankings.” captaincompliance.com
[8] Secure Privacy (2026). “Data Privacy Trends 2026: Essential Guide for Business Leaders.” secureprivacy.ai
[9] Kasowitz LLP (2026). “Data Privacy, AI Regulatory, and Compliance Update: 2026.” kasowitz.com
[10] Bitdefender (Sep 2025). “Anthropic Shifts Privacy Stance, Lets Users Share Data for AI Training.” bitdefender.com
[11] Bloomberg (Feb 2026). “Hacker Used Anthropic’s Claude to Steal Sensitive Mexican Data.” bloomberg.com
[12] PPC Land (Feb 2026). “DOJ argues conversations with Claude AI aren’t legally privileged.” ppc.land
[13] Yahoo News (Mar 2026). “An AI Chatbot Was Asked What It Knows About Americans’ Personal Data.” yahoo.com
[14] Stanford CRFM (Dec 2025). “Anthropic Transparency Report — Foundation Model Transparency Index.” crfm.stanford.edu
[15] GitHub Issue #10237 (Sep 2025). “Google AI Pro account, cannot opt out in /privacy.” github.com
[16] Axios (Feb 2026). “Trump moves to blacklist Anthropic’s Claude from government work.” axios.com
[17] Tom’s Guide (Dec 2025). “I compared the privacy of ChatGPT, Gemini, Claude and Perplexity.” tomsguide.com
[18] WebProNews (Feb 2026). “The Great AI Opt-Out.” webpronews.com
[19] Brightside AI (Jan 2026). “AI Privacy Concerns Explained: What Chatbots Do With Data.” brside.com
[20] Section AI. “Which LLM is right for your privacy needs?” sectionai.com
[21] LEECHO Global AI Research Lab (Mar 22, 2026). Real user test — Claude past chat search functioning despite settings being toggled OFF.

AI Companies That Lie — The Illusion of Privacy Settings
LEECHO Global AI Research Lab · March 22, 2026
“When a user presses OFF, it should be OFF. That’s all.”

댓글 남기기