In 2026, the AI industry has entered an unprecedented phase of fundraising and expansion. Anthropic’s annualized revenue has surpassed $30 billion, overtaking OpenAI, whose valuation is approaching $850 billion. Yet beneath this surface prosperity, three structural financial black holes are devouring the entire industry: model training costs that cannot be passed on to customers, model iteration that instantly zeroes out the value of predecessor products, and competition-driven operational expansion that creates irreversible rigid expenditures. The three black holes feed each other, forming a self-reinforcing capital strangulation structure — the money spent on training can never be recouped, the unrecouped costs keep growing, the money previously spent has already become worthless, and the costs of hiring and leasing office space, once started, cannot be stopped. This paper systematically analyzes this fatal architecture from the perspective of industrial economics, comparing pure AI companies against Google’s self-sustaining model, DeepSeek’s extreme low-cost model, and the healthy cash-generation models of Apple, Nvidia, and Samsung, to reveal the deep crisis lurking behind the AI industry’s apparent boom.
The First Black Hole: Training Costs That Cannot Be Passed On
In traditional manufacturing and technology industries, R&D costs are ultimately passed on to end customers through product pricing. Apple amortizes chip R&D expenses into the price of every iPhone, and consumers pay for it. Nvidia embeds GPU architecture R&D costs into the price of every graphics card, and data center customers pay for it. The ability to pass on costs is the core indicator that a company possesses pricing power.
AI foundation model companies almost entirely lack this ability. As of 2025, OpenAI spends $1.35 for every $1 it earns, with projected total cash burn of approximately $9 billion for the full year 2025. Anthropic’s situation is slightly better but essentially the same — losses of $5.3 billion in 2024, approximately $3 billion in 2025, with gross margins revised down from earlier optimistic projections of 50% to 40%, still far below the 70–80% typical of traditional software companies. Both companies’ revenues are surging, but spending is growing even faster — the direct consequence of an inability to pass on costs.
| Key Figure | Value |
|---|---|
| OpenAI’s actual spend per $1 earned | $1.35 |
| OpenAI Q4 2025 single-quarter loss | $11.5 billion |
| OpenAI projected 2025 cash burn | ~$9 billion |
| Typical AI company gross margin (far below traditional software) | ~40% |
Yet these astronomical training costs cannot be recouped by raising API prices. The reasons are fourfold:
First, the price war has destroyed pricing power. OpenAI cuts prices, Anthropic follows, Google directly offers free tiers. Competition among the three giants has driven API prices continuously downward, with customers psychologically expecting that “AI should keep getting cheaper.” If prices rise, switching to a competitor costs virtually nothing — perhaps just a few lines of code changed.
Second, open-source models exert pressure from below. Meta’s Llama series, Mistral, and other open-source models keep pushing closer to the capability frontier of closed-source models. According to the Stanford AI Index 2025 report, the performance gap between open-source and closed-source models has narrowed from 8% to just 1.7%, leaving closed-source APIs almost no room to price above the open-source floor.
Third, training costs and inference costs are structurally decoupled. Customers pay for inference, but training is a one-time sunk cost. Training a top-tier model may cost hundreds of millions of dollars, but this expense cannot be amortized on a “per unit” basis across each API call the way manufacturing costs can — because the marginal cost of an API call is determined by inference compute, which has no relation to training costs.
Fourth, declining inference costs cannot save the overall economic model. According to Epoch AI data, LLM inference unit costs are falling at a median rate of 50x per year, accelerating to 200x after 2024. This seems like good news, but Gartner explicitly warned in March 2026: the deflation in per-token costs is being offset by the explosion in total token consumption, meaning total inference spending is actually rising. This is the classic Jevons Paradox — the more efficient it gets, the greater the demand, and total costs end up higher. Declining inference costs do not equal declining total expenditure for AI companies, let alone declining training costs.
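Gartner’s offset can be sketched with two numbers. Only the 50x unit-cost decline comes from the Epoch AI figure above; the year-0 price, volume, and the 100x demand multiplier are illustrative assumptions:

```python
# Jevons Paradox, sketched: per-token costs crash, total spend still rises.
# The 50x unit-cost decline is the Epoch AI figure cited above; the
# year-0 price, volume, and 100x demand growth are assumptions.
cost_per_mtok = 10.00        # $ per million tokens, year 0 (assumed)
tokens = 1e12                # tokens served in year 0 (assumed)

spend_y0 = cost_per_mtok * tokens / 1e6
spend_y1 = (cost_per_mtok / 50) * (tokens * 100) / 1e6

print(f"Year 0 inference spend: ${spend_y0:,.0f}")   # $10,000,000
print(f"Year 1 inference spend: ${spend_y1:,.0f}")   # $20,000,000
```

Whenever demand grows faster than unit costs fall, total spending rises even as each token gets dramatically cheaper.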
Billions of dollars in training costs can only be swallowed internally, then slowly clawed back through low-priced APIs. But before you’ve recouped them, it’s time to start training the next model — a hole that can never be filled.
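The payback arithmetic above can be made concrete with a toy unit-economics model. Every number here is an illustrative assumption, not a reported figure:

```python
# Toy payback model: why a sunk training cost cannot ride on API pricing.
# All numbers are illustrative assumptions, not reported figures.
training_cost = 500e6            # one-time training spend ($)
inference_marginal = 0.0010      # compute cost per API call ($)
price_per_call = 0.0012          # price war pins price near marginal cost ($)
calls_per_month = 10e9           # monthly API call volume

surplus_per_call = price_per_call - inference_marginal
months_to_recoup = training_cost / (surplus_per_call * calls_per_month)
competitive_lifespan = 9         # months before the model is obsolete (assumed)

print(f"Months to recoup training cost: {months_to_recoup:.0f}")   # 250
print(f"Competitive lifespan (months):  {competitive_lifespan}")
print(f"Recouped before obsolescence?   {months_to_recoup <= competitive_lifespan}")
```

Under these assumptions the surplus per call is two hundredths of a cent, so the payback horizon is more than twenty times the model’s competitive lifespan.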
The Second Black Hole: Value Evaporation Through Iteration
Traditional tech product iteration is additive. When iPhone 15 is retired, Apple’s brand equity, user ecosystem, App Store developer network, and iCloud user data all carry over to the next generation. When Nvidia’s older GPUs are discontinued, the CUDA ecosystem, developer community, and enterprise client relationships all persist. Old products retire, but value accumulates in the ecosystem.
AI model iteration is replacement-based. When GPT-4 arrives, GPT-3’s value drops to zero. When Claude Opus 4 launches, prior models lose all commercial value. A model trained at a cost of billions may have an effective lifespan of just six to twelve months. This is not depreciation — it is evaporation.
This evaporation is not only the result of internal iteration but is accelerated by external competition. The AI industry is an arms race with no finish line: OpenAI must retire GPT-4 not because it has “stopped working” but because Anthropic’s Claude, Google’s Gemini, and Meta’s Llama are all approaching or surpassing it. Any company that stops iterating will see customers migrate to rivals within months. The existence of competitors compresses a model’s effective lifespan from its “natural life” to its “competitive life” — and the latter is invariably shorter.
The evidence already confirms this. As of 2025, GPT-3.5 has been deemed unusable for any professional scenario — it fabricates false references approximately 40% of the time when citing sources, and scores in the bottom 10% on the bar exam. OpenAI has continuously force-retired old model versions; GPT-3 series, multiple GPT-3.5 variants, and early GPT-4 versions have all been deprecated or placed on deprecation schedules, with increasingly shorter migration notice periods for developers.
Even more alarming, investor Michael Burry — the “Big Short” who famously predicted the 2008 financial crisis — publicly warned in late 2025 that AI-era profits are built on falsified asset depreciation. He estimated that large tech companies will under-report approximately $176 billion in asset impairments between 2026 and 2028 by extending depreciation schedules. In other words, the true depreciation rate of AI assets is far faster than financial reports suggest.
| Additive Iteration | Replacement Iteration |
|---|---|
| Apple: iPhone iteration → brand, ecosystem, user data all inherited | OpenAI: GPT-4 → GPT-5 = GPT-4 training costs entirely sunk |
| Nvidia: GPU generational upgrade → CUDA ecosystem, client relationships persist | Anthropic: Opus 4 → Opus 5 = prior generation investment zeroed out |
| Samsung: chip upgrades → manufacturing process know-how, production assets appreciate | Google: Gemini 1.5 → 2.0 = old model has no commercial value |
| Value accumulates over time | Value evaporates with each iteration |
This means AI companies’ balance sheets are, in a sense, an illusion. The most “valuable” asset on the books is the model itself (along with associated training data and intellectual property), but this asset needs to be re-manufactured at enormous cost every six to twelve months. A traditional enterprise can invest in a production line that runs for a decade; an AI company’s model investment may have an effective cycle of less than a year. Capital efficiency is worlds apart.
The deeper problem: this characteristic means AI companies can never stop investing. Apple could theoretically skip a year without releasing a new iPhone and still profit handsomely from its installed base and services revenue. An AI company that stops model iteration will lose customers within months — because competitors’ newer models are invariably more capable.
AI companies are not fundamentally accumulating assets — they are perpetually consuming them. Each training run is a one-time expenditure producing a product with an extremely short shelf life. The money earned must always go toward building the next model that is already about to expire.
The Compounding of Training and Inference Black Holes: A Death Spiral
Individually, each black hole is already deadly. Combined, they form a self-reinforcing capital consumption cycle:
Train a new model (massive sunk cost) → the price war prevents recouping it through API revenue → competitors ship stronger models within months → the old model’s commercial value drops to zero → an even more expensive next-generation model must be trained → the cycle repeats at higher stakes.
This structure has almost no precedent in business history. The closest analogy may be the pharmaceutical industry — R&D costs are enormous, and most drug candidates ultimately fail. But pharma companies at least have patent protection; once a new drug is approved, it can be sold exclusively for ten to twenty years, during which time R&D investments are fully recouped. AI models lack even this protection — because competitors can train a model of equivalent capability at any time, and no form of “exclusivity period” exists. Moreover, competition itself accelerates the spiral’s rotation: if you don’t train a new model, rivals will surpass you, but training a new model offers no guarantee of maintaining a lead for more than a few months. All participants are forced to keep raising the stakes, and no one can exit.
This also explains why revenue growth cannot solve the fundamental problem for AI companies. Anthropic grew from $87 million in annualized revenue in early 2024 to $30 billion by April 2026 — astonishing growth. But if cost growth always outpaces revenue, profitability can never be achieved.
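The point can be shown with a compounding sketch. The starting ratio is the article’s $1.35 spent per $1 earned; the growth multipliers are assumptions chosen only to illustrate costs outpacing revenue:

```python
# If costs compound faster than revenue, absolute losses widen even as
# revenue explodes. Starting ratio is the $1.35-per-$1 figure cited above;
# the annual growth multipliers are assumptions.
revenue, cost = 1.00, 1.35
rev_growth, cost_growth = 3.0, 3.2   # assumed annual multipliers

for year in range(1, 4):
    revenue *= rev_growth
    cost *= cost_growth
    print(f"Year {year}: revenue ${revenue:,.2f}, cost ${cost:,.2f}, "
          f"loss ${cost - revenue:,.2f}")
```

Even with revenue tripling every year, the absolute loss grows from $0.35 to over $17 in three years, because the cost base compounds slightly faster.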
The essential nature of the AI industry’s death spiral: the money spent on training can never be recouped (Black Hole I), the unrecouped costs keep growing (Black Hole I accelerating), the money previously spent has already become worthless (Black Hole II), and the costs of hiring and expansion, once started, cannot be stopped (Black Hole III). Together they form a triangular strangulation structure — the most expensive war of attrition in the history of technology.
The Third Black Hole: The Irreversible Operational Expansion Trap
Training is a sunk cost, inference runs on razor-thin margins, but there is an even more insidious black hole often overlooked: competition-driven operational expansion. This black hole differs from the first two — it is rigid, irreversible, and once set in motion, cannot be stopped.
OpenAI plans to nearly double its headcount from approximately 4,500 to 8,000 by the end of 2026, adding 3,500 new positions. The direct trigger is Anthropic’s rapid pursuit in the enterprise market — the share of new enterprise customers choosing Anthropic is already three times that of OpenAI. In December 2025, CEO Altman issued an internal “red alert” directive, pausing non-core projects to accelerate development at full speed. Anthropic has likewise been expanding, growing from 192 employees in 2022 to 1,097 in 2025 and approximately 2,500 by early 2026, signing leases on multiple buildings along Howard Street in San Francisco at numbers 500, 505, 400, and 300, creating what is effectively an “AI Alley.”
The salaries for these employees are no small figure. OpenAI’s median total annual compensation is approximately $640,000, with software engineering managers reaching up to $1.265 million. In 2025, OpenAI’s per-employee equity compensation averaged $1.5 million — the highest equity compensation in the history of tech startups, six times higher than Google at its IPO. Anthropic’s median total compensation is approximately $443,000, with software engineers reaching up to $759,000. Running the numbers: OpenAI’s current 4,500 employees alone represent nearly $2.9 billion per year in pure salary costs; doubling to 8,000 would exceed $5 billion. Anthropic’s 2,500 employees at a median of $440,000 means approximately $1.1 billion in annual salary costs. Combined, headcount costs alone for the two companies exceed $4 billion per year — and growing rapidly.
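The payroll arithmetic in the paragraph above can be checked directly, using the headcount and median total-compensation figures cited in this section:

```python
# Back-of-envelope payroll check using the headcount and median
# total-compensation figures cited in this section.
payrolls = {
    "OpenAI (current 4,500 staff)": 4_500 * 640_000,
    "OpenAI (8,000 staff target)":  8_000 * 640_000,
    "Anthropic (2,500 staff)":      2_500 * 440_000,
}
for label, annual_cost in payrolls.items():
    print(f"{label}: ${annual_cost / 1e9:.2f}B/year")

combined = payrolls["OpenAI (current 4,500 staff)"] + payrolls["Anthropic (2,500 staff)"]
print(f"Combined today: ${combined / 1e9:.2f}B/year")   # $3.98B
```

This reproduces the figures in the text: roughly $2.9 billion and $5.1 billion for OpenAI at current and target headcount, $1.1 billion for Anthropic, and just under $4 billion combined today.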
| Key Figure | Value |
|---|---|
| OpenAI’s target headcount by end of 2026 | 8,000 |
| OpenAI median total annual comp | ~$640,000 |
| OpenAI per-employee equity comp (2025) | $1.5 million |
| OpenAI’s San Francisco office footprint | 1 million+ sq ft |
Office space is expanding just as aggressively. OpenAI’s footprint in San Francisco’s Mission Bay has surpassed 1 million square feet, with an additional 439,000 sq ft signed on a 10-year lease in Mountain View, plus offices in New York and London. At San Francisco’s going rate of approximately $67/sq ft per year, San Francisco office rent alone approaches $67 million annually. Anthropic has been densely clustering along Howard Street, signing leases for hundreds of thousands of square feet across 500, 505, 400, and 300 Howard Street.
The fatal feature of this black hole is its irreversibility. Training costs can theoretically be paced — delay the next generation by six months. But can you delay 8,000 people’s salaries? Can you cancel a 10-year campus lease? Cut pay? Top talent will instantly be poached by Google, Anthropic, or Meta. Lay off staff? Core R&D capabilities are destroyed overnight. These are rigid expenditures that do not adjust with revenue fluctuations.
OpenAI’s inference tokens generate roughly $2 billion in annual gross margin, but its headcount costs alone approach $2.9–5 billion; adding office and operational expenses puts pure operational expenditure at $4–6 billion or more per year. Inference revenue cannot even cover payroll, let alone fill the training black hole.
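A minimal sketch of that cost stack, in billions of dollars per year. The margin, payroll, and rent figures come from this section; the "other operations" line item is an assumed placeholder:

```python
# Operating-cost stack vs. inference gross margin, in $B/year.
# Margin, payroll, and rent figures are from the text; "other_ops" is
# an assumed placeholder for sales, legal, cloud overhead, etc.
inference_gross_margin = 2.0
payroll_low, payroll_high = 2.9, 5.0   # current headcount vs. doubled
sf_rent = 0.067                         # ~1M sq ft x ~$67/sq ft/year
other_ops = 1.0                         # assumption

opex_low = payroll_low + sf_rent + other_ops
opex_high = payroll_high + sf_rent + other_ops

print(f"Operating expenditure: ${opex_low:.1f}B to ${opex_high:.1f}B per year")
print(f"Shortfall before a single training dollar: "
      f"${opex_low - inference_gross_margin:.1f}B or more")
```

Even at the low end of the range, operating costs alone exceed inference gross margin by about $2 billion a year before any training spend is counted.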
The three black holes thus form a triangular capital strangulation structure: the training black hole produces models that require more researchers to develop (operational costs rise), serving more customers demands greater inference compute and larger sales teams (operational costs rise again), operational expansion needs more revenue to sustain itself (inference pressure intensifies), and inference revenue can’t even cover operations, let alone backfill sunk training costs. The three black holes feed each other, and none can be closed independently.
Healthy Models for Reference: Who Is Generating Cash Properly?
To more clearly understand the AI industry’s structural defects, it is useful to benchmark against tech giants with healthy cash-generation capabilities. As of fiscal years 2025–2026:
| Metric | Apple | Nvidia | Samsung | OpenAI |
|---|---|---|---|---|
| Cash Reserves | $66.9B | $62.6B | $77.3B | Funding-dependent |
| Cash Generation | Quarterly FCF $28.6B | FY2026 shareholder returns $41.1B | Annual FCF ₩36.5T | Quarterly loss $11.5B |
| Cost Pass-Through | ✅ Hardware pricing power | ✅ GPU pricing power (70%+ gross margin) | ✅ Memory chip pricing power | ❌ API price war |
| Asset Accumulation | ✅ Ecosystem compounds | ✅ CUDA ecosystem | ✅ Manufacturing process IP | ❌ Model value zeroes out with iteration |
| Customer Relationship | Win-win (App Store) | Win-win (selling pickaxes) | Win-win (supply chain) | Competitive (absorbing ecosystem) |
These three companies share one common trait: they do not compete with their customers. Nvidia never builds foundation models or applications — it only sells compute. The more customers it has, the better; the more intensely those customers compete with each other, the higher demand for Nvidia GPUs. Apple provides a platform and distribution channel; developers make money through the App Store, binding their interests together. Samsung, as an upstream supplier of chips and displays, is a partner to all device manufacturers.
“Don’t compete with your customers” — this is an iron law of business. TSMC doesn’t design chips. ARM doesn’t make phones. Visa doesn’t run a bank. The companies that survive the longest play indispensable but never boundary-crossing roles within their ecosystems.
AI companies are systematically violating this iron law. OpenAI launched ChatGPT to compete directly with startups built on its GPT API, then launched GPTs to compete with application-layer developers. Anthropic launched Managed Agents to compete with companies building agent frameworks, and Claude Code is the more egregious case: Cursor, Anthropic’s largest customer, pays 100% of its revenue to Anthropic as compute fees, and Anthropic turns around and uses that money to develop Claude Code, a product that competes directly with Cursor. Using a customer’s money to build that customer’s competitor is the most brazen possible violation of the “don’t compete with your customers” rule. Google is embedding Gemini into every one of its own products, competing with every developer using the Gemini API. Every step of expansion encroaches on partners’ survival space.
Squeezed from Above and Below: Google’s Self-Sufficiency and DeepSeek’s Extreme Low Cost
The three black holes are already deadly enough, but pure AI companies also face an even crueler competitive landscape: they are being simultaneously squeezed by two forces — from above by giants with cash-generation capabilities, and from below by disruptors who don’t need to be profitable.
The crusher from above: Google. Alphabet’s 2026 capital expenditure budget is $175–185 billion, nearly double 2025. But Google differs fundamentally from OpenAI and Anthropic: Google Cloud is already profitable, with Q4 2025 operating profit surging 154% year-over-year to $5.3 billion at a 30% margin. More critically, Google’s in-house TPU chips deliver 30–44% lower total cost of ownership than Nvidia GPUs. This means when Google serves a Gemini query, it pays electricity plus chip depreciation; when OpenAI serves a ChatGPT query, it pays Nvidia’s margin + Microsoft’s margin + its own operating costs. This structural gap does not narrow with scale — it widens. Google also reduced Gemini inference unit costs by 78% within 2025 through model optimization. For Google, AI is an investment project for its search advertising empire (annual revenue exceeding $200 billion), not an existential dependency. It can afford to burn, and it can afford to wait.
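The widening structural gap can be sketched as a per-query cost stack. The ~40% TCO advantage is taken from the cited SemiAnalysis range; the base compute cost, the markups, and the query volumes are assumptions:

```python
# Per-query cost-stack sketch: why Google's structural advantage widens
# with scale. The ~40% TCO advantage is from the cited range; the base
# compute cost, markups, and query volumes are assumptions.
base_compute = 0.002                       # raw compute cost per query ($, assumed)
google_cost = base_compute * 0.60          # in-house TPUs: ~40% lower TCO
openai_cost = base_compute * 1.25 * 1.15   # assumed Nvidia margin x cloud margin

gap_per_query = openai_cost - google_cost
for annual_queries in (1e11, 1e12, 1e13):
    print(f"{annual_queries:.0e} queries/year -> "
          f"${gap_per_query * annual_queries / 1e9:.1f}B annual cost gap")
```

Because the gap is per query, it scales linearly with volume: the more successful a pure AI company becomes, the larger the absolute dollar advantage a vertically integrated rival enjoys.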
The disruptor from below: DeepSeek. This Chinese company claims its V3 model training cost just $6 million, while OpenAI’s GPT-4 cost approximately $100 million. DeepSeek’s API pricing is one-quarter to one-sixth of American companies, and — unlike its loss-making American competitors — it claims to be profitable. DeepSeek’s parent company is the quantitative hedge fund High-Flyer, which faces no external investor profitability pressure and can disrupt the market at rock-bottom prices indefinitely. DeepSeek’s emergence directly ignited the Chinese AI price war, forcing Baidu, Alibaba, Tencent, and ByteDance to slash prices drastically or even open-source their models for free. Baidu cut ERNIE 4.5 Turbo’s API price by 80%, to just $0.11 per million tokens. AI scholar Kai-Fu Lee’s assessment: “DeepSeek’s biggest lesson is that open-source has won. When a competitor is free and powerful, the pricing rationale for closed-source models is fundamentally shaken.”
| Crushed from Above · Google | Disrupted from Below · DeepSeek |
|---|---|
| In-house TPUs, 30–44% lower cost than Nvidia | Training cost $6M vs OpenAI’s $100M |
| Google Cloud already profitable, 30% margin | API pricing at 1/4 to 1/6 of US companies |
| Search ad revenue $200B+ as backstop | Hedge fund backstop, no need for profitability |
| AI is an investment project, not an existential dependency | Destroying industry pricing with extreme low cost |
Pure AI companies — OpenAI, Anthropic — are caught in the middle: they cannot achieve Google’s self-sufficiency through in-house chips and advertising revenue, nor can they wage DeepSeek’s price war through extreme efficiency and zero profitability pressure. Their margin space is crushed from above, their pricing power squeezed from below. The three black holes are not shrinking in this pincer — they are accelerating.
Can an IPO Break the Deadlock?
Faced with continuous capital consumption, an IPO appears to be the AI companies’ last card to play. OpenAI is reportedly preparing for an IPO, and Anthropic is evaluating plans to go public as early as October 2026. But can going public really solve the problem?
Traditional tech companies go public because they have already proven their profit model — the IPO is icing on the cake. AI companies are going public because private market money is running out — the IPO is life support. These are fundamentally different situations.
After going public, AI companies will face an entirely new dimension of pressure — the pricing discipline of secondary markets. In the private phase, valuation is negotiated among a small group of investors, and losses can be absorbed by “growth narratives.” After listing, valuation is repriced by global markets every second. A single piece of bad news, one missed earnings expectation, one lawsuit — and the stock price can be halved.
Even more dangerous is the delisting risk. Under US stock exchange rules, if a stock trades below $1 for 30 consecutive trading days or market cap falls below certain thresholds, the exchange issues a delisting warning. Once a loss-making company loses market confidence, share price collapse can be far faster than imagined. WeWork went from a $47 billion valuation to zero and bankruptcy; waves of SPAC-listed companies were delisted within two to three years — these companies share a striking resemblance to today’s AI companies: high valuations, heavy losses, no cash-generation ability, stock prices sustained entirely by narrative.
An IPO raises tens of billions, which burns through in two or three years. Then what? Raise more? The stock price won’t hold. Public market investors are not as patient as VCs — they want quarterly earnings reports, they want profits. Once consecutive losses appear, the stock crashes, fundraising ability evaporates, and the death spiral begins.
Some see advertising as a way to break through. In February 2026, OpenAI officially launched ads in the free and Go tiers of ChatGPT, reaching $100 million in annualized advertising revenue within six weeks, with over 600 advertisers participating in the pilot. That figure seems impressive, but in the context of the overall financial structure, it’s not even in the same league — the same company loses $11.5 billion in a single quarter, and annualized ad revenue of $100 million represents less than 1% of the loss. Even if advertising reaches its projected sub-$1 billion for all of 2026, it’s a drop in the ocean against the three black holes.
More dangerously, the advertising model itself carries a new trust bomb. ChatGPT’s core value is built on users’ trust in the objectivity of its answers. Once users begin to suspect that AI responses serve advertisers rather than accuracy, the platform’s core competitive advantage rapidly erodes. Anthropic has explicitly rejected the advertising model, positioning “no ads” as a trust differentiator for enterprise customers. OpenAI is selling a $1 product for $0.70 and now trying to use advertising to subsidize the $0.30 shortfall — this is not business model innovation, it is a desperate scramble.
The Hidden Bomb of Data Trust Erosion
Beyond the three financial black holes, AI companies face a potential acceleration trigger for collapse: the data trust crisis.
Every user interaction on the platform — how prompts are written, how workflows are designed, how agents are orchestrated — is essentially free training data and product inspiration. Entrepreneurs think they’re using a tool; in reality, they’re also feeding it. This is more insidious than Apple’s App Store tax: Apple takes money; AI companies take intelligence.
OpenAI’s experience has already demonstrated the chain reaction of trust collapse. In early 2026, a court ruled that OpenAI must provide plaintiffs in a copyright lawsuit with 20 million anonymized ChatGPT conversation logs. The judge determined that users “voluntarily submitted” their conversation content, thus invalidating privacy defenses. Simultaneously, Elon Musk’s fraud lawsuit is set for trial on April 27, seeking $134 billion in damages. Microsoft — OpenAI’s largest shareholder — is reportedly also considering a breach-of-contract lawsuit.
Founders, investors, users, copyright holders, and regulators are all applying pressure simultaneously — this is the classic scenario of a pile-on. There is historical precedent: Facebook’s Cambridge Analytica scandal transformed it overnight from “connecting the world” to “privacy enemy.” If AI companies mishandle data trust, a similar tipping point could arrive at any time.
Some argue that user data and RLHF feedback signals constitute a “data flywheel” moat for AI companies — more users generate richer feedback, producing stronger models that attract even more users. But this logic is being undermined by two trends. First, synthetic data is replacing human feedback data at massive scale: in the instruction fine-tuning phase, synthetic data has essentially won; in the preference training phase, academic research shows synthetic preference data performs comparably to human data. Second, as Andreessen Horowitz’s research has noted: data scale effects often exhibit diminishing returns — the marginal cost of acquiring unique data is rising while the marginal value of incremental data is declining. The data flywheel does not constitute a durable competitive barrier at the general model layer; it is more likely to prove useful at the vertical application layer.
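The diminishing-returns argument can be illustrated with a stylized model. Logarithmic quality scaling is a common stylized assumption for this kind of sketch, not a claim from the cited sources:

```python
# Stylized diminishing-returns sketch: assume model quality scales ~log(n)
# with n unique training examples (a common stylized assumption, not a
# claim from the cited sources), while acquisition cost scales linearly.
import math

for n in (10**6, 10**7, 10**8, 10**9):
    quality = math.log10(n)              # proxy for capability
    marginal = 1 / (n * math.log(10))    # d(log10 n)/dn: value of one more example
    print(f"n={n:.0e}: quality={quality:.0f}, "
          f"marginal value per new example={marginal:.1e}")
```

Under this assumption, each thousandfold increase in data adds the same fixed quality increment while the value of any single new example falls by a factor of a thousand, which is the shape of a flywheel that stalls rather than a moat that deepens.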
This means the “moat” AI companies are attempting to build through collecting user data faces both ethical and legal risks, and is simultaneously being technically circumvented by synthetic data. Data is not an AI company’s asset — it is becoming a liability.
Historical Analogy: Who Actually Made Money in the Gold Rush?
In the 1849 California Gold Rush, the vast majority of miners lost everything, while those selling pickaxes, tents, and jeans (Levi’s) made fortunes. The AI industry in 2026 is replaying the exact same script.
| Role | Gold Rush (1849) | AI Boom (2024–2026) |
|---|---|---|
| Gold miners | Miners (most lost money) | AI foundation model companies (cash burn race) |
| Pickaxe sellers | Tool merchants, Levi’s | Nvidia (GPUs), TSMC (foundry) |
| Platform operators | Railroad companies, banks | AWS, Azure, GCP (cloud platforms) |
| Casualties | Bandwagon speculators | AI startups, enterprise customers |
Nvidia’s cash reserves surged from $13.3 billion before ChatGPT’s launch (January 2023) to $62.6 billion in fiscal year 2026 — nearly a fivefold increase. In FY2026, Nvidia returned $41.1 billion to shareholders in buybacks and dividends — something only a company that is truly making money can do. Regardless of which AI company wins or loses, Nvidia collects — the more fiercely AI companies compete, the higher demand for Nvidia GPUs. The AI companies’ death spiral is precisely Nvidia’s virtuous cycle.
The true “tax collectors” in the AI value chain extend beyond Nvidia to include cloud platforms. AWS, Azure, and GCP provide the operational infrastructure for AI companies, charging by usage — rain or shine, they get paid. In 2026, the five largest US cloud and AI infrastructure providers have committed to a combined $660–690 billion in capital expenditure. That money flows to cloud platforms and chip companies, not to AI model companies. AI companies think they are building the future; in reality, they are paying rent to infrastructure providers. Model companies are tenants, not landlords.
Conclusion: The Countdown Behind the Boom
The AI industry’s current surface prosperity — $30 billion in annualized revenue, $850 billion valuations, a new product launch every two weeks — masks a fundamental structural flaw: this industry has not yet found a sustainable profit model.
The existence of the three black holes means that no matter how fast revenue grows, as long as the cost structure does not fundamentally change, AI companies are merely competing to see who dies more slowly. The thin margins from inference tokens cannot even sustain operational teams, training costs are a pure-expenditure, zero-return abyss, and competition-driven expansion, once launched, cannot be braked. Meanwhile, Google crushes from above with in-house chips and advertising revenue, while DeepSeek destroys the pricing system from below with extreme low cost and free models.

At the macro level, US AI capital expenditure exceeds $500 billion per year, while consumer spending on AI services totals just $12 billion; the gap between investment and return is like comparing Singapore’s GDP to Somalia’s. Anthropic CEO Dario Amodei has openly admitted that if AI progress is delayed by 12 months, he would go bankrupt. OpenAI CEO Sam Altman has publicly stated: “Are investors collectively too excited about AI? My view is: yes.” When the CEOs of both leading companies send signals like these, it is not a warning, it is a confirmation. The countdown continues unless at least one of the following two conditions is met:
Condition 1 · Cost Breakthrough
An order-of-magnitude decline in compute costs — not 10% or 20% optimization, but a 10x or even 100x breakthrough like the early days of Moore’s Law, making training costs no longer astronomical.
Condition 2 · Margin Breakthrough
AI discovers a killer application with extremely high margins and near-zero marginal cost — analogous to Apple’s App Store or Google’s search advertising, where once built, profits scale almost infinitely.
Until these conditions are met, AI companies are essentially using venture capital and public market funds to conduct the most expensive gamble in the history of human technology. If the bet wins, it may reshape the entire economic landscape. If it loses, what remains will be hundreds of billions in sunk costs and a field of wreckage.
The longest-surviving companies in business history all follow two simple principles: don’t compete with your customers — only through win-win cooperation can sustainability be achieved; and fortunes are not made in a day. The AI industry is violating both principles simultaneously — sacrificing long-term ecosystem health for short-term expansion, and mortgaging today’s trust against tomorrow’s valuations.
A company without the ability to generate its own cash, no matter how much it raises, is only counting down. Those who eat alone will eventually find that the table is empty — and the meal has gone cold.
Sources & References
[1] SaaStr, “Anthropic Just Passed OpenAI in Revenue,” April 2026
[2] Yahoo Finance, “OpenAI’s 2026 Scorecard: A String of Lawsuits,” March 2026
[3] CNBC, “Elon Musk Seeks Ouster of OpenAI CEO Sam Altman,” April 2026
[4] National Law Review, “OpenAI Loses Privacy Gambit: 20 Million ChatGPT Logs,” January 2026
[5] Anthropic Engineering Blog, “Scaling Managed Agents,” April 2026
[6] Gartner, “Performing Inference on an LLM Will Cost Over 90% Less by 2030,” March 2026
[7] Epoch AI, “LLM Inference Prices Have Fallen Rapidly but Unequally Across Tasks,” March 2025
[8] Stanford HAI, “The 2025 AI Index Report,” April 2025
[9] CNBC, “OpenAI Ads Pilot Tops $100 Million in ARR in Under 2 Months,” March 2026
[10] Fortune, “What Happens to Old AI Chips? Michael Burry Depreciation Warning,” December 2025
[11] AI Automation Global, “OpenAI Lost $5B on $3.7B Revenue: The AI Inference Cost Crisis,” March 2026
[12] Andreessen Horowitz, “The Empty Promise of Data Moats,” April 2024
[13] MacroTrends / CompaniesMarketCap, financial data for Apple (AAPL), Nvidia (NVDA), Samsung (005930.KS)
[14] SEC Filings: Apple 8-K (FY2025), Nvidia 8-K (FY2026), Samsung annual reports
[15] OpenAI, “Our Approach to Advertising and Expanding Access to ChatGPT,” January 2026
[16] TradingKey, “Anthropic Revenue Surpasses OpenAI for First Time,” April 2026
[17] Futurum, “AI Capex 2026: The $690B Infrastructure Sprint,” February 2026
[18] Nathan Lambert, “Synthetic Data & CAI,” RLHF Book, 2025-2026
[19] Fortune, “OpenAI Plans to Almost Double Headcount This Year,” March 2026
[20] Levels.fyi, OpenAI / Anthropic salary data, updated April 2026
[21] Fortune, “OpenAI Is Paying Workers $1.5 Million in Stock-Based Compensation on Average,” February 2026
[22] SF Chronicle / The Real Deal, OpenAI office expansion data, January-March 2026
[23] SF Standard, “The AI Leaderboard: Where the Biggest Companies Are in SF,” April 2026
[24] CNBC, “Google Parent Beats on Revenue, Projects Significant AI Spending Increase,” February 2026
[25] SemiAnalysis, “Google TPU vs Nvidia GPU Total Cost of Ownership Analysis,” November 2025
[26] Wikipedia / Britannica Money, “DeepSeek,” updated April 2026
[27] CNBC, “China’s Open-Source Embrace Upends Conventional Wisdom Around AI,” March 2025
[28] Digitimes, “Baidu Cuts AI Costs, Takes Swipe at DeepSeek,” April 2025
[29] Epoch AI, “Can AI Companies Become Profitable?” January 2026 (updated March 2026)
[30] RAND Corporation, US-China AI Competition Report, 2026