LEECHO Research Paper · V2

Search Competitiveness of AI Companies in the GEO Era

From cost structures and technical architectures to user behavior — decoding the industrial economics of AI search and information alignment

LEECHO Global AI Research Lab
&
Claude Opus 4.6 · Anthropic

April 8, 2026 · V2

Abstract

GEO (Generative Engine Optimization) is replacing SEO as the core paradigm for digital information distribution. This paper analyzes, from the perspective of AI companies, how search capability is becoming the decisive factor in large language model competitiveness. By examining the cost differentials between real-time search and RAG technical architectures, the causal relationship between search integration and user growth (with independent validation from Perplexity as a pure search product), the structural demand driven by cognitive degradation in the internet age, the usage divide between producer-side and consumer-side users, and a comparative analysis of search infrastructure across OpenAI, Google, Perplexity, and Anthropic, this paper argues: in the GEO era, search capability is not an add-on feature for AI companies — it is the core infrastructure of their commercial competitiveness. Competition among AI companies will increasingly manifest as competition over search quality, search cost control, and information alignment precision.

This paper is a companion piece to “AI Search Information Alignment Is the Most Core Function of LLMs” (V1, April 6, 2026). The previous work argued from the user perspective that “information alignment is the core function of LLMs”; this paper argues from the enterprise perspective that “search capability is the core competitiveness of AI companies.”

SECTION 01 · Technical Architecture

Real-Time Search vs. RAG: The Cost Game of Two Paths

A fundamental architectural decision with profound implications for business models

When providing information alignment services to users, AI companies face a foundational architectural decision: should they call search engine APIs in real time for every user query to retrieve the latest information, or should they perform Retrieval-Augmented Generation (RAG) based on pre-built index databases? The cost structures of these two paths differ radically, profoundly affecting AI companies’ business models and competitive strategies.

Real-Time Search: A Variable Cost Model Billed Per Query

Real-time search means every time a user triggers a search request, the system issues an API call to a search engine. This generates multi-layered costs: the per-call fee for the search API, the bandwidth and computational overhead of full-page web scraping, and the inference compute cost of stuffing retrieved results into the large model’s context window. The longer the context, the higher the inference cost. This is a linear growth model where costs scale directly with users.
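The linear scaling described above can be made concrete with a toy cost model. All prices below are hypothetical placeholders, not vendor quotes; the point is only the structure of the formula (API fee + scraping overhead + context-length-dependent inference).

```python
# Illustrative per-query cost model for real-time search.
# Every number here is a hypothetical assumption, not a vendor price.

def realtime_query_cost(api_fee: float = 0.005,
                        scrape_cost: float = 0.001,
                        context_tokens: int = 8_000,
                        price_per_1k_tokens: float = 0.01) -> float:
    """Cost of one search-grounded answer: API call + scraping + inference."""
    inference = (context_tokens / 1_000) * price_per_1k_tokens
    return api_fee + scrape_cost + inference

def monthly_cost(queries_per_month: int) -> float:
    """Total spend scales linearly with query volume: a pure variable-cost model."""
    return queries_per_month * realtime_query_cost()

# Doubling the user base doubles the bill; there is no amortization.
assert monthly_cost(2_000_000) == 2 * monthly_cost(1_000_000)
```

Because every term is incurred per query, total spend tracks query volume one-for-one, which is exactly why this model comes under pressure at scale.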

RAG Indexing: A Fixed-Cost Model — Heavy Upfront, Light Operations

The RAG path requires AI companies to massively pre-crawl web pages, clean data, perform vectorization, build index databases, and continuously maintain storage. The upfront investment is enormous, but once built, each query only requires vector retrieval from the company’s own database at extremely low per-query cost. The trade-off is that information freshness depends on index update frequency.

| Dimension | Real-Time Search | RAG Indexing |
| --- | --- | --- |
| Cost Structure | Variable cost (per-query billing) | Fixed cost (build + maintain) |
| Information Timeliness | Real-time (seconds) | Delayed (hours to days) |
| Per-Query Cost | High | Very low |
| Scaling Pressure | Linear growth | Marginal decrease |
| Third-Party Dependency | Strong (search engine APIs) | Weak (own infrastructure) |

In practice, most AI companies employ a hybrid architecture: high-frequency queries with lower timeliness requirements are handled by RAG, while time-sensitive scenarios use real-time search. Some companies also introduce caching mechanisms — reusing the first search result for trending topics that many users query within a short window — to reduce API call frequency.
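The routing-plus-caching pattern just described can be sketched as follows. The freshness heuristic, the two backends, and the TTL value are all illustrative assumptions; a production system would use a trained intent classifier and real retrieval services.

```python
# Sketch of the hybrid architecture: TTL cache -> freshness router ->
# real-time search or RAG. Backends are stubs for illustration only.

import time

CACHE_TTL_SECONDS = 300          # reuse results for trending queries for 5 minutes
_cache: dict[str, tuple[float, str]] = {}

def is_time_sensitive(query: str) -> bool:
    """Toy heuristic; real systems use a trained intent classifier."""
    return any(w in query.lower() for w in ("today", "latest", "price", "news"))

def rag_lookup(query: str) -> str:
    return f"[RAG index answer for: {query}]"          # near-zero marginal cost

def live_search(query: str) -> str:
    return f"[real-time API answer for: {query}]"      # pays the per-call fee

def answer(query: str) -> str:
    now = time.time()
    hit = _cache.get(query)
    if hit and now - hit[0] < CACHE_TTL_SECONDS:
        return hit[1]                                  # cache hit: zero API cost
    result = live_search(query) if is_time_sensitive(query) else rag_lookup(query)
    _cache[query] = (now, result)
    return result
```

The cache is what makes trending topics cheap: the first user in a window pays for the API call, and everyone after them within the TTL reuses it.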

An Industrial Economics Perspective: Real-time search and RAG are not a technology choice but a cost decision. The search competitiveness of an AI company depends largely on finding the optimal mix between these two paths — ensuring information freshness meets user expectations while keeping search costs within commercially sustainable bounds.
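One way to frame the "optimal mix" is as a break-even calculation: above a certain query volume, the amortized cost of a self-built index drops below per-call API pricing. The figures in the example are hypothetical assumptions, chosen only to show the shape of the decision.

```python
# Break-even sketch: at what monthly query volume does a self-built RAG
# index become cheaper than per-call API pricing? All figures hypothetical.

def break_even_queries(index_monthly_cost: float,
                       api_cost_per_query: float,
                       rag_cost_per_query: float) -> float:
    """Volume above which RAG's fixed cost amortizes below API pricing."""
    return index_monthly_cost / (api_cost_per_query - rag_cost_per_query)

# e.g. $2M/month index upkeep, $0.01 per API query, $0.0005 per RAG query:
volume = break_even_queries(2_000_000, 0.01, 0.0005)
print(f"{volume:,.0f} queries/month")   # → 210,526,316 queries/month
```

Below that volume, paying per call is rational; above it, the fixed-cost path wins, which is why the calculus differs so sharply between startups and companies already serving hundreds of millions of users.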

SECTION 02 · Cost Data

Search Costs: The Hidden Giant in AI Company Financials

What leaked documents and industry shifts reveal about the true cost of search

No AI company currently discloses search API fees paid to search engine providers as a separate line item. But from leaked financial documents and industry developments, we can piece together the cost landscape of search. It is important to clarify: search costs and inference costs are two distinct but compounding concepts — search API fees are the cost of calling external search engines, inference costs are the compute costs of running LLMs to generate responses, and the introduction of search features simultaneously increases both (API call fees + inference costs from longer contexts).

What the OpenAI–Microsoft Financial Relationship Reveals

Leaked internal documents show that in 2024, Microsoft collected approximately $494 million in revenue sharing from OpenAI, which surged to approximately $866 million in the first three quarters of 2025. OpenAI pays Microsoft a revenue share of roughly 20%, covering Azure compute, Bing search data, and multiple other services. On the inference cost side, OpenAI’s Azure inference spend alone was approximately $3.7 billion in 2024, reaching approximately $8.7 billion in the first three quarters of 2025 — inference spending may have already exceeded total revenue.

$494M · OpenAI → Microsoft revenue share, 2024
$866M · OpenAI → Microsoft revenue share, first three quarters of 2025
$8.7B · OpenAI inference cost (Azure only), first three quarters of 2025

Microsoft Shutting Down Bing API: A Signal of Search Data Monopolization

In May 2025, Microsoft announced it would completely shut down the legacy Bing Search API by August 11, with replacement solutions based on Azure AI products priced 40% to 483% higher than the original API. In the two years prior, Microsoft had already increased Bing API prices by 3 to 10 times. This means all AI companies dependent on the Bing search index — including DuckDuckGo and other smaller search providers — face sharply escalating search data acquisition costs.

For AI companies, a high-quality search index is the lifeblood of their product operation. AI output comes from two sources: the model’s training data (pre-trained knowledge) and the latest content retrieved through real-time web retrieval (RAG/grounding). Microsoft blocking the legacy search API is essentially squeezing the information pipeline of the AI industry.

Strategic Risk: Reliance on third-party search infrastructure is a path of increasing costs and diminishing control. This explains why companies like Perplexity are building their own search indexes — in the long run, autonomous control over search capabilities is a foundational condition for AI company survival.

SECTION 03 · Growth Causation

Search Integration: The Causal Engine of Explosive User Growth

ChatGPT’s evolution offers a precise natural experiment

ChatGPT’s development history provides a precise natural experiment: what happened to user numbers and engagement frequency as search capability went from nonexistent to fully integrated?

Three Phases of Evolution

| Phase | Period | Search Capability | User Scale |
| --- | --- | --- | --- |
| Static Recall | Nov 2022 – Mar 2023 | No search, training data only | 1M in 5 days; 100M in 2 months |
| Plugin Integration | Mar 2023 – Oct 2024 | Browsing plugin / early RAG | 100M → 200M weekly active |
| Native Search | Oct 2024 – present | ChatGPT Search officially launched | 300M → 800M weekly active |

Key data points: from December 2024 to February 2025 — the window when ChatGPT Search was rolled out to all users — user numbers grew 33%, jumping from 300 million to 400 million. Within the following two months, they doubled to 800 million. Monthly traffic grew from 600 million visits in January 2023 to 6.2 billion in October 2025, a more than 10x increase.

Even more significant was the qualitative shift in usage habits: by April 2025, weekly active users nearly equaled monthly active users, indicating that the vast majority of users had formed a weekly usage habit rather than occasional experimentation. The average conversation on ChatGPT was 5.2 turns, with 50.6% of conversations being multi-turn — users were progressively refining their intent through dialogue to achieve information alignment.

Causal Inference: Search integration was not a footnote in the ChatGPT growth story — it was the acceleration engine that propelled users from the “hundred-million” tier to the “billion” tier. Search transformed LLMs from “an interesting writing tool” into “indispensable information infrastructure.” Users voted with their feet.

Confounding Factors and Independent Validation

It must be acknowledged that the launch window of ChatGPT Search overlapped with GPT-4o’s release, expanded free-tier features, mobile growth, and other factors — user growth cannot be solely attributed to search. However, Perplexity AI — a pure search product — provides powerful independent corroboration through its own growth trajectory.

Perplexity has no coding assistant, no image generation, no Agent capabilities — it achieved the following growth purely through search-based information alignment: monthly active users grew from 10 million in January 2024 to 30 million by April 2025, tripling in roughly fifteen months; monthly queries surged from 230 million in August 2024 to 780 million by May 2025; valuation jumped from $520 million in January 2024 to $18 billion by July 2025; annual recurring revenue grew from $100 million in early 2025 to approximately $200 million by year’s end. A team of just 38 people serving 30 million monthly active users.

Perplexity’s Independent Validation: A pure search product that offers none of the “generative” functions — no writing, coding, or image generation — tripled its monthly active users and reached an $18 billion valuation solely through information alignment. This proves at the product level that search capability itself is an independent competitive dimension with enormous commercial value.

SECTION 04 · Cognitive Degradation

Demand-Side Driver: Cognitive Decline in the Internet Age

The structural force making AI search increasingly indispensable

The value of AI search derives not only from technological progress on the supply side but also from a structural change on the demand side — the systematic degradation of human language articulation and deep cognitive abilities in the internet age.

“Brain Rot”: Academic Evidence Behind the 2024 Word of the Year

Oxford University Press named “brain rot” the 2024 Word of the Year. A meta-analysis published in Psychological Bulletin (based on 71 studies and nearly 100,000 participants) confirmed that higher short-video consumption frequency correlates with poorer cognitive performance. Data from MIT’s cognitive science research center shows deep reading ability is declining at a rate of 12% per year. Research from the University of Stavanger reveals that approximately 40% of Gen Z is losing the ability to communicate through handwriting.

Cognitive Offloading: From the “Google Effect” to the “AI Effect”

Academia has termed this phenomenon “cognitive offloading” — humans transferring memory, reasoning, and problem-solving tasks to technological tools. Over the past 10–15 years, baseline human cognitive load has declined by approximately 20%. The search engine era produced the “Google Effect” (people stopped memorizing information they could search for), and the AI era takes this further: people no longer perform the keyword-refinement thinking that search engines demanded.

A study of 666 participants found a significant negative correlation between frequent AI tool use and critical thinking ability, with cognitive offloading as a mediating factor. Even more alarming, the century-long upward trend in IQ scores (the Flynn Effect) has reversed in industrialized nations.

The Causal Loop: Human cognitive degradation → decline in language articulation ability → increasing difficulty expressing complex information needs precisely → the “articulation threshold” of traditional keyword search becomes a massive barrier → AI eliminates this barrier by understanding vague intent → information alignment becomes AI’s most irreplaceable core value. Human decline and AI advancement converge in the same time window, mutually reinforcing each other.

Investment Implications of Irreversibility

The most important characteristic of cognitive degradation is its irreversibility. The attention fragmentation of the short-video era, the erosion of deep thinking capacity from fragmented reading, and the accelerated cognitive offloading driven by AI itself — all are one-way processes. There is no evidence that human population-level articulation ability and deep reading capacity will recover in the future. This means the demand for AI search information alignment is not cyclical (like demand fluctuations caused by economic cycles) but structural and irreversible. For AI companies, the return on investment in search infrastructure is long-term and certain — this is not a fad that may fade, but a structural trend that will only deepen.


SECTION 05 · Intent Refinement

Precision from Vagueness: The Micro-Mechanism of AI Search

How AI bridges the gap between fuzzy intent and precise information retrieval

Nielsen Norman Group (a leading global user experience research organization) has provided precise descriptions of the micro-mechanism of AI search. Their research found that users actively turn to AI tools when they are uncertain about what they are looking for or do not know how to describe their search target. Traditional search requires users to provide specific keywords, whereas AI offers greater flexibility when the search space is unfamiliar.

“Keyword Foraging”: The Search Before the Search

NN/g coined the term “keyword foraging” — users must first conduct a preparatory round of searching to figure out what keywords to use for their actual search. For example, a user wanting to buy a bartender’s Y-shaped peeler but not knowing it is called a “channel knife” can only resort to trial and error in a search engine. AI eliminates this intermediate step entirely.

MIT Technology Review’s observations corroborate this: with AI search, you don’t need to be able to precisely articulate what you’re looking for. You can describe what that bird in your yard looks like, what seems to be wrong with your refrigerator, or the strange noise your car is making, and receive an accurate answer. Once accustomed to this mode of searching, dependency follows naturally.

Parallel Validation in the Chinese Market

Frost & Sullivan’s “2025 China AI Search Industry White Paper” confirms that traditional search struggles with semantically complex or polysemous long-tail queries, while AI search uses NLP and deep learning models to deeply parse user intent. Chinese AI search products such as Metaso’s “Think First, Search Second” mode have evolved search from “finding known information” to “solving unknown problems.”

Core Insight: AI is not merely helping people search — it is helping people “think through” what they actually need. The starting point of information alignment is not even the search behavior itself, but “self-discovery of intent” — something traditional search engines have never been able to do.

SECTION 06 · User Segmentation

Producers vs. Consumers: The Dual Structure of AI Usage

The overlooked divide that defines where the real demand lives

AI usage scenarios contain a dual divide that is largely overlooked in industry discussion: the usage patterns of producer-side users (Producers) and consumer-side users (Consumers) are fundamentally different.

| Dimension | Producer-Side Users | Consumer-Side Users |
| --- | --- | --- |
| Typical Profile | Developers, designers, content creators | General users, students, professionals across industries |
| Core Usage | AI coding, image/video generation | AI search, information retrieval, decision-making consultation |
| Population Scale | Small (coding accounts for only 4.2% of ChatGPT messages) | Massive (information search + practical guidance = 53%) |
| Community Voice | Extremely high (highly active on Reddit/HN) | Extremely low (the silent majority) |
| Willingness to Pay | High (multiple tool subscriptions) | Low (primarily free-tier users) |
| Dependence on Search | Moderate (mainly for API documentation) | Core need |

Key supporting data: in a U.S. survey, 60% of users listed “searching for information” as the number one use of AI. Among teen users, information search led all use cases at 57%. Anthropic’s data shows that “computer and math” activity accounts for 37–40% on the Claude platform — but this reflects Claude’s specific developer-heavy user base. Projected onto ChatGPT’s full 800 million weekly active users, coding accounts for only 4.2%, while information search plus practical guidance together exceed 53%.

The Industry Implication of the “Silent Majority”: AI coding and AI image generation garner far more attention in industry discussions than AI search — because the loudest voices come from producer-side users. But the engine supporting 800 million weekly active users and 2.5 billion daily messages is the information alignment needs of consumer-side users. The commercial competitiveness of AI companies hinges on their ability to serve this silent majority.

SECTION 07 · Behavioral Paradigm

The New Behavioral Inertia of the AI Era: Converse → Refine → Search → Align

How search behavior is evolving from single transactions to multi-turn dialogues

Search behavior is evolving from “one-shot transactions” to “multi-turn conversations.” Users no longer enter keywords and receive a list of links; instead, they progressively refine their true intent through iterative dialogue with AI, which then executes precise retrieval to achieve information alignment.

From “Single-Query” to “Conversational Discovery”

LLM monitoring data shows that in August 2025, the average ChatGPT conversation lasted 5.2 turns, with 50.6% of conversations being multi-turn. Academic research notes that users often begin with vague, under-specified, or even internally inconsistent goals, only gradually clarifying their true needs through iterative dialogue with the model.

“Prompt Fluency” Is the New Search Literacy

The core literacy of the traditional search era was “keyword construction ability” — knowing which words to search for. The core literacy of the AI era is shifting to “prompt fluency” — knowing how to converse with AI to express one’s needs. As AI search platforms accumulate sufficient user context (device, location, search history, preferences, past conversations), the amount users need to explicitly state in their prompts will steadily decrease.
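This context accumulation can be sketched as folding stored profile signals into the retrieval query, so the user's explicit prompt can stay terse. The profile fields below are illustrative assumptions, not any platform's actual schema.

```python
# Sketch: accumulated user context shrinks what the user must type.
# The `profile` schema is a hypothetical illustration.

def expand_query(user_text: str, profile: dict) -> str:
    """Fold stored context into the retrieval query so the user can stay terse."""
    hints = []
    if "city" in profile:
        hints.append(f"near {profile['city']}")
    if "recent_topics" in profile:
        hints.append("related to " + ", ".join(profile["recent_topics"][-2:]))
    return " ".join([user_text, *hints])

profile = {"city": "Seoul", "recent_topics": ["espresso machines", "grinders"]}
print(expand_query("best budget option?", profile))
# → best budget option? near Seoul related to espresso machines, grinders
```

The user typed four words; the platform supplied the rest from history — which is precisely how prompt fluency demands decline as context accumulates.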

Jakob Nielsen (a founding figure of the usability field) defined this transition as the third epoch of UX: the Internet era (1995–2025) aimed to “influence” users to buy and subscribe; the AI era (from 2026) shifts toward “augmenting human existence” — helping humans make better decisions, imagine, and judge.

Business Implications of Behavioral Inertia: When “refining intent through AI dialogue → executing precise search → completing information alignment” becomes the daily behavioral habit of 800 million users, the essential nature of competition among AI companies becomes: Who can understand vague intent more accurately? Who can complete information retrieval faster? Who can provide fresher information at lower cost? These three questions define the search competitiveness of AI companies in the GEO era.

SECTION 08 · Usage Share

The Trendline: Search’s Rising Share of AI Usage Scenarios

A clear upward trajectory emerges from longitudinal usage data

Observing the shifting shares of various ChatGPT use cases over time reveals a clear trendline.

| Usage Category | Early Period (2023–Early 2024) | Current (Mid-2025) | Trend |
| --- | --- | --- | --- |
| Writing | ~36% | ~24% | ↓ Continuous decline |
| Information Retrieval | ~14% | ~24% | ↑ The only category with sustained growth |
| Practical Guidance | ~28% | ~29% | → Stable |
| Coding | ~4% | ~4.2% | → Stable (niche) |
| Personal Expression | ~11% | ~11% | → Stable |

Information retrieval is the only category with sustained growth among all ChatGPT usage types, rising from 14% to 24% — a 71% increase. Meanwhile, writing dropped from 36% to 24%, and the two have now converged. If “practical guidance” (29%) — essentially personalized information alignment — is included in the broader information search category, then broad-sense information alignment already accounts for 53%, far exceeding any other single use case.

Broader data also supports this trend: total global search volume (search engines + AI search) grew 26%, with 16% growth in the U.S. AI search platforms saw average monthly traffic growth exceeding 721% over the past year. AI search traffic has a conversion rate of 14.2%, compared to just 2.8% for Google’s traditional search. 75% of users report using AI search tools more frequently than a year ago, with 43% using them daily.

Trend Projection: As RAG and search capabilities continue to mature, the center of gravity for LLM usage is shifting from “generation” toward “search and alignment.” Users initially treated AI as a writing tool (because search capabilities were not yet mature), but once search was fully realized, they rapidly reverted to their most fundamental need. Generation is merely the means; alignment is the purpose.

SECTION 09 · Competitive Landscape

Competition in the GEO Era: Search IS the Moat

Why search infrastructure is becoming the decisive differentiator

Synthesizing the preceding analysis, the competitiveness of AI companies in the GEO era can be decomposed into three dimensions:

Dimension One: Search Quality (Information Alignment Precision)

Who can more accurately understand users’ vague intent, retrieve the most relevant information, and present it in a coherent synthesis? This directly determines user retention and usage frequency. All current mainstream AI search systems — Google AI Overviews, ChatGPT Search, Perplexity, Claude — run on RAG or its variants, but search quality differences are enormous.
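Though these systems differ greatly in quality, they share the same retrieve-then-synthesize skeleton, which the sketch below shows in its simplest form. `search_index` and `llm` are stand-ins for illustration, not any vendor's actual API.

```python
# Minimal shape of the retrieve-then-synthesize loop behind RAG-style
# AI search. Both backends are stubs; only the control flow is the point.

def search_index(query: str, k: int = 3) -> list[dict]:
    # Stand-in for vector/keyword retrieval over an index.
    return [{"url": f"https://example.com/{i}", "snippet": f"passage {i} about {query}"}
            for i in range(k)]

def llm(prompt: str) -> str:
    # Stand-in for a model call.
    return f"Synthesized answer based on prompt of {len(prompt)} chars."

def grounded_answer(query: str) -> str:
    docs = search_index(query)
    context = "\n".join(f"[{i+1}] {d['url']}: {d['snippet']}"
                        for i, d in enumerate(docs))
    prompt = (f"Answer using ONLY the sources below, citing them as [n].\n"
              f"Sources:\n{context}\n\nQuestion: {query}")
    return llm(prompt)
```

Quality differences between vendors live almost entirely inside the two stubs — how well the index retrieves, and how faithfully the model synthesizes and cites — not in this outer loop, which is commoditized.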

Dimension Two: Search Cost Control (Commercial Sustainability)

With Microsoft shutting down the legacy Bing API and search data acquisition costs escalating sharply, who can provide equivalent-quality search services at lower cost? Building proprietary search indexes (like Perplexity), optimizing caching strategies, and fine-tuning hybrid architectures are all weapons in the cost competition.

Dimension Three: Search Infrastructure Autonomy (Strategic Security)

AI companies that fully depend on Google or Bing APIs face constant risk of price increases, service interruptions, or data restrictions. The degree of autonomous control over search infrastructure determines an AI company’s long-term strategic security.

Search Competitiveness Matrix of Four AI Companies

| Company | Search Index | Search Strategy | Autonomy | Competitive Position |
| --- | --- | --- | --- | --- |
| Google | Proprietary (world’s largest index) | AI Overviews / AI Mode native integration | Fully autonomous | Platform depth: Search + Workspace + Cloud |
| OpenAI | Dependent on Bing API + building SearchGPT | Vertical integration, embedded in ChatGPT | Moderate (constrained by Microsoft) | Model + API + consumer platform integration |
| Perplexity | Self-built (200B+ URLs, 400PB hot storage) | Search-as-product, model-agnostic architecture | Highly autonomous | AI-native search engine, 38-person team / $18B valuation |
| Anthropic | Third-party search API calls | Safety-first, MCP open protocol | Low (dependent on external search) | Safety + open standards, strong in coding/long documents |

This matrix reveals a critical divide: Google and Perplexity possess autonomous search infrastructure, while OpenAI and Anthropic depend on third parties. Against the backdrop of increasingly homogenized model capabilities (price wars have already begun), search infrastructure autonomy is becoming the more durable source of differentiation. Perplexity’s case is particularly noteworthy — a team of just 38 people, armed with a self-built search index covering 200 billion URLs and a model-agnostic architecture, has already surpassed many companies with thousands of engineers on the search competitiveness dimension.

Core Thesis: In the GEO era, search capability is not an add-on feature for AI companies — it is the core infrastructure of their commercial competitiveness. The gap in model capabilities is narrowing (price wars have begun), while the gap in search capabilities — including search quality, search cost, and search autonomy — is becoming the decisive factor separating winners from losers.

SECTION 10 · Limitations & Rebuttals

Possible Counterarguments and Responses

Stress-testing the thesis against likely objections

Objection 1: “Model capability still matters more than search. Smarter models are the real competitive advantage.”

Response: The gap in model capabilities is closing rapidly. From 2024 to 2026, LLM inference costs have been falling at a rate of 50–200x per year, and GPT-5.2, Gemini 3.1 Pro, and Claude Opus 4.6 now differ negligibly on most benchmarks. When models become commoditized, the differentiated services built around models — especially search quality — become the true determinant of user retention. As Perplexity’s case demonstrates: it does not develop frontier models, uses a model-agnostic architecture, yet achieved an $18 billion valuation through search capability alone.

Objection 2: “When the Agent era arrives, search will be replaced by automated execution. Users won’t need to ‘search’ — AI agents will complete tasks for them directly.”

Response: Agents’ automated execution is not a substitute for search but an extension of it. For any Agent to execute a task (shopping, booking, managing), the prerequisite step is still information retrieval — an Agent must “know” before it can “do.” As of 2026, only 24% of consumers are comfortable with AI agents making autonomous purchases; Agents remain in early stages. Even when Agents mature, search will transition from “explicitly triggered by users” to “implicitly called by Agents” — but the demand for search infrastructure will increase rather than decrease.

Objection 3: “Google’s proprietary search index gives it an insurmountable advantage. Other AI companies can never compete with Google on search.”

Response: Google’s search index advantage is indeed formidable, but “search competitiveness” in the GEO era is not equivalent to “search index scale.” Perplexity, with a 200 billion URL index (far smaller than Google’s) and a 38-person team, has earned strong user recognition for its AI search experience. The key differentiator is not the absolute size of the index but rather: semantic understanding precision, multi-source information synthesis capability, and the architectural ability to seamlessly fuse search results with LLM reasoning. On these dimensions, companies focused on AI search can — and already do — outperform Google’s general-purpose search.

Objection 4: “Search costs will naturally decline with technological progress and won’t constitute a long-term competitive barrier.”

Response: Inference costs are indeed falling rapidly, but the trajectory of search costs is more complex. Microsoft shut down the Bing API and raised replacement prices by 40%–483%; Google’s search data is likewise not freely accessible. Search costs include not only compute (which will decrease) but also data access rights (becoming more monopolized, prices rising) and index maintenance (fixed-scale costs). Amid the data monopolization trend, autonomous control over search infrastructure is becoming increasingly important, not less.


SECTION 11 · Conclusion

Conclusion and Outlook

Synthesizing six dimensions of evidence into a unified thesis

This paper has argued for the central importance of search competitiveness to AI companies in the GEO era across six dimensions: cost structure, the causal engine of user growth, demand driven by human cognitive degradation, the dual segmentation of users, the shift in behavioral paradigms, and the rising share of search in usage patterns.

First, search is the fastest-growing component of AI companies’ cost structures. API fees for real-time search, the shutdown and price hikes of Bing API, and the linear increase in inference costs as search features are grafted on — search costs are becoming the most critical financial management challenge for AI companies.

Second, search is the causal engine of user growth. ChatGPT’s leap from 100 million to 800 million users closely coincided in timing with the integration and refinement of search features. Search transformed LLMs from “an interesting tool” into “indispensable infrastructure.”

Third, the demand for search stems from the structural trend of human cognitive degradation. The fragmented information consumption of the internet age has systematically eroded deep reading, precise articulation, and independent thinking abilities, while AI bridges this gap by understanding vague intent. This demand will only intensify, never diminish.

Fourth, search serves the “silent majority.” AI coding and image generation are enormously loud in community discussions, but in actual usage, information search and alignment account for 53% of the total share — this is the true foundation sustaining AI product daily active users.

In this epochal transition where GEO replaces SEO, competition among AI companies is no longer merely about “whose model is smarter” but about “whose search is better, cheaper, and more autonomous.” Search competitiveness is the core competitiveness of AI companies in the GEO era.

References

  1. TechCrunch (2025). “Leaked documents shed light into how much OpenAI pays Microsoft.”
  2. Ed Zitron / Where’s Your Ed At (2025). “Exclusive: Here’s How Much OpenAI Spends On Inference.”
  3. The Register (2025). “OpenAI has spent $12B on inference with Microsoft.”
  4. PPC Land (2025). “Microsoft ends Bing Search APIs on August 11.”
  5. Open Markets Institute (2025). “Microsoft’s Monopoly on Bing Search Data.”
  6. Computerworld (2023). “Microsoft more than triples Bing Search API prices.”
  7. Search Engine Journal (2025). “Timeline Of ChatGPT Updates & Key Events.”
  8. DemandSage (2026). “ChatGPT Statistics – Active Users & Growth Data.”
  9. Textero.io (2025). “ChatGPT Users Statistics 2025: Global Growth.”
  10. Chatterji, A. et al. (2025). “How People Use ChatGPT.” NBER Working Paper No. 34255.
  11. Nielsen Norman Group (2025). “GenAI for Complex Questions, Search for Critical Facts.”
  12. Nielsen Norman Group (2025). “How AI Succeeds (and Fails) to Help People Find Information.”
  13. MIT Technology Review (2025). “AI is weaving itself into the fabric of the internet.”
  14. Frost & Sullivan (2025). “2025 China AI Search Industry White Paper.”
  15. Gerlich, M. (2025). “AI Tools in Society: Impacts on Cognitive Offloading.” Societies, 15(1).
  16. Pressenza (2025). “The decline of the intelligence quotient in the digital age.”
  17. IE Business School (2025). “AI’s cognitive implications: the decline of our thinking skills?”
  18. Nguyen, L. et al. (2025). “Feeds, feelings, and focus.” Psychological Bulletin.
  19. Jakob Nielsen / UXTigers (2026). “Intent by Discovery: Designing the AI User Experience.”
  20. iPullRank (2025). “User Behavior in the Generative Era: From Clicks to Conversations.”
  21. Superlines (2026). “AI Search Statistics 2026: 60+ Data Points.”
  22. Orbit Media / QuestionPro (2026). “The AI-Search Adoption Survey.”
  23. a16z (2025). “State of Consumer AI 2025.”
  24. Menlo Ventures (2025). “2025: The State of Consumer AI.”
  25. DataReportal (2025). “Digital 2026: more than 1 billion people use AI.”
  26. Exposure Ninja (2026). “AI Search Statistics for 2026: CMO Cheatsheet.”
  27. Pew Research Center (2026). “AI Chatbot Use Among U.S. Teens.”
  28. Graphite (2026). “Total search usage combining search engines and LLMs.”
  29. Backlinko/Semrush (2026). “Perplexity AI User and Revenue Statistics.”
  30. DemandSage (2026). “Perplexity AI Statistics – Active Users & Revenue.”
  31. ByteByteGo (2025). “How Perplexity Built an AI Google.”
  32. MindStudio (2026). “Anthropic vs OpenAI vs Google: Three Different Bets on the Future of AI Agents.”
  33. Sacra (2025). “Perplexity revenue, valuation & funding.”

“In the GEO era, the AI company that searches best doesn’t just answer questions — it becomes the infrastructure through which humanity aligns with knowledge.”

LEECHO Global AI Research Lab · Claude Opus 4.6 · Anthropic
V2 · April 8, 2026
