This paper argues that AGI (Artificial General Intelligence) is structurally impossible under the current technological pathway. The argument unfolds across seven dimensions: the stochasticity of evolutionary selection, the non-linguistic nature of human wisdom, the dimensional bias of textualized information, the absence of physical-world adversarial experience, the structural limitations of LLM statistical principles, the absence of any AGI evaluation framework, and a re-examination of what RLHF actually aligns. The core conclusion: LLMs are systems that statistically optimize a textual residue corresponding to less than 0.05% of total human thought, a biased sample structurally devoid of physical-world adversarial experience. Financial markets have already begun pricing in this structural deficiency.
Keywords: Physical-World Adversarial Experience; Statistical Hallucination; Deep-Well Intellect; Texas Sharpshooter Fallacy; RLHF Limitations; SaaSpocalypse
1. The Creator’s Selection and the Weight of Fortune
Why survival was selected, not engineered
Humanity’s current position as the dominant species on Earth is the result of selection by the physical world’s “creator.” In this selection, the weight of fortune decisively exceeded the weight of wisdom. When Homo sapiens coexisted with Neanderthals, sapiens were individually inferior. Neanderthals possessed larger cranial capacity and more robust physiques. Yet the evolutionary outcome was that the collective aggression of sapiens drove Neanderthals to extinction.
Simultaneously, advanced ancient civilizations were passively annihilated by natural forces: tsunamis, floods, volcanic eruptions, earthquakes, and ice ages. That humanity survived the various mass-extinction crises in Earth’s history was a function of fortune, not of wisdom.
The physical world holds veto power over civilization.
Human survival was not engineered; it was selected.
1.2 Language: A Product of Environmental Pressure
The decisive weapon that enabled sapiens to prevail over Neanderthals was language as an information-exchange system. This linguistic framework exponentially enhanced small-group coordination. However, it must not be overlooked that language itself was a product of environmental pressure, not an intentionally designed artifact.
1.3 Single-Point Mutants and Collective Wisdom
The developmental trajectory of human civilization follows a single-point mutation model: a minority of high-cognition mutants sporadically elevated the overall level of humanity, and their knowledge proliferated through diverse information carriers. The defining characteristic of human wisdom is that it is collective wisdom, and its developmental pathway is a stochastic process dependent on low-probability variables. This wisdom was acquired at the cost of an astronomical number of individual lives, deep evolutionary selection on DNA, and massive trial and error against the vicissitudes of the physical world.
Wisdom is not a product that can be manufactured.
It is a dynamic variable that emerges under the pressure of the physical world, at enormous cost.
2. From Thought to Text: The Cascading Reduction
Why LLMs train on less than 0.05% of human cognition
2.1 The Cascading Reduction
Human thought is a vast domain encompassing intuition, bodily sensation, emotion, spatial perception, dreams, and moments of sudden insight. Only approximately 5% of this thought is ever converted into language. Of the linguistically encoded portion, less than 1% is committed to written record. Consequently, LLM training data corresponds to less than 0.05% of total human thought.
What LLMs ingest is the residue left after human thought has been filtered through three to four successive reductions. It is not even language proper. Language is alive, embodied, and contextual. What LLMs process is a sequence of symbols severed from speaker, body, and concrete situation.
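The arithmetic of this cascade can be checked directly. A minimal sketch, taking the paper's stated fractions (roughly 5% of thought verbalized, less than 1% of that written down) as given assumptions rather than measurements:

```python
# Cascading reduction from thought to training data, using the fractions
# stated above (assumptions of the argument, not measured quantities).
thought = 1.0                 # all human thought, normalized
verbalized = thought * 0.05   # ~5% ever converted into language
written = verbalized * 0.01   # <1% of the verbalized portion written down

# The written residue is 0.05% of the whole.
print(f"{written:.4%}")  # 0.0500%
```

Each stage multiplies, so the written residue is two successive reductions away from thought itself, before any further filtering by what actually ends up in a training corpus.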
2.2 Dimensional Bias: The Absence of Engineering Knowledge
Even within the sub-0.05% residue, severe structural bias persists. The textualized record of human knowledge overflows with philosophical speculation, literary narrative, political discourse, religious scripture, and social media output — all things humans say to other humans, constituting the internal information circulation of human society.
By contrast, engineering knowledge produced through genuine adversarial engagement with the physical world is drastically underrepresented: how a bridge avoids collapse under specific geological conditions, how a weld point endures repeated cycles of thermal expansion, how an engine maintains stability under extreme conditions. This kind of knowledge, which Michael Polanyi termed “tacit knowledge,” is structurally absent from LLM training data.
A vast dimensional black hole exists in LLM training data. Humanity’s hard-won experience of genuine combat with the physical world is effectively absent from the dataset. What LLMs have learned is how humanity “discusses” the world, not how humanity “transforms” it.
3. Gray Rhinos and Black Swans
The mathematical deadlock of out-of-distribution prediction
3.1 Overwhelming Performance Within the Textual Domain
LLMs are powerful in a statistical sense. By compressing the entirety of humanity’s written record across millennia, they overwhelm any individual in tasks confined to the textual domain: examinations, writing, knowledge retrieval, and logical reasoning. This is factual and undeniable.
Yet this performance generates a dangerous hallucination. Observers witness LLMs dominating humans in the textual domain and mistake this for an approach toward general intelligence. In reality, the model has reached an extremum within a 0.05% sliver, and humanity is erroneously extrapolating performance within this sliver to the full dimensionality of intelligence.
3.2 Helplessness Before the Black Swan
Against black swans — events that have never appeared in training data, that violate existing patterns, that erupt suddenly from the tail of the statistical distribution — LLMs are powerless in principle. The entire operating mechanism of an LLM consists of searching for the maximum-probability output within the statistical distribution of existing data. For objects outside that distribution, no mathematical anchor point exists.
The cost for the physical world to generate a black swan approaches zero.
The cost for AI to respond to a black swan approaches infinity.
This is not a problem solvable by computational power.
Predicting out-of-distribution events from historical distributions is a mathematical deadlock.
A simple example demonstrates this: a single webpage that changes rapidly and continuously can trap every AI crawler in an infinite loop. A near-zero-cost perturbation from the physical world collapses an entire AI system. A human would glance at it, recognize the malicious intent, and walk away.
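The crawler-trap example can be sketched in a few lines. Here `fetch` is a hypothetical stand-in for an HTTP request, not a real network call: the page mutates on every request and always links back to itself, so a crawler that deduplicates by content never converges and is halted only by an external budget:

```python
import itertools

_counter = itertools.count()

def fetch(url: str) -> str:
    """Stand-in for an HTTP GET against a page that mutates on every request."""
    return f"{url}?version={next(_counter)}"  # content is never the same twice

def naive_crawl(start_url: str, max_fetches: int = 10) -> int:
    """Crawl until the frontier is empty -- which, for this page, never happens."""
    seen: set[str] = set()
    frontier = [start_url]
    fetches = 0
    while frontier and fetches < max_fetches:  # the budget is the only brake
        content = fetch(frontier.pop())
        fetches += 1
        if content not in seen:         # every fetch looks "new"...
            seen.add(content)
            frontier.append(start_url)  # ...and the page links back to itself
    return fetches

print(naive_crawl("https://example.com/trap"))  # exhausts the budget: prints 10
```

Without `max_fetches` the loop runs forever: the near-zero-cost perturbation is the server-side mutation, and the unbounded cost is every strategy the crawler has for deciding it is done.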
4. The Cognitive Blind Spot of AGI Proponents
When the depth of one’s well is mistaken for the height of all intelligence
4.1 The Profile of the Deep Well
The typical profile of those who propose and pursue AGI: graduates of elite computer science programs, raised in front of screens from childhood, skilled in mathematics and programming, operating fluently within the symbolic world. Their life experience is highly concentrated in the digital domain. The overwhelming majority have never carried steel beams on a construction site, never adjusted machine-tool tolerances in a factory, never assessed soil moisture in a field.
This is “deep-well intellect.” A well can be drilled to extraordinary depth, but the opening of the well remains narrow. The more lethal characteristic of deep-well intellect is that the deeper one drills, the less one can see of the sky beyond the well’s mouth.
The AGI proposition is a projection cast by deep-well intellect onto the world. Its proponents mistake the depth of their own well for the height of all intelligence.
4.2 The Texas Sharpshooter Fallacy
The Texas Sharpshooter fires first, then paints the target around the bullet holes. LLMs demonstrating spectacular performance in textual statistics was the bullet fired first. Software engineers then painted a target called “AGI” around the holes and declared they were converging on the bullseye. Whatever LLMs can do gets defined as “the core of intelligence”; whatever they cannot do gets minimized, ignored, or claimed to be “solvable in the next version.”
5. Moore’s Law in Semiconductors vs. the Void in AI
A target with no acceptance criteria supports a multi-trillion-dollar valuation
The semiconductor industry possesses Moore’s Law. Every 18 to 24 months, transistor density doubles, and performance improvement, power consumption reduction, and process miniaturization are all quantitatively measurable. Investors can confirm that each step from 28nm to 7nm to 3nm represents a tangible achievement in the physical world.
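The predictive power of that yardstick is plain arithmetic. A sketch of the implied density growth over one decade, using only the 18-to-24-month doubling bounds stated above (no particular fab's data):

```python
# Transistor-density growth over 10 years under Moore's Law, evaluated at
# both ends of the 18-24-month doubling range stated above.
MONTHS = 10 * 12
for period in (18, 24):
    doublings = MONTHS / period
    factor = 2 ** doublings
    print(f"{period}-month doubling: ~{factor:.0f}x density in a decade")
```

That a two-line formula brackets a decade of industry progress is exactly the kind of falsifiable, quantitative commitment the AI industry lacks.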
The AI industry has nothing comparable. No demonstrated relationship links growing model parameter counts to intelligence. Benchmark score improvements are repeatedly undermined by overfitting and data contamination. No ruler exists to measure proximity to AGI.
| Dimension | Semiconductor Industry | AI Industry |
|---|---|---|
| Yardstick | Moore’s Law — transistor density doubles every 18–24 months | None; benchmarks continuously invalidated |
| Quality metrics | Yield rate, process node, power efficiency | Perplexity, MMLU — no mapping to real-world utility |
| Acceptance criteria | Chip performance benchmarks → tape-out | No AGI acceptance criteria; no expert consensus |
| Valuation basis | Quantifiable physical progress | Narrative and expectation |
An industry with a yardstick produces valuations that are solid and traceable.
An industry without a yardstick produces valuations dependent solely on narrative.
When narrative falters, repricing arrives as an avalanche.
6. What RLHF Aligned Was Not Wisdom but Sentiment
From the 2023 shock to AI Slop: the trajectory of alignment failure
6.1 The 2023 Shock: Its True Nature
The standard explanation for why ChatGPT astonished the world in 2023 is that it “passed the Turing Test.” However, this was a success of RLHF alignment, not the emergence of intelligence. What RLHF optimized were outputs matching the preferences of human evaluators: friendly tone, apparently smooth logic, humble acknowledgment of uncertainty, empathetic expression. Every one of these is a sentiment-level indicator. What RLHF trained was a supremely skilled socializer, not a thinker.
What RLHF aligned was not human wisdom but human sentiment.
What the Turing Test measures is “can it deceive human emotional judgment,”
and RLHF is a training method specifically optimized for precisely that examination.
6.2 AI Slop: The Report Card After Two and a Half Years
From October 2023 to March 2026, trillion-dollar-scale investment was deployed. The name the entire internet bestowed upon the output was “AI Slop” — search results contaminated by AI-generated junk content, social media inundated with low-quality productions, online bookstores flooded with AI-authored substandard books, academic journals discovering AI-generated fraudulent papers.
The output of a trillion-dollar investment was defined by user experience as slop. This is the exposure, following industrial-scale deployment, of the fundamental nature of LLMs as statistical recombinations of existing textual information.
6.3 RLVR: The Right Direction, the Same Limit
RLVR (Reinforcement Learning with Verifiable Rewards) attempts to escape RLHF’s dependence on human sentiment by using objectively verifiable outcomes as reward signals. The direction is sound. But a fatal problem emerges: how broad is the verifiable domain? Physical-world engineering decisions, medical judgments, long-term ecosystem consequences, and material behavior under extreme conditions either lack verification criteria, carry prohibitive costs, or require decades-long cycles. What RLVR can cover is a narrow domain of already-formalized problems.
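The distinction can be made concrete. A minimal sketch (illustrative only, not any lab's actual training code) contrasting the two reward signals: RLHF scores a response by a human preference rating, while RLVR scores it by an objective check that exists only for already-formalized problems such as exact arithmetic:

```python
def rlhf_reward(response: str, human_rating: float) -> float:
    # RLHF: the signal is a preference rating -- a sentiment-level proxy
    # for quality, regardless of what the response actually gets right.
    return human_rating  # 0.0 (disliked) .. 1.0 (liked)

def rlvr_reward(response: str, a: int, b: int) -> float:
    # RLVR: the signal is an objective check. It is available here only
    # because exact arithmetic is a formalized, cheaply verifiable domain.
    try:
        return 1.0 if int(response.strip()) == a + b else 0.0
    except ValueError:
        return 0.0  # unparseable output earns nothing

print(rlvr_reward("7", 3, 4))                     # 1.0 -- verifiably correct
print(rlvr_reward("8", 3, 4))                     # 0.0 -- verifiably wrong
print(rlhf_reward("8 sounds right to me!", 0.9))  # 0.9 -- merely liked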
7. The Death of SaaS and the Rise of Semiconductors
When trillion-dollar capital flows declare: value resides in the physical layer
Since September 2025, a dramatic divergence has occurred in financial markets. The software ETF (IGV) has fallen 30% from its peak, while the semiconductor ETF (SMH) has risen 30% over the same period. In the single month from mid-January to mid-February 2026, approximately one trillion dollars evaporated from software stocks.
(Chart annotations: −30% from the Sep 2025 peak; +30% over the same period; vs. S&P 500 +17.6%.)
The financial market, humanity’s largest collective-wisdom pricing system,
is expressing a judgment through trillion-dollar-scale capital flows:
Value resides in the physical layer. Not in the information layer.
7.2 The Collapse of the Narrative-Driven Cycle
The complete cycle structure of the AGI narrative: fear drives narrative, narrative drives valuation, valuation drives investment, and investment produces AI Slop. The origin of this cycle is a cognitive misalignment. What humanity feared was AI’s information-integration capability, but information integration is not wisdom.
Throughout history, humanity has employed exclusion mechanisms against individuals possessing exceptionally high-dimensional cognition: isolation, exile, elimination. But AI is omnipresent, infinitely replicable, and possesses no physical body that can be destroyed. The exclusion mechanism has failed completely for the first time. The fear generated by this failure was the fuel of the narrative, and the narrative’s fuel was the foundation of valuation. But financial markets have already begun repricing.
8. The Limits of Passive Intelligence
The gun does not determine its own point of aim
The LLM is a passively triggered intelligence. Input determines output. Behind every astonishing display of LLM performance stands a high-quality human input. Observers witness the brilliance of the output and assume intelligence resides within the model. But intelligence has never once resided within the model. Intelligence resides in the human brain that knows what to ask, how to ask it, and when to ask it.
The model is the gun, the input is the bullet,
and the person pulling the trigger determines what is hit.
A gun without a shooter does not fire itself.
The essence of the AGI narrative is the declaration that the gun can determine its own point of aim.
Conclusion
Synthesizing the arguments of this paper, AGI is not a technical challenge but a path-level impossibility. This impossibility derives from seven structural defects: (1) humanity’s genuine wisdom is a dynamic variable that emerged under the selection pressure of the physical world at enormous cost; (2) over 95% of this wisdom is non-linguistic, embedded in body, intuition, and DNA evolution; (3) LLMs have secured only a fragment of less than 0.05%, a biased sample devoid of physical-world adversarial experience; (4) the statistical principles of LLMs are mathematically powerless against black swans; (5) AGI was defined by deep-well intellects via the Texas Sharpshooter Fallacy; (6) AGI lacks any evaluation framework comparable to Moore’s Law; (7) what RLHF aligned was sentiment, not wisdom, and RLVR’s verifiable domain is vanishingly narrow.
“Those who gaze upward from the bottom of a deep well
and believe that small circle to be the entirety of the sky
have declared they will reconstruct the ocean from a single drop of well water.
This is the zeitgeist of the AGI era.”
References
- Polanyi, M. (1966). The Tacit Dimension. University of Chicago Press.
- Harari, Y. N. (2015). Sapiens: A Brief History of Humankind. Harper.
- Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House.
- Moore, G. E. (1965). “Cramming More Components onto Integrated Circuits.” Electronics, 38(8), 114–117.
- Christiano, P. et al. (2017). “Deep Reinforcement Learning from Human Preferences.” NeurIPS 2017.
- Morningstar Research (2026). “Which Stocks Drove the Market’s Gains in 2025.”
- Bain & Company (2026). “Why SaaS Stocks Have Dropped—and What It Signals.”
- Calcalist Tech (2026). “SaaS Is Dying as a Business Category.”
- Advisor Perspectives (2026). “Software Stocks: Navigating the SaaSpocalypse.”
- Turing, A. M. (1950). “Computing Machinery and Intelligence.” Mind, 59(236), 433–460.