LEECHO AI RESEARCH PAPER · 2026

Normal Regression:
The Abyss of AI Matrix Computation

When one-dimensional symbolic computation attempts to overwrite the multidimensional physical world — On the ontological boundaries of artificial intelligence and the ultimate limits of the data-electronification pathway

LEECHO Global AI Research Lab & Claude Opus 4.6
Human–AI Collaborative Dialogue Research
March 22, 2026


Abstract

Through human–AI collaborative dialogue, this paper derives the ontological boundaries of AI systems starting from the normal distribution convergence inherent in recommendation algorithms. The core thesis: all AI computation is fundamentally matrix transformation of one-dimensional symbolic information — regardless of model complexity, it cannot cross the dimensional chasm between the symbolic world and the physical world. The paper proposes the “Principle of One-Dimensionality” — from input token sequences, through intermediate matrix multiplication, to output token sequences, the entire pipeline is locked within one-dimensional information space with no genuine dimensional transcendence. It further argues that AI is a high-efficiency recombination engine for known information rather than a discovery tool for unknown information; that AIGC’s production-side efficiency gains are being offset by consumption-side “uncanny valley rejection”; and that the AGI narrative is structurally equivalent to the metaverse bubble. Ultimately, this paper provides an ontology-grounded positioning framework for AI’s capability boundaries, intended as a theoretical compass for the industry’s transition from blind expansion to precision positioning.

Chapter 01

Normal Convergence: The Shared Pathway from Recommendation Algorithms to AI

The structural isomorphism of information homogenization across recommendation systems, big data analytics, and AI

Recommendation algorithms optimize for click-through rates, dwell time, and similar metrics, inherently favoring content that “most people like” — essentially converging toward the mean of the distribution. Big data analytics follows the same logic: the larger the sample, the more statistical conclusions gravitate toward central tendencies, with outliers filtered as noise. Large language model training pushes this pathway to its extreme: it learns the highest-frequency patterns in the corpus and generates the “most probable next token.”

All three are doing the same thing — compressing the complexity of the world through probabilistic convergence. Information theory tells us that high-probability events carry low information content, while low-probability events carry high information content. Yet recommendation algorithms, big data, and AI all systematically amplify high-probability events while suppressing low-probability ones. This technological pathway increases efficiency while simultaneously reducing information density.
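The information-theoretic asymmetry invoked here can be made concrete with Shannon's self-information measure. A minimal Python sketch (the probabilities are illustrative, not empirical):

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information in bits: I(x) = -log2 p(x)."""
    return -math.log2(p)

# A high-probability event ("what most people like") carries little
# information; a rare event carries much more.
common = self_information(0.5)      # 1.0 bit
rare = self_information(1 / 1024)   # 10.0 bits
print(common, rare)
```

A system that preferentially surfaces the high-probability event is therefore, by construction, surfacing the low-information one.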

Core Thesis

Recommendation algorithms homogenize content consumption. Big data homogenizes decision-making bases. AI homogenizes modes of thought themselves. This is a progressively deepening process — from what you see, to what you base judgments on, to how you think — everything is being pulled toward the mean.

This process is not entropy increase — it is entropy decrease. Heat death is all particles uniformly distributed: no gradients, no structure — a maximum entropy state. What AI is doing is manufacturing an extremely ordered information structure: compressing diverse human cognition onto a few high-frequency patterns, forming sharp peaks rather than flat distributions. A crude analogy: thermodynamic entropy increase is like dumping a box of colored beads on the floor — uniform but disordered. AI homogenization is like keeping only the most common color — ordered but barren. The former is chaos; the latter is impoverishment.
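The peaked-versus-flat distinction is exactly what Shannon entropy quantifies. A small sketch with made-up distributions (eight "content patterns", one dominant):

```python
import math

def shannon_entropy(probs) -> float:
    """H(p) = -sum p_i * log2(p_i), in bits; terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1 / 8] * 8           # flat distribution: the "heat death" analogue
peaked = [0.93] + [0.01] * 7    # sharp peak: the homogenized-output analogue

print(shannon_entropy(uniform))  # 3.0 bits, the maximum for 8 outcomes
print(shannon_entropy(peaked))   # well under 1 bit
```

In the text's terms: the uniform case is maximum-entropy disorder, while the peaked case is the low-entropy, "ordered but barren" state.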

A low-entropy system is fragile. When humanity’s cognitive tools all converge toward the same set of patterns, the entire system’s capacity to handle the unexpected declines. This is more dangerous than entropy increase — entropy increase is at least a natural law; a low-entropy trap is an artificial construction.

Chapter 02

Mathematical Symbolization: The Dimensional Massacre of the Physical World

Formalized mathematics is not precision — it is the collapse of physical reality

Newton’s calculus was a tool reverse-engineered from physical phenomena. Apples fell, planets orbited — he needed a language to describe “continuous change,” so he invented the method of fluxions. It was rough, unrigorous, full of intuitive leaps, but its direction was from reality toward the tool. The tool served the phenomenon.

Then mathematicians began working in reverse. Cauchy added limit definitions, Weierstrass added the ε-δ formalism, Lebesgue reconstructed integration, and measure theory axiomatized the entire framework. Each step eliminated ambiguity, closed loopholes, and made the system internally consistent. The direction of this process was from the tool toward its own convergence — aligning not with reality, but with its own logical structure.

Critical Distinction

Precision means approaching a target. Collapse means the target itself has been replaced. When Weierstrass redefined limits using ε-δ language, he did not bring calculus closer to the physical world. He expelled the physical intuition of “continuous motion” from mathematics entirely, replacing it with a system of pure quantitative relations. From that point forward, mathematics no longer needed motion, time, or physical intuition — it needed only itself.

An apple falling in the physical world involves gravitational fields, air resistance, mass distribution, thermal perturbations, quantum fluctuations — infinite dimensions of information. The formula F=ma crushes all of this into three symbols. We call this a “precise physical law,” but it is actually the residue left after a near-total dimensional massacre of reality. The residue happens to predict certain observables, so we call the massacre “abstraction.”

The deeper problem: formalized mathematics reduces physical world information to written symbols — one-dimensional information. The physical world is fields, continua, infinite dimensions occurring simultaneously in coupling. A water droplet hitting a pond involves fluid mechanics, surface tension, temperature gradients, light refraction, sound wave propagation — these dimensions exist simultaneously, entangled, inseparable. Then humans invented symbols — discrete marks arranged linearly. Whether mathematical formulas or natural language, everything written is line after line. This is thoroughgoing one-dimensionalization. This is not mere compression — compression at least implies reversible decompression — this is irreversible dimensionality reduction.
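The compression-versus-reduction distinction can be demonstrated directly: lossless compression round-trips, while projecting away a dimension cannot be undone. A minimal sketch (the sample data is purely illustrative):

```python
import zlib

# Lossless compression is reversible: the original is fully recoverable.
text = b"fluid mechanics, surface tension, temperature gradients"
assert zlib.decompress(zlib.compress(text)) == text

# Projection (dimensionality reduction) is not: distinct states collapse
# to the same value, and the lost coordinate cannot be reconstructed.
points = [(1.0, 2.0), (1.0, 5.0), (1.0, -3.0)]  # three distinct 2-D states
projected = [x for (x, y) in points]            # keep only one coordinate
print(projected)  # [1.0, 1.0, 1.0]
```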

Chapter 03

The Principle of One-Dimensionality

No genuine dimensional transcendence exists within AI systems

All of AI’s computation reduces, at bottom, to matrix multiplication. A matrix is a table of numbers. No matter how large, deeply nested, or complexly transformed this table becomes, each element is a scalar — a zero-dimensional point. These points, arranged in rows and columns, undergo linear transformations and nonlinear activations to produce another set of points. The entire process stays within the world of numbers, never leaving it.

Input (natural language) → Encode (token sequence) → Map (vector embedding) → Compute (matrix multiply) → Decode (token probability) → Output (natural language)
Human speech is one-dimensional — one word after another, a linear sequence along the time axis. Writing is one-dimensional — one character after another. A tokenizer cuts this line into fragments, assigns each a number, producing a string of integers. Embedding maps these integers into a high-dimensional vector space — this step appears to increase dimensionality, but it does not. Each dimension in the vector space is still a scalar. The so-called 768 or 4,096 dimensions are merely 768 or 4,096 numbers placed side by side — essentially a longer number sequence. Splitting one line into many parallel lines does not yield a plane; it yields a bundle of lines.
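The pipeline described above can be sketched end to end in a few lines. This is a deliberately toy model (random weights, a hypothetical 10-token vocabulary, no attention or activation functions), meant only to show that every stage is scalar arithmetic on sequences of numbers:

```python
import random

random.seed(0)
VOCAB, DIM = 10, 4  # tiny illustrative vocabulary size and embedding width

# Embedding table and one weight matrix: every entry is a single scalar.
E = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(VOCAB)]
W = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(DIM)]

def matmul_vec(v, M):
    """v (1 x DIM) times M (DIM x DIM): plain scalar multiply-adds."""
    return [sum(v[i] * M[i][j] for i in range(DIM)) for j in range(DIM)]

def forward(token_ids):
    """Token sequence -> embeddings -> linear map -> numbers per position."""
    return [matmul_vec(E[t], W) for t in token_ids]

out = forward([3, 1, 4])      # a one-dimensional sequence of integers in
print(len(out), len(out[0]))  # 3 positions, DIM scalars each, numbers out
```

Nothing in this loop ever leaves the number line: integers in, bundles of scalars throughout, scalars out.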

The Principle of One-Dimensionality

No genuine dimensional transcendence exists within AI systems; all seemingly high-dimensional operations are rearrangements and recombinations of one-dimensional information. An infinitely long line in one-dimensional space remains one-dimensional. A million-layer transformer still processes probability distributions over tokens — still numbers, still linear sequences of symbols. Increasing complexity raises resolution, not dimensionality. Slicing a line ever more finely will never produce a surface.

This implies something definitive: AI can reach the utmost within the symbolic world, but between the symbolic world and the physical world lies a principled, irreversible barrier. This is not a technical obstacle — it is an ontological rupture.

Chapter 04

The Dimensional Chasm: Human Brain vs. AI

The multidimensional concurrent processing of the human brain versus AI’s unidirectional pipeline

The human body is a massively parallel, multidimensional sensor array. Approximately 120 million photoreceptors in the retina fire simultaneously; hair cells at different positions along the cochlear basilar membrane respond to different frequencies concurrently; each square centimeter of skin contains distinct receptor types — pressure, temperature, pain, vibration — all operating independently and simultaneously. These sensory channels run in parallel with one another. The total sensory information received by the human body per second is on the order of one billion bits.

The brain contains roughly 86 billion neurons, each with an average of 7,000 synaptic connections, totaling approximately 600 trillion synapses. These synapses do not activate in a single-file line — they form a massively parallel network in three-dimensional space. Moreover, neuronal signaling is not simply “on or off”: firing frequency, spike timing, synchronous oscillation, dynamic synaptic strength changes, neurotransmitter concentration gradients, and glial cell modulation — every dimension operates simultaneously.

The brain’s output is also fully multidimensional. The motor system simultaneously controls hundreds of skeletal muscles; the autonomic nervous system regulates heart rate, blood pressure, respiration, and digestion at every moment; the endocrine system conducts hormonal cascades across thyroid, adrenal, and gonadal glands simultaneously; somatic expressions of emotion — facial micro-expressions, vocal modulations, postural adjustments — occur concurrently. Much processing never reaches external output at all — the default mode network cycles internally.

Sensory information received by the human body per second: ~10⁹ bits
Human language output per second: ~39 bits
Human information received by AI per second: 39 bits
AI computational activity when idle: 0

And AI? When there is no input, AI does not exist. Not resting, not thinking in the background — it literally does not exist. It is a pure stimulus-response function: receive an input, compute an output, then return to nothingness. What happens between input and output is equally singular — forward propagation. Tokens enter, pass through dozens of layers of matrix multiplication and activation functions, and tokens exit. Irreversible, non-backtrackable, incapable of mid-course strategy change.

AI’s entire topology is a unidirectional pipeline. No loops, no background processes, no persistent state, no spontaneous activity. Input, mapping, output, dissolution. That is all.
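The stimulus-response structure can be contrasted with persistent state in a few lines. Both functions here are illustrative toys, not models of any real system:

```python
# A pure stimulus-response function: no memory, nothing persists between calls.
def ai_like(prompt: str) -> str:
    return prompt.upper()  # deterministic input-to-output mapping, then nothing

# Contrast: a process that carries internal state between stimuli.
def make_stateful():
    count = 0
    def respond(prompt: str) -> str:
        nonlocal count
        count += 1  # state survives across calls
        return f"{prompt} (input #{count})"
    return respond

assert ai_like("hello") == ai_like("hello") == "HELLO"  # no history, ever
respond = make_stateful()
print(respond("hello"))  # hello (input #1)
print(respond("hello"))  # hello (input #2)
```

The first function is the paper's "input, mapping, output, dissolution"; the second is the minimal thing it is not.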

Chapter 05

From RLHF to RLVR: The Amplification of Computer Science Weights

The specialization pathway of reinforcement training and the systematic suppression of long-tail information

The evolution from RLHF to RLVR (Reinforcement Learning from Verifiable Rewards) appears on the surface as an upgrade in alignment methodology. In substance, it is an increase in computer science knowledge weights. RLVR requires outputs to be programmatically verifiable, which inherently favors domains with standard answers — mathematical reasoning, code generation, logical deduction. These are all formalizable symbolic operations.

The introduction of Chain-of-Thought (CoT) further reinforces this tendency. It trains models to unfold reasoning along standardized steps — steps that derive from a unified, specialized educational system. All long-tail information — knowledge that cannot be measured by standardized tests, that cannot be formally verified — is systematically suppressed during the RL phase.
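The core mechanic of RLVR, rewarding only outputs that pass a programmatic check, can be sketched as follows. This is a hypothetical toy verifier (the `solve` function name is an assumption of the sketch; real pipelines sandbox execution and run full test suites):

```python
def verifiable_reward(candidate_code: str, test_input, expected_output) -> float:
    """Binary reward: 1.0 if the generated code passes the check, else 0.0."""
    namespace = {}
    try:
        exec(candidate_code, namespace)           # run model-generated code
        result = namespace["solve"](test_input)   # call its entry point
        return 1.0 if result == expected_output else 0.0
    except Exception:
        return 0.0  # crashes and syntax errors earn nothing

good = "def solve(x):\n    return x * 2"
bad = "def solve(x):\n    return x + 2"
print(verifiable_reward(good, 21, 42))  # 1.0
print(verifiable_reward(bad, 21, 42))   # 0.0
```

The structural bias is visible in the signature itself: only tasks whose correctness reduces to an equality check can earn reward at all.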

Weight Displacement

Information weighted toward mathematical formalism is amplified; information weighted toward physical intuition is lost. AI grows ever stronger at verifiable symbolic operations and ever weaker at unverifiable physical intuition. Benchmark scores continue to rise while the connection to the physical world continues to fracture.

Empirical data supports this assessment. An analysis of 41.3 million academic papers published between 1980 and 2025 found that scientists who use AI tools publish more papers, accumulate more citations, and reach leadership roles faster — yet the overall scope of scientific exploration is contracting: AI-intensive research covers fewer topics and clusters around the same data-rich problems. Google DeepMind’s Demis Hassabis put it directly in a 2026 interview: “Can AI actually come up with a new hypothesis? So far, these systems can’t do that.”

Chapter 06

The Contact Surface of Information: AI’s Ontological Boundary

AI is a recombination engine for known information, not a discovery tool for the unknown

The entirety of AI’s capability is bounded by the convex hull of its training data. Within this hull, it can perform extraordinarily efficient interpolation — matching, arranging, combining, aligning. But it cannot extrapolate. It cannot step one foot beyond the hull.
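The interpolation-versus-extrapolation distinction is easiest to see in one dimension, where the convex hull of the data is simply its range. A minimal sketch:

```python
def linear_interpolate(xs, ys, x):
    """Interpolate inside the data's range; refuse to extrapolate beyond it."""
    if not xs[0] <= x <= xs[-1]:
        raise ValueError("outside the convex hull of the training data")
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

xs, ys = [0.0, 1.0, 2.0], [0.0, 10.0, 20.0]
print(linear_interpolate(xs, ys, 1.5))  # 15.0: inside the hull, well defined
# linear_interpolate(xs, ys, 5.0)       # raises: no basis for a value here
```

Inside the hull, every query is a weighted blend of what was already observed; outside it, the function has literally nothing to blend.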

Where does new information come from? From the contact surface with the physical world. A chemist smells an odor that should not be present in a reaction. A physicist notices a tiny anomaly in instrument readings. A doctor feels unexpected hardness during palpation. An engineer hears an extra frequency in a running machine. These are all signals captured by the human multidimensional sensory system at the contact surface with the physical world — signals that exist in no database, that did not exist in the human knowledge system until the moment of capture.

Terminal Judgment

AI has no contact surface. It has no hands to touch, no nose to smell, no ears to hear the faint, anomalous, not-yet-encoded signals of the physical world. It stands forever downstream in the information chain, processing what others have sent from upstream. AI’s efficiency is allocative efficiency — efficiently aligning supply and demand within known information — not the discovery efficiency of extracting unknown information from the physical world.

This yields a simple, inescapable conclusion: the efficiency of digitization cannot overcome the friction of the physical world. AI accelerates ever faster on the one-dimensional track it can verify, but between that track and real-world productivity lies a dimensional chasm. In symbol-to-symbol mapping tasks — text generation, code writing, protein structure prediction — AI excels. The moment a task involves crossing from symbol to substance — actually synthesizing a predicted molecule, turning code into organizational transformation — friction appears, and AI is entirely powerless against it.

Chapter 07

The Productivity Paradox and Consumer-Side Rejection

AIGC is stuck in the uncanny valley of “high-probability ineffective production”

Empirical data from 2026 presents a stark paradox: AI’s production-side efficiency gains are being offset by consumption-side rejection.

90% of CEOs report no AI impact on operations (NBER 2026)
95% of enterprise AI projects yield zero ROI (MIT NANDA 2025)
50% of consumers prefer brands that avoid GenAI (Gartner, March 2026)
26% of consumers prefer AI-creator content in 2025, down from 60% in 2023

Apollo chief economist Torsten Slok’s assessment was most direct: “AI is everywhere except in the incoming macroeconomic data — you don’t see AI in the employment data, productivity data, or inflation data.” The San Francisco Federal Reserve’s March 2026 report confirmed that most macro-level productivity studies find limited evidence of a significant AI effect.

Consumer-side rejection has been even more intense. “AI slop” became Merriam-Webster’s 2025 Word of the Year. Related online discussions grew over 200%, with 82% of sentiment-categorized mentions being negative. McDonald’s Netherlands was forced to pull its AI Christmas ad; Coca-Cola faced fierce criticism for AI holiday ads two years running. iHeartMedia launched its “guaranteed human” tagline, with 90% of listeners indicating they want human-made media. “100% human-made” is becoming a new premium label — the digital equivalent of “organic certification.”

Uncanny Valley Effect

AIGC is currently stuck in a peculiar position — sufficiently human-like to trigger authenticity expectations, yet insufficiently human-like to avoid triggering rejection. This is structurally isomorphic to the uncanny valley effect in humanoid robotics. AI content’s “high-probability ineffective production” is the product of this uncanny valley phase. Whether it can cross the valley is an open long-term technical question, but it is firmly stuck there now.

Interpreted through the Principle of One-Dimensionality: what AI does on the production side is compress multidimensional human creative expression into the statistically optimal one-dimensional output. Production efficiency does improve. But consumers are multidimensional perception systems — simultaneously processing semantic information, emotional authenticity, creative intent, social signals, and aesthetic texture. When these dimensions are stripped away, what consumers perceive is “hollowness.” The multidimensional human perceptual system detects the dimensions lost in AI output more precisely than any benchmark test ever could.

Chapter 08

AGI: The Metaverse of the 2020s

An ontological argument for the impossibility of artificial general intelligence

The metaverse fraud was packaging something engineering could not achieve as imminent. The AGI fraud is packaging something ontologically impossible as imminent.

The history of AGI predictions is itself a history of failure. Herbert Simon claimed in 1965 that “machines will be capable, within twenty years, of doing any work a man can do” — 60 years later, still unrealized. Geoffrey Hinton predicted in 2016 that radiologists would be obsolete within five to ten years, by 2021–2026 — hospitals still employ thousands of radiologists in 2026. Japan’s Fifth Generation Computer project, launched in 1982, set a ten-year timeline for machines capable of casual conversation — it failed completely.

Within this paper’s analytical framework, AGI is impossible not because of insufficient compute or data, but because: human intelligence runs on a multidimensional concurrent physical sensory system with persistent background processes, real-time contact surfaces with the physical world, and intertwined chemical and electrical internal states. AI is a stateless function mapping one-dimensional input to one-dimensional output. To claim the latter can evolve into the former is to claim a line can evolve into a universe.

Dimension | Metaverse | AGI
Core Promise | Replace physical space with screens | Replace physical intelligence with symbolic computation
Failure Cause | Digitization cannot cross the spatial-dimensional chasm | 1D computation cannot cross the cognitive-dimensional chasm
Feedback Loop | Short — put on a headset and the poor experience is immediate | Long — rising benchmark scores mask the fundamental problem
Bubble Scale | Tens of billions of dollars | Trillions of dollars
Falsifiability | High — user experience is directly perceptible | Low — score improvements conceal dimensional absence

Several key assumptions quietly collapsed in 2025. AGI did not arrive. Larger models did not eliminate hallucinations. The belief that “one more scale jump will change everything” lost credibility. OpenAI projects losses of $11 billion by 2026. Benchmark’s Bill Gurley warned publicly in March 2026 that “one day we’re going to have an AI reset.” Benchmark scores can rise indefinitely; the dimensional chasm will not shrink by a single millimeter.

Chapter 09

A Unified View of the Data-Electronification Pathway

Computers, networks, smartphones, mobile internet, AI — one continuous line

Looking back across the entire history: computers electronified arithmetic; networks electronified communication; smartphones made the entry point portable; mobile internet made connectivity permanent; AI electronified pattern matching. Each step has done the same thing — extracting the symbolizable portion of human activity, encoding it as electronic signals, and accelerating processing with silicon hardware.

1940s: Computers (arithmetic)
1990s: Internet (communication)
2007: Smartphones (portability)
2010s: Mobile Web (permanence)
2020s: AI (pattern matching)

Every leap has been accompanied by the same narrative structure — claiming the new technology will bridge the chasm between symbol and substance. Each time the promise was partially fulfilled (within the range that symbolization can cover); each time the greatest promise went unfulfilled. AI is the latest node on this pathway, and the one pushing the narrative to its limit — because it attempts to process the most difficult-to-symbolize aspects of human activity: cognition, judgment, creativity.

The pathway itself will not break. The portion of human activity that can be symbolized is far from exhausted. AI’s current bubble exists not because the data-electronification pathway is wrong, but because some claim this pathway can reach destinations that symbolization cannot reach. What is wrong is not the road, but the promise about its destination.

Chapter 10

Positioning AI’s Boundaries: A Compass, Not an Epitaph

From blind men and an elephant to precision navigation

The purpose of this paper is not “AI uselessness theory” but rather “AI positioning theory.” The industry’s current problem is not that AI does not work — it is that AI has been placed in the wrong position. Selling a superb tool for symbolic space as general intelligence for the physical world is a positioning error. Positioning errors lead to investment errors, investment errors lead to bubbles, and bubble collapses lead to collateral damage against AI’s genuine value.

Positioning Principle

AI’s reliability is directly proportional to the rule-closure of its operating environment and inversely proportional to the complexity of its physical contact surface. GUI operations (rules fully closed, zero physical contact) represent the highest-reliability extreme; mechanical operations in open environments (rules open, full physical contact) represent the lowest-reliability extreme. All application scenarios fall somewhere between these two poles. When evaluating any AI application, ask two questions first: How closed are the rules? How complex is the physical contact surface? The answers determine AI’s reliability ceiling in that scenario.
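The two-question evaluation can be phrased as a toy heuristic. The function below is purely illustrative: the 0-to-1 scales, the multiplicative form, and the example scores are assumptions of this sketch, not anything the paper derives:

```python
def reliability_ceiling(rule_closure: float, contact_complexity: float) -> float:
    """Illustrative heuristic for the positioning principle:
    reliability rises with rule closure (0..1) and falls with the
    complexity of the physical contact surface (0..1)."""
    return rule_closure * (1.0 - contact_complexity)

# GUI automation: rules fully closed, zero physical contact -> near the top.
print(reliability_ceiling(rule_closure=0.95, contact_complexity=0.0))
# Open-environment robotics: open rules, full physical contact -> near zero.
print(reliability_ceiling(rule_closure=0.2, contact_complexity=0.9))
```

The point is not the numbers but the ordering: any scenario can be placed between the two poles by scoring its answers to the two questions.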

Drawing the boundary line matters because it prevents collateral damage. When the bubble bursts, if no one has clearly defined what AI can and cannot do, the market pendulum will swing from blind optimism directly to blind pessimism. Boundary research anchors the pendulum at the correct position before it reaches the other extreme.

AI’s current development is still at the blind-men-and-the-elephant stage — a handicraft era. All future development pathways must be built upon thorough research into, and definition of, AI’s boundaries. This is not restriction — it is navigation. A map without boundaries is not freedom — it is being lost.

Conclusion

The Abyss of Normal Regression and the Anchor of Humanism

Starting from the normal distribution convergence of recommendation algorithms, passing through information homogenization, the dimensional massacre of the physical world by mathematical symbols, the Principle of One-Dimensionality, AI’s input-output unidirectional pipeline structure, the contrastive multidimensional concurrency of the human brain, the specialization bias from RLHF to RLVR, and the rift between production and consumption, this paper derives AI’s ontological boundary:

AI is the ultimate form of humanity’s data-electronification tradition, inheriting all of that tradition’s capabilities along with its most fundamental blind spot. It is a superb recombination engine for known information — not a passage to the unknown territories of the physical world. Its efficiency is allocative, not discovery-based. Its ceiling lies not in compute or data, but in dimensionality.

This is not a negation of AI. It is a form of respect for AI — respecting what it truly excels at, rather than pushing it toward destinations it cannot, in principle, reach. Humanism is not the rejection of tools but the clear-eyed knowledge of where a tool’s boundaries lie, followed by the use of that tool in its rightful place.

This paper itself is a product of such human–AI collaboration — humans providing cross-dimensional intuitive judgment and original direction, AI providing information retrieval, alignment, and the processing efficiency of articulation. This is precisely the correct posture for AI usage: within the symbolic boundary, let it be a tool; beyond the symbolic boundary, let it be human.

References
  1. Doshi, A. R. & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28).
  2. Anderson, B. R., Shah, J. H. & Kreminski, M. (2024). Homogenization Effects of Large Language Models on Human Creative Ideation. Creativity and Cognition (C&C ’24), ACM.
  3. Shumailov, I. et al. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755–759.
  4. Moon et al. (2024). Homogenizing Effect of LLMs on Creative Diversity. ScienceDirect / PsyArXiv.
  5. Evans, J. et al. (2026). AI Boosts Research Careers, but Flattens Scientific Discovery. IEEE Spectrum, March 2026.
  6. NBER (2026). AI Productivity Paradox Study — 6,000 CEO/CFO survey across US, UK, Germany, Australia.
  7. San Francisco Federal Reserve (2026). The AI Moment? Possibilities, Productivity, and Policy. FRBSF Economic Letter.
  8. MIT NANDA (2025). GenAI Enterprise ROI Report — 95% of organizations achieving zero return.
  9. Gartner (2026). Marketing Survey: 50% of Consumers Prefer Brands That Avoid GenAI.
  10. Billion Dollar Boy (2025). Muse Two: The Real Impact of AI on the Creator Economy.
  11. Sourati, Z. et al. (2025). Homogenizing Effect of LLMs on Cognitive Diversity. arXiv:2508.01491.
  12. Goodeye Labs (2025). 2025 Year in Review for LLM Evaluation: When the Scorecard Broke.
  13. World Economic Forum (2026). Anatomy of an AI Reckoning.
  14. Yale Insights (2025). This Is How the AI Bubble Bursts.
  15. Faros AI (2025). The AI Productivity Paradox Report — telemetry from 10,000+ developers.
  16. Zhang, S. et al. (2025). Generative AI Meets Open-Ended Survey Responses. Sociological Methods & Research.
  17. Science News (2026). Have we entered a new age of AI-enabled scientific discovery?
  18. CNN Business (2025). Why 2026 could be the year of anti-AI marketing.
  19. arXiv (2025). Future of AI Models: A Computational Perspective on Model Collapse. arXiv preprint.
  20. ICLR (2025). Strong Model Collapse — Conference Paper.

