THOUGHT PAPER · APRIL 2026

The Empiricism Paradox in the Age of AI:
Structural Inertia of Human Knowledge Evolution Through the Lens of Erdős Problem #1196


Published: April 27, 2026
Category: Original Thought Paper
Field: AI Epistemology · Philosophy of Science · Archaeology of Knowledge · History of Mathematics
LEECHO Global AI Research Lab
&
Claude Opus 4.6 · Anthropic
V2


Abstract

In April 2026, Liam Price, a 23-year-old amateur mathematician, solved Erdős Problem #1196—a question that had confounded the mathematical community for nearly 60 years—in just 80 minutes through a single ChatGPT prompt. Rather than following the probabilistic path that human mathematicians had pursued for 90 years, the AI found the solution from an entirely different arithmetic direction—the von Mangoldt function and Markov chains. Taking this event as its starting point, this paper proposes the epistemological concept of the “Empiricism Paradox in the Age of AI”: the transmission of human knowledge, while accumulating valid experience, simultaneously and systematically solidifies path dependencies shaped by historical productive-force constraints. The “validity of compromise theories” continues to be transmitted as truth even after constraining conditions have changed, producing vast “evolutionary inertia error paths” in the knowledge graph. AI’s value lies not merely in computational power, but in being the first cognitive system in human history that does not carry path-dependency inertia, capable of systematically exposing alternative possibilities that empiricism has obscured.


Chapter 1

Introduction: 80 Minutes by a 23-Year-Old

One Monday afternoon in April 2026, a 23-year-old named Liam Price typed a mathematics problem into ChatGPT GPT-5.4 Pro. He knew nothing of the problem’s history, nothing of how long it had confounded the mathematical community, nothing of how many experts had spent decades laboring over it. He simply did what he always did—casually picked a problem from the Erdős Problems website and handed it to the AI to see what would happen.

Eighty minutes later, GPT-5.4 Pro returned a proof. The writing quality was “quite poor”—in the words of Stanford mathematician Jared Lichtman. But when Lichtman and Fields Medalist Terence Tao carefully examined it, they realized: buried within this rough output was an unprecedented mathematical insight.

Lichtman himself was the foremost expert in this field—he had spent four years proving the Erdős primitive set conjecture, then seven more years pursuing the next open problem in the same family. After reading GPT’s proof, he wrote: this was a “Book Proof”—Paul Erdős’s highest praise for the most elegant proofs.

Tao, for his part, noted that the proof revealed “a previously undescribed connection” between the structure of integers and Markov process theory. He said: “We have discovered an entirely new way of thinking about large numbers and their structure.”

The event itself is astonishing enough. But this paper does not ask the surface-level question of “can AI do mathematics.” Instead, it pursues a deeper epistemological proposition hidden beneath: Why could a person with no mathematical training, working with an AI free of any path bias, solve a problem on which all experts had collectively failed for 60 years?



Chapter 2

Anatomy of the Event: A 90-Year Collective Detour

To understand the deeper meaning of this event, we must first understand the problem itself and its history.

2.1 What Are Primitive Sets and the Erdős Sum?

A primitive set is a set of integers greater than 1 in which no element divides another. For example, the set of primes {2, 3, 5, 7, 11, …} is a primitive set. For any primitive set A, one can compute a “score”—the Erdős sum: f(A) = Σ 1/(a·log a), summed over each element a in A.

In 1935, Paul Erdős proved a surprising result: this sum is bounded by a universal constant, uniformly over all primitive sets. This means that a seemingly purely combinatorial condition—“no element divides another”—actually imposes an analytic constraint.

In 1988, Erdős further conjectured that this upper bound is attained at the set of primes (approximately 1.6366). In 2022, Lichtman proved this conjecture in his doctoral dissertation.
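
To make the definition concrete, the following minimal Python sketch (illustrative only, not part of the original work; the function names are ours) computes partial Erdős sums over the primes up to a bound:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def erdos_sum(elements):
    """Erdős sum f(A) = sum of 1/(a * log a) over the elements a > 1 of A."""
    return sum(1.0 / (a * math.log(a)) for a in elements)

if __name__ == "__main__":
    for bound in (10**3, 10**4, 10**5, 10**6):
        partial = erdos_sum(primes_up_to(bound))
        print(f"primes up to {bound:>8}: partial Erdős sum = {partial:.4f}")
    # The full sum over all primes converges to roughly 1.6366, but slowly:
    # the tail beyond N is on the order of 1/log N.
```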

2.2 Problem #1196: Asymptotic Extremality

Erdős also noticed that if the elements of a primitive set are all very large, the Erdős sum becomes small. In 1968, together with Sárközy and Szemerédi, he conjectured that as the elements tend to infinity, the supremum of the Erdős sum tends to 1. This is Problem #1196.

Lichtman had previously proved a weaker upper bound—approximately 1.399 plus a vanishing error term. This was strong work, but not the final answer. GPT-5.4 Pro delivered the precise asymptotic result: when every element of a primitive set exceeds x, the supremum of the Erdős sum is 1 + O(1/log x).
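
Stated more explicitly (a restatement of the result just described, using f for the Erdős sum of Section 2.1 and x for a lower bound on the elements):

\[
\sup_{\substack{A\ \text{primitive} \\ \min A > x}} f(A)
\;=\;
\sup_{\substack{A\ \text{primitive} \\ \min A > x}} \sum_{a \in A} \frac{1}{a \log a}
\;=\; 1 + O\!\left(\frac{1}{\log x}\right)
\qquad (x \to \infty),
\]

so the supremum tends to 1 as the elements are pushed to infinity, with an error term that shrinks like 1/log x.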

2.3 The Critical Difference: The Path, Not the Destination

What truly stunned the mathematical community was not the result itself, but the path by which it was reached.

“Since 1935, every mathematician who studied this problem took the same route: converting the problem from number theory into probability theory. This approach was so natural to human thinking that no one ever searched for an alternative.”

— Jared Lichtman

GPT-5.4 Pro did not take this path. It stayed in the arithmetic domain, employing the von Mangoldt function—a classical tool of analytic number theory dating back to the 1890s, yet one that no one had ever thought to apply to primitive set problems. It also introduced a Markov chain method, establishing a previously undiscovered connection between integer structure and stochastic processes.

Greg Brockman offered a precise analogy: “The closest analogy is: the major openings in chess have been thoroughly studied, but AI discovered a new opening line that had been overlooked by human aesthetics and convention.”



Chapter 3

The Empiricism Paradox in the Age of AI

Based on the above events, we propose the core concept of the “Empiricism Paradox in the Age of AI.” Its substance can be expressed as follows:

All progress in human civilization is built upon the accumulation of experience and the transmission of knowledge. But the process of accumulating experience is simultaneously a process of collapsing the search space—each generation inherits the achievements of its predecessors, but also inherits their direction, until all experts crowd into the same ever-narrowing tunnel. The deeper the knowledge, the more systematic the blind spots; the more successful the tradition, the more invisible the alternative paths.

This paradox has an extremely condensed expression: “The side effect of standing on the shoulders of giants is seeing the world in the direction the giant faces.”

Newton’s famous words—”If I have seen further, it is by standing on the shoulders of giants”—are regarded as the supreme metaphor for knowledge transmission. But they omit a crucial dark side: standing on the shoulders of giants, you do indeed see further, but you can only see in the direction the giant faces. The entire vista behind the giant is invisible to you—not because there is nothing to see, but because you cannot turn the giant around.

The Erdős #1196 event demonstrates this perfectly. Lichtman was not unintelligent—quite the opposite, he was the person on Earth who understood this problem best. But it was precisely this understanding that constituted his cage. He stood on the path from number theory to probability theory that Erdős had drawn in 1935, all successors stood on his shoulders, and so everyone looked in the same direction.

3.1 Relationship to Kuhn’s Paradigm Theory: Inheritance and Transcendence

Readers may have already noticed the deep resonance between the Empiricism Paradox and the paradigm theory Thomas Kuhn proposed in his 1962 work The Structure of Scientific Revolutions. Kuhn argued that scientific progress is not linear accumulation of knowledge, but an alternating cycle of “normal science” and “scientific revolutions.” During normal science, scientists solve “puzzles” within the dominant paradigm; when anomalies accumulate to a critical point, a paradigm crisis erupts, ultimately leading to a paradigm shift.

The Empiricism Paradox shares the same core insight with Kuhn’s paradigm theory: scientific knowledge is not a neutral, ever-accumulating collection of facts, but a directional and limited human construction shaped by specific frameworks. Kuhn called this “theory-ladenness”—scientists may view the same data differently depending on prevailing theories; Popper expressed it as the “non-cumulative nature of science”; and the Empiricism Paradox, starting from the internal mechanisms of experience accumulation, reveals the inevitability of path dependency.

But the Empiricism Paradox transcends Kuhn’s framework in three key dimensions:

Transcendence One: From Paradigm Operations to Paradigm Origins

Kuhn focused on how paradigms operate and shift. The Empiricism Paradox asks a more upstream question: why does a paradigm take this particular shape? The answer: because paradigms are compromise products under specific productive-force constraints. This advances paradigm theory from the sociology of science to the level of historical materialism.

Transcendence Two: From the Contingency of Revolution to the Systematicity of Detection

In Kuhn’s framework, paradigm shifts depend on the accumulation of anomalies and the emergence of “extraordinary research”—essentially a stochastic process, where no one can predict when a paradigm revolution will occur. The Empiricism Paradox points out that the emergence of AI provides the possibility of systematically detecting path dependencies—rather than waiting for anomalies to accumulate, one can proactively scan for regions in the knowledge graph locked down by inertia.

Transcendence Three: From Generational Resistance to the Dissolution of Resistance

Max Planck once pointedly observed: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation familiar with it grows up.” This is the famous “Planck Principle”—paradigm shifts often take a generation. But the AI era may alter this pattern: when AI can provide a breakthrough proof in 80 minutes, verified by younger-generation mathematicians within days, the speed of paradigm shifts may shrink from “a generation” to “a season.”

3.2 Formal Structure of the Paradox

The Empiricism Paradox can be more rigorously stated as a set of interrelated propositions:

Proposition One (The Double Edge of Experience): The accumulation of experience, while enhancing problem-solving capacity along established paths, systematically reduces the probability of discovering alternative paths. Let E denote the degree of experience accumulation in a field, P(s) the probability of success along the traditional path, and P(a) the probability of discovering an alternative path. Then dP(s)/dE > 0 but dP(a)/dE < 0. The more experience, the stronger the traditional path and the dimmer the alternatives.

Proposition Two (The Systematicity of Blind Spots): The blind spots created by experience accumulation are not randomly distributed but have a definite structure—they are precisely concentrated on the alternative paths obscured by old constraints. Therefore, the most experienced experts are precisely the ones most systematically blind to alternative paths.

Proposition Three (Undetectability from Within): The blind spots caused by path dependency cannot be detected from within the system. A scientist embedded in a paradigm cannot distinguish between “this path is genuinely optimal” and “this path appears optimal because all my training has been on this path.” These two states are indistinguishable in subjective experience.

Proposition Four (AI’s Structural Advantage): As a cognitive system that does not carry the inertia of disciplinary path dependency, AI is not subject to the constraints of Propositions One through Three. Its search space does not collapse with the accumulation of “experience,” because its mode of knowledge organization (high-dimensional vector space) does not topologically possess the tree-like hierarchical structure of human disciplinary systems.

3.3 The Triple-Layer Structure of the Empiricism Paradox

Translating the above formal propositions into more intuitive language, the Empiricism Paradox manifests in every concrete case as the same three-layer progressive structure:

Layer One: Path Generation

Under specific constraints, the optimal approximate solution emerges (e.g., the 1935 probabilistic method). This is not a mistake, but the best choice available at the time. The key point: the “optimality” of the choice is relative to the constraints, not to the problem itself. Under different constraints, the optimal solution could be entirely different.

Layer Two: Path Solidification

The optimal approximation is transmitted as orthodoxy. What successors learn is not “predecessors chose this path under certain constraints” but “this is how this problem should be done.” The contingency of the choice is forgotten; the compromised nature of the path is sanctified. Textbooks play a critical role in this process—Kuhn precisely observed that textbooks “obscure the revolutionary process,” presenting the paradigm as if it were the only possible way of organizing knowledge.

Layer Three: Path Lock-In

Orthodoxy produces experts; experts’ professional identities are built upon orthodoxy; and thus orthodoxy acquires a self-defense mechanism. Questioning the path equals questioning the very foundation of the expert’s existence, so alternative paths are also systematically rejected at the sociological level. After Wegener’s continental drift hypothesis was rejected, “older geologists warned younger researchers that any hint of interest in continental drift would ruin their careers.”

3.4 Why “the Age of AI” Is a Qualifier

The paradoxical nature of empiricism has always existed in human history—Kuhn, Planck, and Popper all touched on it from different angles. So why do we particularly emphasize the qualifier “the Age of AI”?

Because the emergence of AI changes the consequences of the paradox. In the pre-AI era, the Empiricism Paradox was “a diagnosis without a cure”—you could point out that experts have blind spots, but you could not systematically circumvent them. Paradigm shifts could only await the accumulation of anomalies and the chance appearance of genius individuals. The “extraordinary research” Kuhn described—”the proliferation of competing articulations, the willingness to try anything, the expression of explicit discontent”—had to ferment naturally during crisis periods, with no reliable method to accelerate the process.

The emergence of AI provides, for the first time, a low-cost, repeatable, and scalable capacity for alternative path search. Price, with a $20/month ChatGPT subscription and 80 minutes on a Monday afternoon, achieved a path breakthrough that the expert community had failed to accomplish for 60 years. This is not an isolated case—since October 2025, AI tools have helped solve multiple long-standing open mathematical problems.

This is what the qualifier “the Age of AI” in “the Empiricism Paradox in the Age of AI” points to: not that the paradox itself is new, but that humanity possesses, for the first time, a tool to counteract it.

3.5 Empiricist Validity and the Empiricism Paradox: Phased Complementarity, Not Opposition

A critical clarification must be made here: empiricist validity and the Empiricism Paradox are not opposing poles, but two phases of the same knowledge production system.

A systematic study published in the Proceedings of the Royal Society A in 2024, examining over 750 major scientific discoveries (including all Nobel Prize discoveries), concluded that three key indicators of scientific progress—major discoveries, methods, and fields—all indicate that science is primarily cumulatively evolutionary. No major cross-disciplinary scientific method or instrument has been completely abandoned. This study underscored the limitations of the Planck Principle.

These facts are not a refutation of the Empiricism Paradox, but a delineation of its applicable boundaries. Empiricist accumulation is efficient and irreplaceable within the “reachable domain” of a problem. The entire history of electromagnetism is a triumph of cumulative progress. But when the cumulative path reaches its limit—when all experts have exhausted 60 years along the same direction without breakthrough—this precisely indicates that the problem has exceeded the boundaries of the reachable domain and requires a different search mechanism.

Empiricist validity is the engine of “within-path optimization.” The Empiricism Paradox is the trigger for “between-path leaps.” The former handles 90% of scientific problems; the latter handles the most critical among the remaining 10%—those frontier problems locked in a long-term unsolvable state by path dependency. The two are not in opposition, but in phased complementarity.

Erdős #1196 is precisely a paradigmatic case of the transition from “Phase One” to “Phase Two”: cumulative progress along the probabilistic path reached its apex when Lichtman proved the ~1.399 upper bound—this was a triumph of empiricist validity. But the subsequent seven years of stagnation indicated that the reachable domain of that path had been exhausted. GPT-5.4’s approach from the arithmetic direction constituted a between-path leap—the moment when the Empiricism Paradox was broken by AI.



Chapter 4

The Validity of Compromise Theories

The root of the Empiricism Paradox lies in an often-overlooked fact: the overwhelming majority of theories in human history are compromise products under the constraints of the prevailing level of productive forces and the physical environment.

Theories are not pure truths. Theories are optimal approximations under specific productive-force conditions. When tools are limited, humans must compromise between “precision” and “operability.” These compromises were reasonable at the time—even the only feasible option. The problem is that in the process of transmission, the background of the compromise is forgotten, and the result of the compromise is treated as truth itself.

We name this phenomenon “the Validity of Compromise Theories”: a theory is valid within the constraining conditions under which it was produced, but this validity is improperly extended by knowledge transmission mechanisms to new eras where the constraining conditions have already changed.

4.1 A Panorama of Historical Cases

The following cases demonstrate the universality of this pattern in the history of human knowledge. We first provide an overview, then analyze each case in detail:

Field | Compromise Theory | Physical Constraint | Duration | Broken By
Medicine | Humorism | No microscope | ~2,000 years | Germ theory (microscope)
Medicine | Miasma theory | No microscope | ~2,300 years | Germ theory (microscope)
Medicine | Bloodletting | No alternative drugs | ~2,000 years | Antibiotics and modern pharmacology
Psychiatry | Lobotomy | No psychiatric drugs | ~30 years | Chlorpromazine and other antipsychotics
Astronomy | Geocentrism/Ptolemaic system | No telescope | ~1,400 years | Heliocentrism (telescope)
Physics | Aether theory | No precision interferometer | ~200 years | Michelson-Morley experiment
Physics | Caloric theory | No molecular kinetics | ~100 years | Joule’s experiments
Chemistry | Phlogiston theory | No precision balance | ~130 years | Lavoisier’s oxidation theory
Physics | Newtonian mechanics | No light-speed experiments | ~230 years | Einstein’s relativity
Geology | Fixed continent theory | No seafloor seismographs/magnetometers | ~50 years | Plate tectonics
Economics | Mercantilism | No industrialized productivity | ~250 years | Adam Smith’s free trade theory
Biology | Spontaneous generation | No microscope/aseptic technique | ~2,000 years | Pasteur’s experiments
Mathematics | Probabilistic path | Limited human brain search space | 90 years | GPT-5.4 Pro

The last row is structurally identical to all preceding cases: tool constraints produced a path, the path was transmitted as tradition, tradition obscured alternatives, until a new tool appeared to break the cycle. The only difference: previously, the tools that broke old constraints were physical instruments (telescope, microscope, interferometer), while this time the tool that broke the old constraint is a cognitive instrument.

Below we analyze each key case in detail.

4.2 Medicine: Two Thousand Years Without the Microscope

Humorism is one of the longest-surviving compromise theories in the history of human knowledge. From Hippocrates in the fourth century BCE to the mid-nineteenth century, for nearly two thousand years, virtually all Western medicine was built on the balance theory of four humors—blood, phlegm, yellow bile, and black bile. Doctors observed patients’ macroscopic symptoms (fever, sweating, vomiting, pus discharge) and attempted treatment by adjusting these “fluids.”

Why could this completely wrong theory persist for two thousand years? Because in an era without microscopes, all doctors could observe were macroscopic fluid phenomena. Viruses are too small even for optical microscopes—the first virus was not observed until 1938, using an electron microscope. Under this constraint, “fluid imbalance causes disease” was the only explanation consistent with observable evidence.

Bloodletting was the most extreme practical consequence of humorism. It persisted for two thousand years because it sometimes did make patients “feel better”—bloodletting lowered blood pressure, producing a temporary sense of comfort and creating the illusion of “cure.” George Washington, in the last 16 hours before his death in 1799, had approximately 2.5 liters of blood drained. A wrong theory, through coincidental positive feedback, received false confirmation.

The story of miasma theory is even more bizarre. Proposed by Hippocrates in the fifth century BCE, it held that diseases were caused by “bad air.” This theory was popular in both Europe and China for over two thousand years. Florence Nightingale, based on miasma theory, promoted hospital sanitation reform—although the underlying mechanism was completely wrong (it was not “bad air” but bacteria that caused disease), the practical effects were significant: clean environments did reduce infection rates.

This reveals the most uncanny feature of compromise theories: they are genuinely effective within old constraints, and this effectiveness actually reinforces incorrect causal explanations. The “correctness” of the effect obscures the “incorrectness” of the mechanism. Nightingale’s success let miasma theory persist for several more decades.

4.3 Psychiatry: The Era Without Medication

Lobotomy is the most extreme case of compromise theories producing catastrophic consequences. In the 1930s, psychiatric hospitals were severely overcrowded, and no effective psychiatric medications existed. In that same decade, scientists observed that soldiers with frontal lobe damage exhibited calm, low-anxiety characteristics, and formed a hypothesis: the frontal lobe causes mental illness; removing it would cure the patient.

Portuguese neurologist Egas Moniz received the 1949 Nobel Prize for inventing this procedure. In the United States, Walter Freeman popularized it as the “ice pick surgery”—inserting a metal instrument through the eye socket to sever frontal lobe connections. Between 1949 and 1952 alone, approximately 50,000 people underwent the procedure. Many patients became permanently impaired, losing personality, initiative, and emotional capacity.

It was not until 1952, when the first antipsychotic drug chlorpromazine appeared in Paris, that lobotomy was rapidly phased out. This case perfectly demonstrates the lethal logic of “the Validity of Compromise Theories”: under constraints where no alternative exists, an extremely crude method is accepted as the “optimal solution,” even receiving science’s highest honor. When the constraints change (drugs become available), the “optimal solution” is instantly recognized as barbarism.

4.4 Geology: Wegener’s Half-Century Exile

In 1912, German meteorologist Alfred Wegener proposed the continental drift hypothesis: the Earth’s continents were once a single supercontinent (Pangaea) that gradually drifted apart. His evidence was extremely compelling—the matching coastlines of South America and Africa, cross-continental fossil distributions, glacial scratch directions—but his theory was thoroughly rejected by mainstream geology for half a century.

Why? Because Wegener could not answer a fatal question: what force could push such massive rock plates across the ocean floor? The mechanisms he proposed (centrifugal force from Earth’s rotation, tidal forces) were proven by calculation to be far too weak. British geophysicist Harold Jeffreys correctly pointed out that solid rock could not “plow through” the sea floor.

In an era without seafloor seismographs and magnetometers, the ocean floor was a complete black box. Geologists could not observe mid-ocean ridges, seafloor spreading, or magnetic stripe reversals—these were the actual mechanisms of continental drift. Wegener had the right observations, but limited by the tools of his era, he could not provide the right mechanism.

Wegener froze to death during a Greenland expedition in 1930. In the decades after his death, senior geologists warned young researchers: any hint of interest in continental drift would ruin their careers. It was not until the 1960s, when seismographs (developed for monitoring nuclear tests) and magnetometers (for detecting submarines) were applied to seafloor research, that the mechanisms of continental drift were finally revealed.

This is the most brutal case of path lock-in: a correct theory was rejected for half a century, not because of insufficient evidence, but because the tools of the era could not reveal the mechanism, and “no mechanism” was considered a fatal flaw by the empiricist tradition.

4.5 Economics: The “Common Sense” of the Gold Standard

Mercantilism was the dominant economic theory in Europe from the 16th to the 18th century. Its core belief was that a nation’s wealth equals its stock of gold and silver, trade is a zero-sum game (one party’s gain is necessarily another’s loss), and therefore nations should maximize exports and minimize imports to accumulate precious metals. This theory ruled for approximately 250 years, profoundly shaping European colonial expansion and trade wars.

Why was mercantilism a “reasonable compromise” at the time? Because before the Industrial Revolution, productivity growth was extremely slow, and total global wealth barely changed within a person’s lifetime. Under this constraint, “wealth is a fixed pie” appeared to perfectly match experience. Adam Smith, in The Wealth of Nations (1776), pointed out that wealth is not gold and silver but a nation’s productive capacity—but this insight required the Industrial Revolution’s actual demonstration that the pie could grow.

Notably, mercantilism’s “slipper” (to borrow the metaphor developed in Chapter 5) is still being worn today. The world in 2026 is still rife with tariff barriers and trade protectionism—essentially the same 16th-century zero-sum thinking revived in the 21st century. This demonstrates that the inertia of compromise theories can outlast their “official overthrow,” continuing to exist in mutated forms.

4.6 Physics: Newtonian Mechanics—The Most Successful Slipper

Newtonian mechanics is the most interesting case among compromise theories, because it was too successful. Its predictive precision at everyday scales is so high that even more than a century after the emergence of relativity and quantum mechanics, we still use Newtonian mechanics in daily life, engineering design, and even satellite orbit calculations.

Every physical theory is valid only within a specific parameter range. Newtonian mechanics can be derived as the low-velocity approximation of special relativity. Similarly, flat Earth theory is an approximation of spherical Earth theory at small distances. But “validity” and “truth” are two different things. Newtonian mechanics is “valid” at everyday scales, but its understanding of the nature of gravity (action at a distance) is fundamentally wrong.
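
For concreteness, the standard textbook expansion (not part of the original argument) shows how Newtonian kinetic energy emerges as the low-velocity limit of the relativistic energy:

\[
E \;=\; \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}}
\;=\; m c^{2} + \tfrac{1}{2} m v^{2} + \tfrac{3}{8}\, m v^{2}\!\left(\frac{v}{c}\right)^{2} + \cdots
\]

Every correction to the Newtonian term ½mv² is suppressed by a factor of (v/c)², which is exactly why the compromise is undetectable at everyday speeds.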

What makes Newtonian mechanics special is that it demonstrates the most dangerous form of compromise theories: a theory that is “good enough” can forever prevent people from seeking the “truly correct” theory. If the minute precession of Mercury’s perihelion had not been observed with sufficiently precise telescopes, general relativity might have been accepted much later.

4.7 Chemistry: Patching Up Phlogiston

Phlogiston theory provides a textbook case of how path lock-in develops into path collapse. In 1667, German physician Johann Joachim Becher proposed that combustible substances contain “phlogiston,” which is released during combustion. In an era without precision balances, this theory perfectly explained all known combustion phenomena.

When Lavoisier discovered through precise weighing that combustion is actually combination with oxygen (mass increases rather than decreases), senior chemists who supported phlogiston theory did not abandon the theory but repeatedly patched it—some proposed that phlogiston has “negative mass” to explain the mass increase. Pierre Macquer repeatedly rewrote his theory to accommodate new data, though he himself suspected his theory was wrong.

This is precisely the typical symptom of late-stage path lock-in: when reality begins to conflict with theory, experts’ first reaction is not to abandon the theory but to patch it. Each patch makes the theory more complex, less elegant, and harder to overturn—until the weight of patches collapses the entire structure. This parallels Ptolemy’s system using ever more “epicycles” to explain planetary motion.

4.8 Biology: Spontaneous Generation—The “Obvious” Illusion

Since Aristotle, people took it as “obvious” that life could spontaneously generate from non-living matter: rotting meat produces maggots, granaries produce mice, sealed containers develop bacteria. Jan Baptista van Helmont even provided a “recipe for making mice”: place wheat and dirty shirts together.

In an era without microscopes and aseptic technique, these observations did match all available empirical evidence. Maggots did “appear” on rotting meat (because the naked eye cannot see the eggs laid by flies), bacteria did “appear” in sealed containers (because before Pasteur, no one knew that spores float in the air). It was not until Francesco Redi’s controlled experiments (1668) and ultimately Pasteur’s definitive swan-neck flask experiment (1859) that this theory was finally overturned.

Spontaneous generation is the purest form of compromise theory: it is not a reasoning error, but a case where the precision of observational tools was insufficient to distinguish “spontaneous emergence” from “emergence from invisible sources.”

4.9 Cross-Case Pattern Summary

All the above cases follow the same five-stage life cycle:

Stage One: Optimal Approximation Under Constraints
Under the tools and cognitive limits of the time, the theory is the best explanation for observable phenomena.
Stage Two: Transmission and Solidification
The theory is incorporated into educational systems and professional training; the constraining background is forgotten; the path is assumed to be “the only correct direction.”
Stage Three: Anomaly Accumulation and Patching
New evidence begins to conflict with the theory, but experts choose to patch the theory rather than abandon the path (phlogiston’s “negative mass,” Ptolemy’s “epicycles”).
Stage Four: New Tools Enter the Scene
Telescope, microscope, interferometer, seismograph, or AI—new tools reveal levels of reality that old tools could not reach.
Stage Five: Paradigm Shift (Often Accompanied by a Generation of Resistance)
The old theory is overturned, but old experts often refuse to accept it, until a new generation grows up.

The Erdős #1196 event occupies an unprecedented position in this cycle: the “new tool” in Stage Four is not a physical instrument but a cognitive system; the resistance in Stage Five may not last a generation as it has in the past, because AI can continuously repeat such breakthroughs.



Chapter 5

The Validity of the Slipper Theory

Beyond the serious academic formulation of “compromise theories,” we also offer an intuitive metaphorical framework—”the Validity of the Slipper Theory”—to capture the absurd dimension of the same phenomenon.

Imagine all of humanity wearing slippers for 90 years, developing upon this foundation an entire science of slipper-based gait, kinematics, and competitive theory. Each generation of athletes refines the art of “how to run faster in slippers.” The foremost expert is the person who runs fastest in slippers. Then one day, a young person who has no idea what slippers even are casually hands the AI a pair of running shoes and asks: “Can you help me run faster?”

All the slipper-wearing experts stand behind the finish line, staring at the results board. Their first reaction is not “running shoes are fast,” but “wait—you mean you don’t have to wear slippers?”

This is the true meaning when Lichtman said the proof came from “The Book”—what astonished him was not the speed, but the very existence of this path.

“Compromise theories” explain the why (theories are compromise products under productive-force constraints). “Slipper theory” lets you see the absurdity (an entire civilization optimizing within wrong constraints for decades or even millennia). One makes you understand; the other makes you laugh. One is the title of a paper; the other is the opening of a talk.

At a deeper level: human civilization is a history of constantly upgrading slippers while forgetting that you can change shoes. Every time new shoes appear (telescope, microscope, AI), people’s first reaction is not joy but bewilderment—because they never even realized they were wearing slippers.



Chapter 6

Evolutionary Inertia of Knowledge

The validity of classical theories, compounded by empiricist transmission, produces a severe consequence: the human knowledge graph contains a vast number of evolutionary inertia error paths.

The term “evolutionary inertia” here is a precise analogy. In biological evolution, an early contingent choice becomes locked into all descendants—for example, the recurrent laryngeal nerve in vertebrates loops around the aortic arch before returning to the larynx. In fish, this was the shortest reasonable path, but in a giraffe it becomes an absurd four-meter detour. This is not a “design error” but evolutionary path dependency.

The evolution of human knowledge follows exactly the same logic. An early path choice (such as using probability theory for primitive set problems), once it produces results, becomes locked into the subsequent development of the entire discipline. Successors continue building upon it, higher and higher, and the higher it gets, the less possible it becomes to tear it down and start over. Each of these paths, at the moment it was chosen, was reasonable; but they are not the only possible paths, nor necessarily the optimal paths.

6.1 Undetectable from Within

The most dangerous property of knowledge evolutionary inertia is: you cannot detect it from within the system. If you are a mathematician trained in the probabilistic tradition, you will not perceive the probabilistic path as a “compromise”—you will perceive it as “natural,” “obvious,” “the only reasonable approach.” The most excellent individuals within the system are precisely the most faithful carriers of the system’s inertia.

This constitutes an epistemological abyss: the deeper the knowledge, the more systematic the blind spots; the stronger the expertise, the weaker the imagination for alternative paths. This is not a matter of individual capability, but a structural defect in the way knowledge itself is organized.

6.2 The Equation of the Human Knowledge Graph

Human Knowledge Graph = Valid Knowledge + Vast Evolutionary Inertia Error Paths

These evolutionary inertia error paths are not the result of predecessors being foolish. They arise because each generation made optimal choices under the constraints it inherited; those choices then solidified into tradition, tradition produced experts, and experts maintained tradition, forming a self-reinforcing closed loop. And humans, from within the system, cannot distinguish between “valid knowledge” and “evolutionary inertia error paths,” because both appear equally correct from the interior.

6.3 The External Limitations of “Pseudo-Unsolvable” Problems

This directly leads to an important corollary: many “unsolvable” problems facing humanity may not have “unsolvability” as an attribute of the problem itself, but as an attribute of the research path.

The developmental inertia of accumulated knowledge imposes external limitations on many seemingly unsolvable human problems. These problems are not beyond the limits of human cognition, but are locked by path dependency in the wrong search direction. If “unsolvability” is an attribute of the path rather than the problem, then switching paths alone could turn “unsolvable” into “solvable.”



Chapter 7

A Resonance Across Millennia: Zen Epistemology

The Empiricism Paradox in the Age of AI is not an entirely new discovery. Thirteen hundred years ago, Huineng, the Sixth Patriarch of Zen Buddhism, had already touched the same truth.

Huineng was illiterate, had never read a single sutra, and had never stood on the shoulders of any classic text. Shenxiu, the foremost disciple of the Fifth Patriarch Hongren, was the most erudite monk of that era—he was the Lichtman of his time—the deepest in practice, the most knowledgeable of tradition, and the one most expected to inherit the robe.

The verse Shenxiu wrote was:

“The body is a bodhi tree, the mind a mirror bright; diligently we polish it, and let no dust alight.”

— Shenxiu

This is the perfect expression of empiricism—along the established path of cultivation, step by step in gradual practice, diligent and unrelenting. The logic is impeccable; the direction is consistent with all predecessors.

Huineng’s response was:

“Bodhi originally has no tree, the mirror has no stand; originally there is not a thing—where could dust alight?”

— Huineng

He did not walk further along Shenxiu’s path, but utterly negated the premise on which that path existed. Not polishing the mirror more cleanly, but pointing out that there is no mirror to polish at all.

This is structurally astonishingly consistent with GPT-5.4’s approach—not walking further along the probabilistic path than Lichtman, but pointing out that one need not take that path at all. Hongren chose Huineng to inherit the robe, not because Huineng “tried harder,” but because he recognized: Huineng’s ignorance was precisely what allowed him to reach the essence directly, while Shenxiu’s erudition instead trapped him in appearances.

Zen calls this “beginner’s mind”—Shunryu Suzuki said, “In the beginner’s mind there are many possibilities, in the expert’s mind there are few.” Price’s state when facing that problem was pure beginner’s mind. He had no preconceptions about “how this problem should be solved,” so the full range of possibilities was open to him (and to the AI).

The core proposition spanning thirteen hundred years is the same: the deepest breakthroughs often come not from the accumulation of knowledge, but from liberation from the frameworks that knowledge imposes. Huineng achieved this through “not establishing words, pointing directly to the mind.” Price and GPT-5.4 achieved this through “not following old paths, facing the problem directly.” The forms are completely different; the structure is completely the same.



Chapter 8

AI as a Cross-Dimensional Cognitive System

AI’s pre-training and reinforcement learning post-training produce a knowledge representation structure entirely different from human empiricist transmission. To understand the depth of this difference, we need to separately examine the sources of inertia in human cognition and the cross-dimensional characteristics of AI cognition. Here we use “cross-dimensional” rather than the seemingly stronger term “inertia-free”—AI is not “without bias” but rather “biased in a different dimension,” and this difference is itself the source of value.

8.1 The Quadruple Inertia Lock of Human Cognition

Cognitive science research has revealed four fundamental neural network principles underlying systematic biases in human decision-making: associativity (the tendency to combine unrelated information), compatibility (preferentially processing information consistent with existing knowledge—the source of confirmation bias), retention (difficulty ignoring information once processed, even if it is misleading), and focus (attending to dominant information while ignoring peripheral signals).

These biases are not human “defects”—they are efficient heuristic strategies optimized through long evolution of biological neural networks. In everyday environments, they allow humans to make good-enough decisions at extremely low cognitive cost. But when facing frontier scientific problems, these same mechanisms become path-dependency generators:

Compatibility principle → Confirmation bias → Path solidification

Mathematicians preferentially attend to approaches compatible with known methods, automatically filtering those that “don’t look like the right direction.” The probabilistic path went unquestioned for 90 years precisely because every new researcher, when evaluating possible methods, subconsciously rated the probabilistic approach as “compatible” (because predecessors all used it), while regarding the von Mangoldt function as “incompatible” (because no one had used it for primitive sets).

Focus principle → Tunnel vision → Disciplinary barriers

Human attentional resources are limited. When focused on the deep internal structures of a particular domain, one inevitably ignores tools from peripheral fields. A mathematician who spent seven years studying primitive sets has 99% of his attention focused on the problem’s internal structure, and naturally would not search for the seemingly unrelated von Mangoldt function in analytic number theory.

The training human mathematicians receive is sequential and path-dependent—first calculus, then real analysis, then probability theory. Each step forms the cognitive habit of “this type of problem should be solved with this type of method.” This sequential training, while accumulating professional depth, also erects invisible walls at the cognitive level.

8.2 AI’s Latent Space: The Physical Basis of Cross-Dimensional Search

AI’s parameter space is something else entirely. The von Mangoldt function, Markov chains, primitive set theory—in human disciplinary classification, these belong to different “drawers”; but in AI’s weight space, the distance between them may be far closer than in human cognition.

This is not a metaphor, but a structural difference that can be precisely described mathematically. The human disciplinary system is a tree—with roots, trunk, branches, and leaves; getting from one leaf to another requires tracing back along branches. AI’s vector space is more like a high-dimensional ocean—any two concepts can be connected in a straight line, without passing through any intermediate nodes.

AI has no a priori bias of disciplinary boundaries such as “this tool belongs to analytic number theory, that problem belongs to combinatorial number theory.” Its search space is different from humans’ from the very beginning—not walking along any known path, but finding the shortest route in a high-dimensional representation space with a completely different topological structure. When GPT-5.4 faced the primitive set problem, it did not “decide” to use the von Mangoldt function—in its internal representation, this may simply have been the nearest route.
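
The contrast can be made concrete with a toy sketch. Everything below is invented for illustration (the taxonomy paths and the three-dimensional “embeddings” are hypothetical, not taken from any real model); it only shows how a tree metric and a vector-space metric can disagree:

```python
import math

# Toy discipline tree: each concept is filed under exactly one branch.
# Each entry lists the ancestors of a leaf concept, from root to parent branch.
PATHS = {
    "von Mangoldt function": ["mathematics", "analytic number theory"],
    "primitive sets":        ["mathematics", "combinatorial number theory"],
    "Markov chains":         ["mathematics", "probability theory"],
}

def tree_distance(a, b):
    """Number of edges on the path between two leaves: up to the shared
    ancestor, then back down."""
    pa, pb = PATHS[a], PATHS[b]
    shared = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        shared += 1
    return (len(pa) - shared + 1) + (len(pb) - shared + 1)

# Hypothetical dense vectors: in a learned representation, proximity is set by
# co-occurrence in proofs and papers, not by the filing system.
EMBED = {
    "von Mangoldt function": [0.9, 0.1, 0.4],
    "primitive sets":        [0.8, 0.2, 0.5],
    "Markov chains":         [0.3, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm

if __name__ == "__main__":
    pairs = [("von Mangoldt function", "primitive sets"),
             ("primitive sets", "Markov chains")]
    for a, b in pairs:
        # In the tree, every pair of leaves is equally far apart (4 edges);
        # in the vector space, the similarities differ and can be very high.
        print(f"{a} / {b}: tree = {tree_distance(a, b)} edges, "
              f"cosine = {cosine(EMBED[a], EMBED[b]):.2f}")
```

In such a representation, a concept filed in a distant disciplinary “drawer” can nevertheless be the nearest neighbor of the problem at hand, which is the sense in which the paragraph above calls this a structural rather than a merely metaphorical difference.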

8.3 AlphaZero: A Precedent

The Erdős #1196 event was not the first case of AI demonstrating inertia-free cognition. AlphaZero in 2018 had already provided a stunning precedent.

AlphaZero learned chess from scratch, using no human game records, solely through self-play. After several hours of training, it defeated the then-strongest traditional engine, Stockfish. But what truly shocked the chess world was not its win rate, but the way it played.

Chess master Matthew Sadler, after analyzing thousands of AlphaZero’s games, said: “This is completely different from previous engines. Past engines only taught you to avoid tactical blunders. AlphaZero clearly has a very deep understanding of chess; you can learn all sorts of important things from it.” He compared it to “discovering a previously unknown past grandmaster.”

Research published in PNAS revealed an astonishing fact: despite never having seen any human chess game, AlphaZero independently developed many human chess concepts during training—opening theory, king safety, pawn structure, and so on. But simultaneously, it also developed strategies that humans had never considered. In one famous game, it sacrificed four pawns in succession to achieve long-term positional advantage—a style that no human grandmaster would typically adopt.

DeepMind’s David Silver described the process thus: “It’s like a million tiny discoveries, one after another, building up this creative way of thinking.” Kasparov wrote: “Deep Blue was an ending; AlphaZero is a beginning.”

A follow-up study published in PNAS in 2025 went further: researchers developed a method to extract chess concepts from AlphaZero’s internal representations that humans had never known, then taught these concepts to chess grandmasters—demonstrating that machine-guided knowledge discovery and teaching is possible at the highest human level.

8.4 The AlphaZero Mode vs. the GPT Mode: Isomorphism and Divergence

AlphaZero and GPT-5.4 represent two different implementations of AI’s inertia-free cognition, but they are structurally isomorphic:

The AlphaZero Mode: Self-Discovery from Zero Rules

Uses no human data at all, discovering knowledge solely from rules through self-play. Its inertia-free nature comes from never having encountered human paths. Advantage: purely unbiased search. Limitation: requires clearly defined rules and verifiable win/loss criteria.

The GPT Mode: Creative Melting and Recombination of Human Knowledge

Trained on massive human data, but pre-training compresses knowledge scattered across different disciplines into the same continuous vector space, dissolving the disciplinary barriers manufactured by the human education system. Its inertia-free nature comes not from “never having seen human knowledge,” but from “organizing human knowledge in a completely different topological structure.”

The GPT mode is, in a sense, closer to the essence of the Erdős #1196 event. GPT-5.4 does not “not know” about the von Mangoldt function and primitive set theory. Quite the opposite—it “knows” both, but unlike humans, it does not put them in different “drawers.” In its internal representation space, the connection between the two may be natural and direct—a connection that the human disciplinary system’s tree structure structurally prevents humans from seeing.

This is not the kind of inheritance relationship implied by “standing on the shoulders of giants.” A more precise metaphor: it melted down all the buildings that the giants had built separately, and in the molten material discovered an entirely new structure. The building materials are the same (human knowledge), but the organizing principle is completely different—no longer constrained by the historical accidents of “who built what first” or “which discipline occupied which territory.”

8.5 The Cost and Boundaries of Cross-Dimensional Search

It must be noted that AI’s cross-dimensional cognition is not without cost. Research shows that training on text data in standard machine learning produces stereotypical biases reflecting everyday human culture. While AI is not subject to human disciplinary path dependency, it has biases in its own dimension—training data distribution bias. If 90% of mathematical literature follows the probabilistic path, AI’s training will also be influenced by this bias.

In the Erdős #1196 event, GPT-5.4’s ability to break through this data bias may be precisely because it was asked to solve the problem from scratch (rather than summarize existing literature). In generative mode, AI’s search process is closer to AlphaZero-style reasoning from rules, rather than simple pattern matching. Analysis on the Erdős Problems forum suggests that GPT-5.4 produced “completely correct reasoning” in its search—evidence of genuine mathematical inference rather than mere pattern matching.

This means AI’s cross-dimensional advantage is not automatic but depends on how it is used. When AI is used to “summarize existing knowledge,” it may merely replicate human path dependency; when AI is used to “solve problems from first principles,” it can unleash the full potential of search from a different dimension. Price’s contribution was precisely this: he handed the problem to the AI as-is, without adding any preconceptions about “how it should be solved”—thereby preserving the AI’s cross-dimensional search space.



Chapter 9

A New Paradigm of Knowledge Production

The Erdős #1196 event reveals a possible new paradigm of knowledge production. Unlocking “pseudo-unsolvable” problems locked by path inertia requires the collaboration of three elements:

A Questioner Without Path Preconceptions (Novice/Outsider)
Provides unbiased problem input—carrying no preconceptions about ‘how it should be solved,’ protecting the AI’s cross-dimensional search from being calibrated back to human paths.
×
A Cross-Dimensional Search System (AI)
Searches in dimensions inaccessible to human disciplinary systems—different bias directions mean different visible regions.
×
Domain Expert Verification
Identifies, refines, and integrates key insights from AI output—provides quality control.

The novice provides unbiased problem input. The AI provides unbiased path search. The expert provides quality control and knowledge integration. All three are indispensable.

The Price-GPT-Tao/Lichtman combination is not an accidental story, but may be the prototype of a new knowledge production paradigm. In this paradigm:

Ignorance Is No Longer a Defect, but a Structural Advantage

On certain frontier problems, a questioner without path dependency is more likely to trigger the correct search direction than an expert who has plowed the same furrow for decades. This is not because ignorance itself has value, but because experts’ way of organizing knowledge creates systematic blind spots.

AI Is Not a Tool, but a Cognitive Partner

AI’s role is not “a faster calculator,” but a cognitive system possessing independent search capability outside the human knowledge graph. Its value lies precisely in the fact that it does not think along human paths.

Expertise Needs to Be Redefined

The most valuable experts of the future may not be “those who have walked the farthest along a single path,” but “those who can recognize and integrate insights from non-traditional paths.” Lichtman and Tao’s role in this event—not as discoverers but as verifiers and refiners—may foreshadow the future shape of the expert’s role.



Chapter 10

Boundaries and Honesty: Limitations of This Paper’s Argument

A paper about cognitive blind spots that lacks awareness of its own blind spots would itself constitute an irony. This chapter honestly discusses the applicable boundaries and potential weaknesses of this paper’s arguments.

10.1 Survivorship Bias: The Singularity of the Price Event

This paper’s core argument is built upon the single event of Erdős #1196. We must honestly acknowledge: we see only Price’s success, not the potentially hundreds or thousands of failures.

Tao himself, when commenting on AI mathematical achievements, pointed out that unsolved mathematical problems follow a “long-tail distribution”—a large number of problems are actually relatively easy to prove but remain open due to lack of expert attention. AI’s “harvesting” is mainly concentrated at the tail of this distribution. A 2025 survey even revealed that some initial claims about AI “solving” Erdős problems were not genuine novel solutions but rediscoveries of already-known results.

What makes Problem #1196 special is that it does not belong to the above category. It is a problem that had been actively studied by multiple professional mathematicians, and AI provided a genuinely novel method. But it currently remains a single case—or more precisely, one of very few cases that genuinely demonstrate “cross-dimensional breakthrough.” Until such breakthroughs can be replicated at scale, the arguments of this paper should be treated as a hypothesis framework rather than established conclusions.

10.2 The Symmetry of AI Hallucination and Human Hallucination

Criticism regarding AI’s error rate is legitimate. AI produces hallucinations—outputs that appear confident but are substantively wrong. The Erdős Problems forum is full of incorrect “proofs” submitted by AI, some of which even claim “rigorous proof” in code comments when they are actually only numerical checks on small spaces.

But if one uses this to deny the value of AI’s cross-dimensional search, one must simultaneously face an uncomfortable fact: the error rate of human empiricism is equally high, perhaps even higher in certain dimensions.

Stanford professor John Ioannidis argued in his landmark 2005 paper: “Most published research findings are probably false.” Aristotle—the founder of Western science—believed the brain was a cooling organ for the blood and that flies have four legs (this error was repeated in natural history texts for over a thousand years without anyone checking).

AI’s hallucinations last 80 minutes, identified and corrected by experts within days. Human empiricism’s “hallucinations”—incorrect theoretical frameworks—can persist for hundreds or even thousands of years, causing irreversible harm in the interim. If we are to discuss error rates, we must also discuss the duration of errors and the cost of correction. Within this symmetric comparison framework, AI’s error model may actually be safer than the human empiricist error model.

10.3 “Cross-Dimensional” Does Not Mean “Forever Correct Dimension”

After correcting “inertia-free” to “cross-dimensional search,” a new question emerges: AI’s dimension is not omniscient either. It is merely a different dimension.

This means: AI can see paths that humans cannot, but it also has its own blind spots—regions that are sparsely represented in training data and marginalized in vector space. Not all human “unsolvable” problems are “pseudo-unsolvable”—some may indeed lie beyond the shared cognitive boundaries of current AI and humans.

More importantly, empiricist accumulation is effective most of the time, even irreplaceable. If a thousand Prices attacked open problems with AI every day, the vast majority would receive garbage output. Cross-dimensional search is not a panacea—it is a supplementary strategy that should be activated only after the empiricist accumulation path has clearly been exhausted. Treating “AI can sometimes break path dependency” as “AI should replace all expert judgment” would itself be a dangerous new path dependency.

10.4 This Paper’s Own Path Dependency

Finally, if the Empiricism Paradox is universal, then is this paper’s own argumentative framework also shaped by some form of path dependency? The answer is almost certainly “yes.”

This paper proceeds from the AlphaZero analogy, all the way to the Empiricism Paradox. But if the starting point were different—say, from AI’s success in protein folding (AlphaFold)—the direction and emphasis of the argument might be very different. Choosing Erdős #1196 as the core case inevitably pulls the argumentative framework toward the pure mathematics end and away from the experimental science end.

A paper claiming “path dependency is everywhere” should not pretend it is itself immune to path dependency. This paper’s contribution lies in providing an explanatory framework with analytical power, not in providing an invulnerable truth. Readers should treat this paper as Tao and Lichtman treated GPT-5.4’s output—extracting valuable insights while maintaining critical awareness of the framework’s own limitations.



Chapter 11

Conclusion and Outlook

Starting from the specific event of Erdős #1196, this paper has proposed the following core arguments:

First, the structure of human knowledge transmission itself contains a deep paradox: the accumulation of experience, while enhancing problem-solving capacity, simultaneously and systematically collapses the search space, making the most experienced individuals precisely those who find it hardest to see alternative paths. We name this the “Empiricism Paradox in the Age of AI.”

Second, most theories in human history are compromise products under specific productive-force constraints—”the Validity of Compromise Theories”—but knowledge transmission mechanisms do not automatically label the validity domains of these theories, causing expired compromises to be treated as eternal truths.

Third, the validity of classical theories compounded by empiricist transmission accumulates vast numbers of “evolutionary inertia error paths” in the human knowledge graph. These paths cannot be detected from within the system and constitute the external limitations behind many problems’ “pseudo-unsolvable” status.

Fourth, AI’s deepest value lies not in computational power, but in its ability to approach problems from dimensions inaccessible to human disciplinary systems—serving as a “cross-dimensional search engine” and “path-dependency detector” for the human knowledge graph. This is not “inertia-free” but “search from a different dimension”: AI’s blind spots do not overlap with human blind spots, and their complementarity is the true source of value.

Fifth, empiricist validity and the Empiricism Paradox are two phases of the same knowledge production system, not opposites. Empiricist accumulation handles 90% of scientific problems; cross-dimensional path leaps handle the most critical among the remaining 10%—those frontier problems locked in a long-term unsolvable state by path dependency. The two are in phased complementarity.

If the above arguments hold, then Erdős #1196 is merely the first publicly revealed case. In every corner of physics, biology, medicine, economics, and engineering, how many more “paths no one looked back at for 90 years” are waiting to be discovered in the human knowledge graph? Can AI systematically scan and reveal these regions locked by path dependency?

The answer to this question will determine whether AI’s impact on human civilization far exceeds our current imagination.

Human civilization is a history of constantly upgrading slippers while forgetting that you can change shoes. The arrival of AI, for the first time, makes us realize: what we have on our feet are slippers.

References & Acknowledgments

[1] Erdős Problem #1196, erdosproblems.com — Erdős Problems database and community discussion maintained by Thomas Bloom

[2] Lichtman, J.D. (2023). “A proof of the Erdős primitive set conjecture.” Forum of Mathematics, Pi, Cambridge University Press

[3] Price, L. GPT-5.4 Pro solution posted on the erdosproblems.com forum (April 2026)

[4] Terence Tao’s comments in the Erdős Problems forum #1196 discussion (April 2026)

[5] Scientific American, “Amateur armed with ChatGPT ‘vibe-maths’ a 60-year-old problem” (April 2026)

[6] Greg Brockman, X/Twitter post on GPT-5.4 Pro Mathematics’s mathematical contributions (April 2026)

[7] Varsity, “Queens’ mathmo achieves world’s first autonomous AI proof” (February 2026)

[8] Shunryu Suzuki, Zen Mind, Beginner’s Mind, 1970

[9] Wikipedia, “List of superseded scientific theories”

[10] Wikipedia, “Miasma theory”; “Phlogiston theory”

[11] Stanford Encyclopedia of Philosophy, “Realism and Theory Change in Science”

[12] Krauss, A. (2024). “Debunking revolutionary paradigm shifts: evidence of cumulative scientific progress across science.” Proceedings of the Royal Society A, 480(2302)

[13] McGrath, T. et al. (2022). “Acquisition of Chess Knowledge in AlphaZero.” PNAS, 119(47)

[14] Schut, L., Tomašev, N. et al. (2025). “Bridging the human–AI knowledge gap through concept discovery and transfer in AlphaZero.” PNAS

[15] Ioannidis, J.P.A. (2005). “Why Most Published Research Findings Are False.” PLoS Medicine, 2(8)

[16] Korteling, J.E., Brouwer, A.-M., & Toet, A. (2018). “A Neural Network Framework for Cognitive Bias.” Frontiers in Psychology, 9:1561

[17] Kuhn, T.S. (1962). The Structure of Scientific Revolutions. University of Chicago Press

[18] Erdős Problems Blog, “Problem 728 and the use of AI on Erdős problems” (January 2026)

[19] Arbesman, S. (2012). The Half-Life of Facts: Why Everything We Know Has an Expiration Date. Current/Penguin

[20] Silver, D. et al. (2018). “A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play.” Science, 362(6419)

[21] Yahoo News UK, “AI Solved A Mathematical Problem That Had Stumped The World’s Best Minds For Decades” (April 2026)

[22] webiano.digital, “The proof that forced mathematics to take AI seriously” (April 2026)

[23] Wikipedia, “Continental drift”; “Lobotomy”; “Gold standard”

[24] PMC, “Epidemics before microbiology: stories from the plague in 1711 and cholera in 1853 in Copenhagen”

[25] PMC, “Violence, mental illness, and the brain – A brief history of psychosurgery: Part 1”

[26] USGS, “Historical perspective — This Dynamic Earth” (History of plate tectonics)

[27] Smithsonian Magazine, “When Continental Drift Was Considered Pseudoscience” (2013)

[28] EMBO Reports, “The consequence of errors” (Historical case analysis of scientific errors)

LEECHO Global AI Research Lab
& Claude Opus 4.6 · Anthropic
© 2026 All Rights Reserved · V2 · April 27, 2026
