ORIGINAL THOUGHT PAPER · MAY 2026

Incremental Knowledge and Stock Knowledge

The Information Division of Labor in Human Civilization and the Foundational Logic of Knowledge Production



Published May 3, 2026
Category Original Thought Paper
Fields Epistemology · Information Economics · Civilizational Structure · Philosophy of Science
이조글로벌인공지능연구소
LEECHO Global AI Research Lab
&
Claude Opus 4.6 · Anthropic
V2

Abstract

This paper proposes a foundational framework for understanding the operational structure of human civilization: dividing all human knowledge activity into two fundamentally distinct tiers—Incremental Knowledge Production (the creation of genuinely new information) and Stock Knowledge Reuse (the redistribution of existing information). Incremental knowledge is the engine of civilizational progress—the ex nihilo generation of new theories, discoveries, and methods. Stock knowledge is the gearwork of civilizational operation—the learning, dissemination, and execution of already-known information. This paper argues that throughout all of human history, the proportion of the population engaged in incremental knowledge production has never significantly exceeded 1%, while 99% of human activity is essentially the reuse of stock information. This structure is not accidental but is determined by the essential properties of knowledge production itself: high trial-and-error costs, low predictability, and strong positive externalities. The paper further introduces the concept of “tacit increment” to delineate the permeable boundary between the two categories, and analyzes the distinct mechanisms through which the industrial capital era and the financial capital era have acted upon this structure—revealing the historical trajectory by which incremental knowledge producers have degraded from “discoverer-as-monetizer” to “replaceable technical labor.” This framework serves as the theoretical foundation for the companion paper on AI’s impact on incremental knowledge.

Section I

Definitions: What Is Incremental Knowledge, and What Is Stock Knowledge
Defining the Fundamental Dichotomy

Before discussing any structural question about knowledge, we must first establish a clear set of definitions. The terms “incremental knowledge” and “stock knowledge” as used in this paper are not categories from traditional knowledge management theory, but rather a functional classification based on information entropy change.

Incremental knowledge refers to the output of activities that add previously nonexistent information to humanity’s collective information set. This includes: new scientific theories (e.g., relativity, quantum mechanics), new technological inventions (e.g., the transistor, CRISPR), new methodologies (e.g., the double-blind experiment, novel application paradigms for Bayesian inference), newly discovered causal relationships (e.g., the link between Helicobacter pylori and gastric ulcers), and new conceptual frameworks (such as the incremental-stock dichotomy proposed in this paper itself). The core criterion for incremental knowledge is: before it was produced, humanity’s information set did not contain this information.

Stock knowledge refers to activities involving the learning, dissemination, execution, and application of knowledge that already exists within humanity’s information set. This includes: education (transmitting known knowledge to new individuals), engineering implementation (converting known technical solutions into physical products), skills training (enabling individuals to master known operational methods), information dissemination (news, publishing, teaching), and the vast majority of everyday labor. The core characteristic of stock knowledge activity is: it does not increase the total volume of humanity’s information set, but rather alters the distribution of existing information among individuals and groups.

A farmer cultivating land, a worker operating a machine tool, a soldier executing tactics, a teacher lecturing on the laws of physics, a programmer writing applications using known frameworks—these are all acts of stock knowledge reuse. Proposing a new law of physics, inventing a new machining method, designing a new strategic paradigm—only these constitute the production of incremental knowledge.

It must be particularly emphasized that this classification is not a value judgment on human activities, but a structural description of the direction of information flow. Stock reuse is a necessary condition for civilizational operation—without it, incremental knowledge could not be socialized or generate practical utility. But stock reuse itself does not produce new information; it is the redistribution of information, not its production.
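The functional classification above can be made concrete with a toy model. The sketch below is illustrative only (the set elements and function names are invented, not drawn from this paper), but it captures the defining criterion: increment production grows humanity's information set, while stock reuse changes only its distribution among individuals.

```python
# Toy model (illustrative only; all items and names are invented):
# civilization's explicit knowledge as a set of information items,
# with each individual holding a subset of it.

civilization = {"crop rotation", "Pythagorean theorem"}  # humanity's information set
learner = {"crop rotation"}                              # one individual's knowledge

def incremental_production(item):
    """Adds previously nonexistent information: the set itself grows."""
    assert item not in civilization, "not an increment if already known"
    civilization.add(item)

def stock_reuse(individual, item):
    """Redistributes existing information: the set's size is unchanged."""
    assert item in civilization, "cannot reuse what does not exist"
    individual.add(item)

before = len(civilization)
stock_reuse(learner, "Pythagorean theorem")  # teaching changes the distribution...
assert len(civilization) == before           # ...but not the total information

incremental_production("germ theory")        # discovery grows the set itself
assert len(civilization) == before + 1
```

The asymmetry between the two operations is the whole point: teaching can be repeated indefinitely without changing the set's size, while each genuine increment changes it exactly once.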

1.1 The Hierarchical Structure of Incremental Knowledge

Incremental knowledge itself is not homogeneous. It exists on a continuous spectrum ranging from the foundational to the applied. At one end of this spectrum lie paradigm-level increments—discoveries like Einstein’s relativity or Darwin’s theory of evolution that alter the fundamental framework through which humanity understands the world. At the other end lie combinatorial-level increments—cases where an engineer combines two known technologies into a new product, creating a previously nonexistent solution without altering the underlying theoretical framework. Between these two extremes lies a vast body of advance-level increments—discoveries of new facts, new relationships, and new applications within existing paradigms.

Increment Tier | Definition | Representative Cases | Frequency
Paradigm-level | Fundamental discoveries that alter humanity’s cognitive framework | Relativity, evolution, quantum mechanics, information theory | A few per century
Advance-level | Extending the knowledge frontier within existing paradigms | Discovery of new particles, proof of new mathematical theorems, identification of new drug targets | Hundreds to thousands per year
Combinatorial-level | Combining known elements into novel solutions | iPhone (touchscreen + internet + phone), new business models | Tens of thousands per year

A critical distinction exists across these three tiers: as one descends the hierarchy, the free-ridability of the increment rises dramatically. Paradigm-level increments are typically embedded in deep theoretical training and personal insight, making them difficult to simply replicate—you cannot automatically acquire the ability to discover the next paradigm merely by reading a paper. But combinatorial-level increments are highly dependent on information accessibility—once you know that two technologies can be combined, the act of combining them is far less difficult than the original discovery.

1.2 The Gray Zone: Tacit Increments and the Permeable Boundary

The boundary between incremental and stock knowledge is not absolutely rigid. Between them lies a zone of permeation constituted by tacit knowledge, and we term the new information generated within this zone tacit increments.

Tacit knowledge refers to knowledge that cannot be fully encoded and transmitted through language or text, but can only be accumulated through personal practice and interaction. The feel and intuition that a craftsman accumulates over decades of repetitive work, the diagnostic instinct that a clinician develops across thousands of cases, an experimental scientist’s sensitivity to subtle instrumental anomalies—all of these belong to tacit knowledge. The medium of this knowledge is not text or data, but the body memory and neural networks of the individual. The philosopher of science Ravetz observed that a scientist must be a skilled craftsman who, through a long apprenticeship, learns how to do things without being able to fully explain why they work. Research has demonstrated that even the massive codification of knowledge in the twentieth century has not diminished the contribution of tacit knowledge to innovation—due to the complexity of systems and the emergence of new technologies, tacit knowledge will continue to play a vital role in innovation.

The mechanism by which tacit increments arise is as follows: in the course of long-term practice of stock reuse, an individual accidentally and unplannedly discovers new information that did not previously exist in humanity’s explicit knowledge set. The discovery of penicillin is a classic example—Fleming serendipitously observed the antibacterial effect of mold during a routine stock operation (standard bacterial culture experiments). The gradual refinements that folk craftsmen make to tools and techniques, and the experiential optimization of crop varieties and methods by farmers through agricultural practice, also constitute tacit increments.

The existence of tacit increments means that stock reuse activities serve not only as the dissemination channel for incremental knowledge, but also as a covert site of its generation. Cutting off opportunities for humans to engage in extended hands-on practice would not merely reduce the efficiency of stock reuse—it would also block the accumulation pathway for tacit increments, constituting a form of knowledge loss more insidious and harder to detect than the stagnation of explicit increments.

· · ·

Section II

Historical Evidence: The Constant Ratio of 1% to 99%
A Persistent Structure Across Civilizations

If we trace the history of human civilization, we find a remarkably stable structure: in every era and every civilization, the proportion of the population genuinely engaged in incremental knowledge production has never significantly exceeded 1%. This is not a precise statistical figure, but an order-of-magnitude judgment—the actual proportion may fluctuate between 0.1% and 3%, but it has never approached 10%, let alone a majority of the population.

2.1 The Agricultural Civilization Era

In agrarian societies, over 90% of the population was engaged in agricultural labor—the purest form of stock knowledge reuse. Planting techniques, irrigation methods, and animal husbandry knowledge were passed down from generation to generation with minimal change over centuries. The production of knowledge increments was concentrated among a tiny few: the entire community of natural philosophers in ancient Greece numbered no more than a few hundred; China’s pre-Qin “Hundred Schools of Thought” comprised fewer than a hundred recorded thinkers; the community of scientists during the Islamic Golden Age was similarly small. These individuals represented far less than 0.1% of their respective total populations.

It is worth noting that the agricultural era was also one of the most active periods for tacit increments. A vast number of agricultural technique improvements, handicraft process advances, and construction method innovations came from anonymous craftsmen and farmers whose tacit knowledge, accumulated through long-term practice, formed the bedrock of civilization’s gradual progress. Yet these tacit-increment producers were entirely unrecognized as “knowledge producers” within the social structures of their time.

2.2 The Industrial Revolution Era

The Industrial Revolution ostensibly expanded the scope of “knowledge workers” enormously—engineers, technicians, and managers proliferated. But closer examination reveals that the overwhelming majority of engineers were applying known principles, not discovering new ones. The true incremental knowledge producers who drove the Industrial Revolution—Watt (improvement of the steam engine), Faraday (electromagnetic induction), Bessemer (converter steelmaking)—remained an extreme minority. The essence of the Industrial Revolution was not that incremental producers became more numerous, but that the efficiency and scale of stock reuse were vastly amplified.

2.3 The Modern Scientific System

Even within modern universities and research institutions, the proportion of researchers who produce meaningful increments is far lower than surface numbers suggest. There are approximately 8 million active researchers worldwide (UNESCO data), yet studies show that the citation distribution of scientific papers is extremely skewed: roughly 1% of papers account for the overwhelming majority of scholarly impact. A large share of academic activity consists of minor modifications to existing frameworks, replication studies, or low-increment combinations—formally classified as “research” but, in information-theoretic terms, closer to advanced stock reuse.

Moreover, incremental knowledge suffers from a severe recognition lag problem. Many increments are not acknowledged as such at the time of their production—Mendel’s genetics paper was ignored for thirty-five years after publication; Semmelweis’s discovery of the principle of hand disinfection was ridiculed by his peers for decades. This means that the 1% estimate itself contains an irreducible bias: we can only count increments that were retroactively acknowledged, not those that were produced but never recognized. The true proportion of increment producers may be somewhat higher than 1%, but a significant share of their contributions have been permanently lost in the silence of history.

Data Perspective

Total number of active researchers worldwide (UNESCO estimate)

~8,000,000

Proportion producing widely cited breakthrough results

≈ 1%

Among this 1%, the proportion generating paradigm-level or major advance-level increments likely does not exceed 1% of that figure—approximately 800 individuals globally. Relative to a total population of 8 billion, this is 0.00001%.
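The nested percentages in this box reduce to simple arithmetic, which can be checked directly. A sketch using only the paper's rounded figures:

```python
# Rounded figures from the "Data Perspective" box above.
world_population   = 8_000_000_000  # ~8 billion people
active_researchers = 8_000_000      # UNESCO estimate, ~8 million

highly_cited   = active_researchers // 100  # ~1% produce widely cited results
paradigm_level = highly_cited // 100        # ~1% of that 1%

print(highly_cited)    # 80000
print(paradigm_level)  # 800
print(f"{paradigm_level / world_population:.5%}")  # 0.00001%
```

The result matches the figure in the text: roughly 800 individuals, or 0.00001% of the total population.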

· · ·

Section III

Three Essential Properties of Incremental Knowledge
The Structural Incentive Trap

The constant ratio of 1% to 99% has multidimensional causes. On the supply side, the capacity for incremental knowledge production is distributed in an extremely uneven fashion—involving rare combinations of talent, training, and environmental factors. On the demand side, the incentive structure for incremental knowledge production is inherently weak. The following three essential properties primarily explain the latter—why even those with the ability to produce increments face enormous economic headwinds.

3.1 High Cost of Trial-and-Error

The production of incremental knowledge is an inherently unpredictable process. You do not know which path will lead to a new discovery; you can only eliminate non-viable paths one by one. Edison tested thousands of filament materials before finding a workable solution; Kepler spent twenty years computing planetary orbital data before discovering the three laws of planetary motion; countless laboratories invest billions of dollars annually in drug screening, the vast majority of which ends in failure.

Trial-and-error costs have two dimensions: time and resources. A theoretical physicist may spend a decade contemplating a problem to no avail—those ten years of life are themselves an irrecoverable cost. A biopharmaceutical company may invest $2 billion developing a new drug, only to fail in Phase III clinical trials—the overwhelming majority of that $2 billion is unrecoverable.

The trial-and-error cost of incremental knowledge is essentially a form of “sunk investment”—failed paths eliminate wrong answers, but the act of elimination itself generates almost no exchangeable economic value.

3.2 Low Predictability

If the generation of incremental knowledge were predictable, then it would not be genuinely incremental—because predictability implies that, in some sense, the knowledge was already “implicit” in the existing corpus. The defining characteristic of incremental knowledge is ex ante unknowability. This leads to a fundamental predicament: you cannot know in advance which research directions will succeed or which researchers will achieve breakthroughs.

This low predictability means that incremental knowledge production cannot be “planned” and “optimized” the way industrial production can. You can build more factories to increase automobile output, but you cannot build more laboratories to proportionally increase Nobel Prize–caliber discoveries. There is no linear relationship between input and output in fundamental research—sometimes massive investment yields zero returns, and sometimes minimal investment triggers a paradigmatic revolution.

3.3 Strong Positive Externality

Once incremental knowledge is produced, its value inevitably spills over. After a new theory is published, researchers around the world can build upon it. Once a new technology is invented, the subsequent innovations it inspires far exceed the value the inventor can capture. This is the deepest paradox of incremental knowledge: the greater its value to society, the lower the proportion of benefits the producer can internalize.

The Value Distribution Paradox of Incremental Knowledge:

Total social value ████████████████████████████ 100%
Producer’s share ██ Minimal
Free riders ████████████████████████████ Overwhelming majority

↑ The more fundamental the increment, the more extreme this ratio
↑ Einstein ushered in the nuclear age; personal economic return ≈ 0

Figure 1: Schematic of the value distribution structure of incremental knowledge

These three properties together constitute a structural incentive trap: the production costs of incremental knowledge are high, the outcomes are unpredictable, and the benefits largely spill over to society. From the standpoint of pure economic rationality, the production of incremental knowledge is almost always irrational for the individual. The 1% who engage in increment production are largely driven by curiosity, a sense of mission, or special institutional arrangements (such as tenure, government funding, and patent systems), rather than by pure economic incentives.
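The trap can be restated as an expected-value inequality: an attempt at increment production is privately rational only if p(success) × total value × capture rate exceeds the trial cost, even when p(success) × total value alone far exceeds it. A stylized sketch in which every number is invented purely for illustration:

```python
# Stylized expected-value calculation for an individual deciding whether to
# attempt increment production. All parameter values are invented; only the
# structure (low p, high value, tiny capture rate) comes from the paper.

trial_cost    = 10.0    # probability-weighted cost of the attempt, arbitrary units
p_success     = 0.05    # low predictability: most attempts fail
value_created = 1000.0  # total social value if the attempt succeeds
capture_rate  = 0.01    # strong positive externality: producer keeps ~1%

expected_private_return = p_success * value_created * capture_rate - trial_cost
expected_social_return  = p_success * value_created - trial_cost

print(expected_private_return)  # negative: privately irrational to attempt
print(expected_social_return)   # positive: socially valuable to attempt
```

With these illustrative numbers the private expectation is negative while the social expectation is strongly positive, which is exactly the wedge that curiosity, mission, and institutional arrangements such as tenure and patents must bridge.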

· · ·

Section IV

The Industrial Capital Era: The Golden Window for Knowledge Creators
When Discoverers Could Directly Monetize

In human history, there was a relatively brief period during which incremental knowledge producers could directly convert their discoveries into economic returns—this was the industrial capital era (roughly from the mid-eighteenth to the mid-twentieth century). During this era, the equation of discoverer-as-monetizer held to a considerable degree.

Watt improved the steam engine and, in partnership with Boulton, established a manufacturing company that profited directly from selling steam engines. Bell invented the telephone and founded the precursor to AT&T. Carnegie refined steelmaking processes and built a steel empire. Ford pioneered the moving assembly line and built the Ford automotive empire.

The defining feature of this era was that the distance between incremental knowledge and economic monetization was extremely short. After making a technological breakthrough, a person could complete the entire process from discovery to production to sales within the same organizational framework. Although intellectual property protection mechanisms (the patent system) were imperfect, the physical production barriers themselves provided natural protection—you might know the new steelmaking method, but replicating it required building an entire factory and training an entire workforce from scratch.

Monopoly in the industrial capital era was, in a sense, the act of the 1% of increment discoverers monetizing their findings through large-scale production and trade. The source of monopoly profit was not market manipulation, but the direct pricing of incremental information.

4.1 The Recovery Mechanism for Trial-and-Error Costs

In the industrial capital era, the trial-and-error costs of incremental knowledge could be fully recovered—and even earn enormous premiums—through subsequent monopoly profits. An inventor who spent five years and his entire savings experimenting with a new technology could, if successful, secure decades of monopoly returns through patent protection and first-mover advantage. This recovery window was long enough to cover the probability-weighted cost of failure.

More importantly, the “by-products” of the trial-and-error process itself—engineering experience, process know-how, supply chain relationships—constituted tacit knowledge embedded in physical systems that was extremely difficult for external competitors to replicate. You could read about the principles of the Bessemer converter, but to truly master furnace temperature control, raw material ratios, and operational rhythms, you needed to undergo extensive trial-and-error yourself. This depth of physical embedding provided a natural guarantee for recovering trial-and-error costs.

· · ·

Section V

The Financial Capital Era: Systematic Demotion of Knowledge Creators
The Structural Degradation of Increment Producers

Beginning in the mid-twentieth century, financial capital gradually supplanted industrial capital as the dominant force in the economic system. This transformation had a profound and systematically negative impact on the status of incremental knowledge producers.

5.1 The Insertion of Intermediary Layers

The defining feature of the financial capital era is the insertion of an ever-increasing number of intermediary layers between “discovery” and “monetization”: venture capital, investment banks, capital markets, derivatives markets, buyout funds, and private equity. The function of these intermediary layers is to transfer pricing power over incremental information. What inventors receive is no longer monopoly profits, but equity fragments diluted through multiple rounds of financing.

One of the earliest signals of this process appeared in the case of Edison. Edison was one of the greatest incremental producers of the industrial capital era—he invented the practical incandescent lamp, built an electrical distribution system, and founded the Edison General Electric Company. But in 1892, driven by financiers such as J.P. Morgan, his company was merged with Thomson-Houston Electric to form General Electric. After the merger, Edison’s control was drastically diluted; the new company’s president was Charles Coffin from Thomson-Houston—a former shoe salesman, not an inventor. Although Edison was appointed to the board, he attended only a single board meeting before selling all of his GE shares in 1894. One of the greatest producers of incremental knowledge was squeezed out of the very company he had created by the consolidation logic of financial capital—a landmark event in the transition from industrial to financial capitalism.

Industrial Capital Era
Inventor → Production → Market → Profit (flows directly back to inventor)
↓ Historical Evolution ↓
Transition Period (Edison Case)
Inventor founds company → Financiers drive merger → Inventor loses control
Mature Financial Capital Era
Inventor → VC → Company → Investment Bank → Capital Markets → Profit (diverted through multiple layers)
Result
The share of returns accruing to increment producers declines continuously

5.2 The Shift in Profit Sources Under Financial Capital

The original functions of the financial system span three dimensions: intertemporal resource allocation (savings/loans—transferring today’s resources for tomorrow’s use), risk transfer (insurance, hedging—shifting risk from those unwilling to bear it to those willing), and information asymmetry arbitrage (extracting profit from information asymmetries). In the industrial capital era, the first two functions constituted the core of finance—banks provided loans to factories, and insurance companies diversified the risks of overseas trade.

But in the financial capital era, the dominant profit source of the financial system shifted from the first two functions to the third—information asymmetry arbitrage. Hedge funds spend hundreds of millions maintaining quantitative teams to detect price movements fractions of a second faster than others. Investment banks doing M&A profit from information asymmetries between buyers and sellers. VCs investing in early-stage projects are betting that “I understand the value of this technology before the market does.” Financial professionals need not invent anything; they need only know who invented what, and what it means, faster than the market.

In the industrial capital era, the source of information asymmetry was incremental knowledge itself—I invented a new technology that no one else has, and that technological gap is my information advantage, which I monetize directly. But in the financial capital era, information asymmetry no longer derives from incremental knowledge itself, but rather from differences in the speed at which knowledge about increments propagates and differences in the ability to interpret them.

The dominant profit source of financial capital is essentially the gaming behavior of the 99%—zero-sum competition over the propagation speed and interpretive capacity of already-known information. Meanwhile, the industrial and productive capital being squeezed out by financialization is precisely the vehicle through which the 1% monetize their increments. The greatest beneficiaries of the financial capital era are not the producers of knowledge, but the intermediaries in the knowledge dissemination chain.

5.3 The Three-Stage Demotion of Increment Producers

Stage | Increment Producer’s Identity | Return Structure | Examples
Early Industrial Capital | Owner / Company founder | Direct owner of monopoly profits | Watt–Boulton, Ford, Carnegie
Transition Period | Founder absorbed by financial consolidation | Founded the company but lost control | Edison (GE), Tesla (AC patents)
Mature Industrial/Financial Capital | Highly paid employee / Chief Scientist | High salary + research freedom, but does not own the output | Bell Labs scientists
Financial Capital Era | Replaceable technical labor | Salary + fractional stock options; output belongs to the company | AI startup researchers

5.4 Musk: The Fortress Effect of Physical Embedding Depth

In the contemporary world dominated by financial capital, the case of Elon Musk is highly illustrative. His wealth derives not from financial arbitrage, but from the direct monetization of industrial-scale incremental knowledge—the mass-production engineering capability for electric vehicles, reusable rocket technology, and the Starlink orbital communications network.

It is important to note that Musk is by no means averse to using financial capital. Quite the contrary: SpaceX has raised approximately $11.9 billion in cumulative funding across more than 30 rounds, with investors including top-tier institutions such as Fidelity, Google, and Andreessen Horowitz; Tesla, for its part, raised capital through public markets. Musk is deeply reliant on financial leverage. But the critical distinction is this: his financial capital serves physical-state increments—fundraising is directed toward building rocket factories and gigafactories—rather than financial capital dictating the direction of the increment.

The core reason Musk has been able to maintain control while being deeply dependent on financial capital lies in the physical embedding depth of his incremental information. A reusable rocket is not a paper, a piece of code, or a business model—it is a complex knowledge system embedded in materials, engineering, supply chains, and operational experience. The replication cost of this knowledge approaches the original trial-and-error cost, meaning that investors cannot bypass Musk himself to replicate these increments. It is the depth of physical embedding that grants him bargaining power against financial capital, not any refusal of financial capital on his part.

Yet Musk remains an outlier. His very existence proves that: in the financial capital era, only when incremental information is deeply embedded in the physical world and of sufficient scale to constitute a monopoly can increment producers maintain control in the face of financial capital.

· · ·

Section VI

The Completeness Problem of Incremental Knowledge
Where Human Knowledge Stands Relative to Physical Reality

The preceding chapters analyzed the economic incentive structure of incremental knowledge production. Before turning to the discussion of the AI era, it is necessary to examine the position of incremental knowledge from a more fundamental dimension: relative to the complete information of the physical world, where does the entirety of human knowledge—both incremental and stock—actually stand? The answer to this question will provide a physics-level criterion for understanding whether knowledge production can ever be “completed.”

6.1 The Information Funnel: From the Universe to Human Knowledge

Regarding the information capacity of the observable universe, physicists have offered estimates at multiple levels. The most widely cited upper bound derives from the cosmological extension of the Bekenstein bound: treating the cosmological horizon of the observable universe as an information boundary, the entropy upper limit is approximately 4×10^122 bits (Egan & Lineweaver, 2010). This represents the theoretical ceiling for the amount of information the universe could possibly contain.

However, the information actually carried by matter in the observable universe is far below this theoretical upper bound. Wheeler’s estimate, derived from thermodynamic entropy, yields approximately 8×10^88 bits; Vopson (2021), using a calculation based on Shannon information theory and the Eddington number, arrived at approximately 6×10^80 bits of information stored in visible matter. The gap between the two (10^80 to 10^88) depends on whether one accounts for radiation fields, the potential information contribution of dark matter, and other factors. Dark matter constitutes approximately 27% of the universe’s mass-energy, and dark energy approximately 68%—neither of which humanity can directly observe. These two components alone mean that humanity has virtually no access to 95% of the universe’s mass-energy composition.

On top of this, the human biological perceptual system imposes further drastic filtering. The human eye can perceive only the 380–700 nanometer band of the electromagnetic spectrum; the human ear can receive only frequencies in the 20 Hz–20 kHz range. There is no magnetic field perception, no electric field perception, no direct sensing of chemical gradients. Human sensory organs receive approximately 10^7 bits per second, of which the brain consciously processes only about 50 bits per second.

As for the total recorded information of all humanity, according to IDC’s ongoing tracking, the total volume of data created, captured, copied, and consumed globally in 2026 is approximately 221 zettabytes (roughly 1.8×10^24 bits). However, approximately 90% of this consists of replicated and redundant data, placing the independent information volume at approximately the 10^23 bit order of magnitude.

Information Funnel Quantified

Cosmological horizon information upper bound (Bekenstein bound extension)

~4 × 10^122 bits

Information actually carried by observable-universe matter (Wheeler / Vopson estimates)

~10^80 to 10^88 bits

Total independent recorded information of all humanity (global data volume, deduplicated)

~10^23 bits

Information gap: From the information in cosmic matter to human knowledge, the most conservative estimate yields a gap of 57 orders of magnitude; compared to the theoretical upper bound, the gap exceeds 99 orders of magnitude.
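The orders of magnitude in the funnel can be verified directly from the figures quoted above. The sketch below only re-derives the exponent gaps and the zettabyte-to-bits conversion; the underlying bit-count estimates themselves are taken from the cited sources, not computed here.

```python
import math

# Exponents (orders of magnitude) of the estimates quoted in the text.
EXP_HORIZON_BOUND = 122  # ~4x10^122 bits, Bekenstein-bound extension
EXP_MATTER_LOW    = 80   # ~6x10^80 bits in visible matter (Vopson-style estimate)
EXP_HUMAN_RECORD  = 23   # ~10^23 bits of deduplicated human records

# Sanity-check the human-record exponent: 221 ZB of global data,
# 8 bits per byte, with ~90% of it redundant.
raw_bits    = 221 * 10**21 * 8   # ~1.8x10^24 bits before deduplication
unique_bits = raw_bits * 0.10    # ~1.8x10^23 bits of independent information
assert round(math.log10(unique_bits)) == EXP_HUMAN_RECORD

print(EXP_MATTER_LOW - EXP_HUMAN_RECORD)     # 57-order gap vs cosmic matter
print(EXP_HORIZON_BOUND - EXP_HUMAN_RECORD)  # 99-order gap vs the theoretical bound
```

Comparing exponents rather than raw values keeps the arithmetic honest at this scale: the leading coefficients (4×, 6×, 1.8×) shift the gaps by well under one order of magnitude.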

The significance of this estimate is as follows: the entirety of humanity’s incremental knowledge production—from Sumerian cuneiform to modern quantum physics—has explored an information space that, relative to the information space of the physical world, is an infinitesimally small slice approaching zero. Moreover, this slice is not a uniform random sample but a structurally biased sample, severely distorted by the constraints of human perceptual systems, tool capabilities, and cultural preferences.

The production of incremental knowledge, viewed from this perspective, is not a task that can be “completed” but rather an eternal, finite expansion into the unknown. Any claim that “knowledge is sufficient” or “science is nearing its end” is absurd in the face of these magnitudes.

· · ·

Section VII

Conclusions and Extensions
Core Propositions and Future Directions

This paper has established a foundational framework for understanding human knowledge activity. Its core propositions are as follows:

Proposition One: All human knowledge activity can be classified according to information entropy change into incremental knowledge production (the addition of new information) and stock knowledge reuse (the redistribution of existing information)—two activities of fundamentally different natures. Between them lies a permeable boundary constituted by tacit knowledge—the tacit increment.

Proposition Two: Throughout all of human history, the proportion of incremental knowledge producers has never significantly exceeded 1%. This ratio is constrained on two fronts: the extremely skewed distribution of increment-production capability (supply side), and the insufficient incentives created by the three essential properties of incremental knowledge—high trial-and-error costs, low predictability, and strong positive externalities (demand side).

Proposition Three: The industrial capital era provided increment producers with the most effective monetization pathway (discovery-as-monetization), while the financial capital era systematically reduced the return share accruing to increment producers, shifting the dominant profit source of the economic system from the direct monetization of incremental knowledge to the intermediary layer of information asymmetry arbitrage.

Proposition Four: The physical embedding depth of incremental knowledge determines its resistance to free-riding. Purely digital-state increments (theories, software, text) are most vulnerable to free-riding, while physical-state increments (engineering systems, manufacturing processes) enjoy stronger protection. In the financial capital era, only increment producers with sufficiently deep physical embedding can maintain bargaining power against financial capital.

Proposition Five: Relative to the total information content of the physical world, all human knowledge (incremental + stock) is an infinitesimally small slice approaching zero—the most conservative estimate yields a gap of 57 or more orders of magnitude. The production of incremental knowledge is far from complete; it remains in an eternally early stage.

These five propositions constitute the theoretical foundation for understanding the knowledge-ecosystem crisis of the AI era. The companion paper, “AI’s Devastating Impact on Incremental Knowledge,” will build upon this framework to analyze how AI—by accelerating free-riding mechanisms, compressing trial-and-error recovery windows, and manufacturing cognitive bubbles—may ultimately lead to the systemic stagnation of incremental knowledge production: a digital-age civilizational dark age.

References and Notes

[1] Egan, C. A., & Lineweaver, C. H. (2010). “A Larger Estimate of the Entropy of the Universe.” The Astrophysical Journal, 710(2), 1825–1834. Cosmological horizon entropy estimated at ~4×10¹²² bits; observable universe content entropy estimated at ~5×10¹⁰⁴ bits.

[2] Wheeler, J. A. (1990). “Information, Physics, Quantum: The Search for Links.” Thermodynamic entropy–based estimate of cosmic matter information at ~8×10⁸⁸ bits.

[3] Vopson, M. M. (2021). “Estimation of the Information Contained in the Visible Matter of the Universe.” AIP Advances, 11(10), 105317. Visible matter information estimated at ~6×10⁸⁰ bits.

[4] Shannon, C. E. (1948). “A Mathematical Theory of Communication.” Bell System Technical Journal, 27(3), 379–423.

[5] Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.

[6] Romer, P. M. (1990). “Endogenous Technological Change.” Journal of Political Economy, 98(5), S71–S102.

[7] Schumpeter, J. A. (1942). Capitalism, Socialism and Democracy. Harper & Brothers.

[8] Ravetz, J. R. (1971). Scientific Knowledge and Its Social Problems. Oxford University Press. On the role of the scientist as craftsman and tacit knowledge in research.

[9] Senker, J. (1995). “Tacit Knowledge and Models of Innovation.” Industrial and Corporate Change, 4(2), 425–447.

[10] Acemoglu, D., Kong, D., & Ozdaglar, A. (2026). “AI, Human Cognition and Knowledge Collapse.” NBER Working Paper No. 34910.

[11] Peterson, A. J. (2024). “AI and the Problem of Knowledge Collapse.” arXiv preprint arXiv:2404.03502.

[12] Bazzichi, E., Riccaboni, M., & Castellacci, F. (2026). “Bridging Distant Ideas: the Impact of AI on R&D and Recombinant Innovation.” arXiv preprint arXiv:2604.02189.

[13] IDC / Statista (2026). Global Data Volume Statistics. Total global data creation in 2026 approximately 221 ZB, of which roughly 90% is replicated data.

[14] UNESCO Institute for Statistics (2024). Global Research and Development Expenditures and Researcher Data.

[15] Edison and GE: Edison General Electric Company merged with Thomson-Houston Electric Company on April 15, 1892, to form General Electric. After the merger, Edison attended only one board meeting and sold all his GE shares in 1894. See Rutgers University Edison Papers and GE Company History.

[16] SpaceX financing data: Cumulative funding of approximately $11.9 billion through 32 rounds from 240 investors as of 2026. See Tracxn, Wellfound.

[17] The “1%” figure used throughout this paper is an order-of-magnitude estimate, not a precise statistical value. The actual proportion may vary by era and field, but consistently remains at an extremely low level.

Note: This is an Original Thought Paper. The core framework originates from the independent thinking of the LEECHO Global AI Research Lab (이조글로벌인공지능연구소), with argumentation development and text generation completed through structured dialogue with Claude Opus 4.6. This paper has not undergone peer review and is intended to provide a new analytical framework for foundational questions concerning the structure of knowledge.

이조글로벌인공지능연구소
LEECHO Global AI Research Lab

© 2026 All Rights Reserved · V2 · May 3, 2026
