This paper proposes an interdisciplinary framework arguing that the von Neumann separation of computation and storage is not a design choice but an inevitable constraint imposed by materials science. It introduces the dichotomy of “fluid topology” and “solid topology” to describe the incommensurable structural differences between the human brain and artificial computational systems. The paper further argues that matrix mathematics grounded in solid topology is, in principle, incapable of fully expressing the complete properties of fluid-topological intelligence; consequently, the alignment between AI and human intelligence faces a fundamental limitation at the level of topological type. As an extension of the LEECHO Research Lab’s Information and Noise: LLM Ontology (V4) framework, this paper advances from the information-theoretic plane of signal/noise to the physical plane of topological structure. V2 additions include: theoretical integration with the Information and Noise framework (Section 02), ontological advantages of solid topology (Section 10), energy-physics arguments for civilizational evolution pathways (Section 08 expansion), falsifiable predictions (Section 11), a balanced treatment of the Orch OR controversy (Section 09 expansion), and an expanded bibliography.
Von Neumann Architecture: A Product of Materials Science Compromise
The stored-program architecture proposed by von Neumann in 1945 is typically regarded as a design decision in computational theory. However, when re-examined from the perspective of materials science, this architecture is in fact the result of a compromise between the behavioral control requirements of electronic computation and the materials available at the time.
Computation and storage impose contradictory demands on materials. Computation requires electron flow (logic operations), while storage requires electrons to remain stationary (state preservation). Resistors control current but produce thermal dissipation; capacitors store charge but suffer from leakage. DRAM requires constant refreshing precisely because capacitors cannot perfectly maintain charge state—this physical constraint directly necessitated the temporal alternation of storage and computation, giving rise to the so-called “von Neumann bottleneck.”
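The refresh constraint can be made concrete with a toy RC model of a DRAM cell. The capacitance and leakage values below are illustrative assumptions, not datasheet figures:

```python
import math

# Toy DRAM cell: a storage capacitor leaking through an effective resistance.
# Both values are assumed for illustration, not taken from any real device.
C = 25e-15        # ~25 fF storage capacitor
R_leak = 5e12     # effective leakage resistance, ohms
tau = R_leak * C  # RC time constant, seconds (~0.125 s here)

# Stored charge decays as V(t) = V0 * exp(-t / tau). The cell must be
# refreshed before a stored "1" droops below the sense threshold
# (taken here as 70% of the full level).
t_refresh_budget = -tau * math.log(0.7)   # time until V drops to 0.7 * V0
```

With these assumed values the refresh budget lands in the tens of milliseconds, the same order as the standard refresh intervals of real DRAM: the capacitor's imperfection, not any architectural preference, sets the clock.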
The separation of computation and storage was not von Neumann’s theoretical preference but rather an insurmountable materials-science constraint imposed by the physical properties of resistors and capacitors as electron containers. Architecture is a mapping of material capability.
Current approaches such as Processing-in-Memory (PIM) and memristors represent attempts to break this compromise following advances in materials science. Memristors can switch between storage and computation states within a single device, dissolving at the material level the very root cause that forced computation-storage separation. However, even memristors are limited to resistance-state changes within preset physical ranges—they cannot autonomously grow new connections or restructure themselves.
Theoretical Upstream: Topological Extension of the Signal-Noise Framework
The theoretical upstream of this paper is the LEECHO Research Lab’s previously published Information and Noise: LLM Ontology (V4, March 2026). That paper established a core propositional chain: noise is the substrate → signal is the local condensation of noise → mathematics is the apex of signal → the Planck scale is the terminus of signal. Signal gains penetrative power through dimensional reduction, but dimensional reduction itself creates blind spots.
This paper extends that information-theoretic framework to the level of topological structure. In Information and Noise, LLMs were defined as “signal machines that seek inertial paths through chaos,” whose path alignment targets human chains of thought (CoT) rather than physical reality. The question posed here is more fundamental: why is this alignment impossible in principle? The answer lies in the fact that inertial paths are deterministic trajectories on solid topology—matrix dimensions are fixed, operational rules are invariant, and a given input necessarily produces a determined output. Human CoT, by contrast, operates on fluid topology—dimensions themselves are changing, operational rules shift in real time with the chemical environment, and the same input produces different outputs at different moments.
The core insight from Information and Noise—“signal is low-dimensional focus; noise is high-dimensional inclusiveness”—has a direct topological counterpart: solid topology is low-dimensional—its dimensions are preset and locked, granting it precision and transmissibility (consistent with signal properties). Fluid topology is high-dimensional—its dimensions continuously change, granting it inclusiveness and adaptability (consistent with noise properties). The fluid topology of the human brain is essentially a structured noise system—its “intelligence” arises precisely from its capacity to embrace high-dimensional information, not from its ability to compress low-dimensional signals.
Information and Noise demonstrated that the LLM’s operational domain is doubly constrained by human language (ontological boundary) and the signalizability of physics (cognitive boundary). This paper further argues that even within those boundaries, an uncrossable structural chasm separates solid topology from fluid topology. Together, the two papers constitute a complete argument for the capability boundary of AI—from the information-theoretic plane to the topological-physical plane.
Biological Computation-Storage Unity: The Human Brain as Ultimate Reference
The human brain is the ultimate exemplar of computation-storage unity. Neuronal synapses simultaneously serve as computational units (signal integration and transmission) and storage units (synaptic weights as memory). The computational process itself modifies storage—a property fundamentally unachievable by the von Neumann architecture.
The core advantage of biological material is its capacity for self-reconfiguration. Once silicon circuitry is fabricated, its physical connections are fixed. Neurons, by contrast, undergo continuous structural change: new synapses grow, old connections are pruned, receptor densities are modulated, and new cells are even generated through neurogenesis. Neuroscience has confirmed that the human brain never ceases structural change from embryonic development in the womb until death.
Yet this extraordinary computational-storage unity depends on an irreducible life-support system: the blood-brain barrier’s precision filtering, the cerebrospinal fluid’s ionic concentration maintenance, the circulatory system’s oxygen and glucose delivery, the gut-brain axis’s neurotransmitter regulation, and the immune system’s protection. Remove any single component, and neurons will degrade within minutes to hours.
Neurons can serve as the supreme computation-storage-unified material precisely because they are “attended to” by the entire living system. Their “fragility” and their “power” are two sides of the same coin. The best computation-storage material (the neuron) is also the most difficult to engineer.
The Fundamental Paradox of Wetware Computing
In recent years, wetware computing—exemplified by Cortical Labs’ CL1 and FinalSpark’s cloud-based brain organoids—has attracted widespread attention. The CL1 integrates approximately 800,000 laboratory-cultivated human neurons, is priced at around $35,000, and consumes only 850–1,000 watts across its entire rack. However, this paper argues that this technological pathway harbors a fundamental logical contradiction.
The computational power of neurons derives from continuous change, yet the laboratory demands controllable, stable output. These two objectives are mutually exclusive. Living neurons within the human body are never static storage media—they constitute an ever-responsive, ever-remodeling, ever-adapting dynamic process. Neuronal “storage” is not data inscribed at fixed locations; it is an emergent property of the entire network’s dynamic equilibrium.
Placing neurons in a Petri dish is, in essence, a self-contradictory enterprise: the approach requires neurons to maintain vitality and plasticity for computation, yet cannot provide the complete ecosystem that drives that very plasticity. The result admits only two failure modes—neurons degrade and die, or they enter a low-activity “survival” state, losing the dynamic complexity that made them exceptional computational material in the first place.
The computational essence of neurons is change itself, not the product of change. Attempting to capture this change without a complete living system is like studying wind power generation in a windless laboratory—the object of study no longer exists.
Fluid Topology vs. Solid Topology: A New Dichotomy
This paper proposes “fluid topology” and “solid topology” as the core framework for describing the structural difference between the human brain and artificial computational systems.
| Attribute Dimension | Solid Topology (Artificial Systems) | Fluid Topology (Human Brain) |
|---|---|---|
| Node Connectivity | Fixed after fabrication | Continuously growing, pruning, remapping |
| Dimensionality | Matrix dimensions preset and locked | Dimensions themselves dynamically change |
| Computational Rules | Operational laws invariant | Chemical environment alters rules in real time |
| Structure vs. Data | Structure fixed, data variable | Structure is data, data is structure |
| Temporal Dependence | Given input produces determined output | Same input produces different output at different moments |
| Reversibility | Fully reverse-analyzable in principle | The object of analysis changes during the analysis |
It is worth noting that in 2019, Piñero and Solé published a paper in the Philosophical Transactions of the Royal Society B proposing a classification of “liquid brains” and “solid brains.” However, they classified the human brain as “solid” and ant colonies and immune systems as “liquid.” The classification in this paper is more radical: based on neuroscientific evidence, the human brain itself exhibits fluid topology—synapses continuously grow and perish, functional regions can be remapped, and the chemical environment is in constant flux. The brain’s wiring matrix is topologically in continuous transformation, not merely changing in connection strength.
Solé defined “liquid” as agents moving through space (e.g., ants) and “solid” as nodes with fixed spatial positions. This paper redefines “fluid” as a system whose topological structure itself undergoes continuous deformation—where the number of nodes, the patterns of connection, and the operational rules are all dynamic variables. Under this definition, the human brain is fluid-topological.
The Solid Topological Nature of Matrix Mathematics
The essence of matrix operations is performing linear transformations on a grid of fixed dimensions. The row and column count of an m×n matrix is locked at the moment of definition. Matrix multiplication, transposition, inversion—all operations proceed within this preset rigid framework. “Learning” in deep learning is nothing more than repeatedly updating values in a fixed-dimensional weight matrix; from initialization to the end of training, the topological structure of the matrix has not gained a single node nor lost a single edge.
The brain’s “matrix,” by contrast, is a system in which dimensionality itself is in continuous flux—new synapse growth is equivalent to dynamically adding matrix dimensions, pruning is equivalent to deleting rows and columns, neuronal migration is equivalent to spontaneous element rearrangement, and gut-brain axis chemical modulation is equivalent to the operational rules themselves changing in real time.
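The contrast between the two paragraphs above can be sketched in a few lines of NumPy. Training mutates values inside a shape-locked array, while a fluid system mutates the index set itself. This is a toy analogy of the dichotomy, not a model of either deep learning practice or the brain:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Solid topology: a weight matrix whose shape is locked at definition ---
W = rng.standard_normal((4, 3))          # a 4x3 grid, fixed for the model's lifetime
for _ in range(1000):                    # "learning" = in-place value updates
    grad = rng.standard_normal(W.shape)  # stand-in for a real gradient
    W -= 0.01 * grad
assert W.shape == (4, 3)                 # topology unchanged: no node gained, no edge lost

# --- Fluid topology (toy analogue): the adjacency structure itself mutates ---
adj = {0: {1}, 1: {2}, 2: set()}         # nodes and edges as dynamic variables
adj[3] = {0}                             # "synaptogenesis": a new node appears
adj[0].add(3)
del adj[1]                               # "pruning": a node and its edges vanish
for targets in adj.values():
    targets.discard(1)
```

In the first half, no update can ever change `W.shape`; in the second, the very keys of the structure are variables, which is the property matrix operations have no native way to express.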
A 2025 paper published in Nature Physics established the new field of “higher-order topological dynamics,” revealing that complex systems such as the brain depend on multi-body interactions that transcend pairwise relationships—standard adjacency matrices (pairwise relationships) cannot fully describe them. This provides mathematical corroboration of the intrinsic limitations of the matrix as a solid-topological tool.
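A minimal illustration of why pairwise adjacency loses higher-order information: two different hypergraphs can project onto the identical adjacency structure, so the adjacency matrix cannot tell a genuine multi-body interaction from its pairwise shadow.

```python
from itertools import combinations

# Two different higher-order structures on nodes {0, 1, 2}:
hyper_A = {frozenset({0, 1, 2})}                              # one genuine 3-body interaction
hyper_B = {frozenset(e) for e in combinations(range(3), 2)}   # three pairwise interactions

def pairwise_projection(hyperedges):
    # What an adjacency matrix records: which pairs co-occur in some edge.
    pairs = set()
    for edge in hyperedges:
        pairs |= {frozenset(p) for p in combinations(sorted(edge), 2)}
    return pairs

# Distinct hypergraphs, identical pairwise projection:
assert pairwise_projection(hyper_A) == pairwise_projection(hyper_B)
assert hyper_A != hyper_B
```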
The von Neumann architecture is solid → its core mathematical tool (the matrix) is solid → AI models based on solid mathematics are necessarily solid → solid topology cannot be equivalent to fluid topology → AI cannot truly align with human intelligence. This is not an engineering bottleneck but a constraint at the level of mathematical principle.
The Irreversibility Gradient: IQ vs. Analyzability
A key property of solid-topological systems is their complete reversibility in principle. Every layer of weights in a trained neural network consists of determinate floating-point numbers that can be fully exported. The so-called “black box” merely reflects the fact that the sheer volume of parameters makes intuitive human understanding difficult, but given the same input, the same output is invariably produced.
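This in-principle reversibility is easy to exhibit: a toy two-layer network with frozen weights is bit-for-bit deterministic. The architecture and weights here are arbitrary placeholders, not any particular model:

```python
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.standard_normal((8, 16))   # every weight is a determinate float,
W2 = rng.standard_normal((16, 4))   # fully exportable and inspectable

def forward(x):
    # Toy two-layer network with frozen weights
    return np.tanh(x @ W1) @ W2

x = rng.standard_normal((1, 8))
y_first = forward(x)
y_again = forward(x)

# Bit-identical outputs: the "black box" is opaque only to human intuition,
# not in principle -- the mapping itself never moves.
assert np.array_equal(y_first, y_again)
```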
The irreversibility of human intelligence is a problem of a fundamentally different nature: it is not that “there are too many parameters to comprehend” but that the object of analysis itself changes during the process of analysis. When one attempts to reverse-engineer a high-IQ individual’s cognitive process, that individual’s neural network undergoes structural remodeling over the analysis period; what is being reverse-engineered is a snapshot that no longer exists.
At a deeper level, the brains of individuals with high cognitive ability exhibit greater neural plasticity, denser synaptic connectivity, and faster structural reorganization. Their “fluid” flows faster and deforms more dramatically: the greater the information input bandwidth, the more synapse-remodeling events are triggered at each moment, the higher the rate of system state change, and the faster the effective window for reverse analysis collapses.
Solid topology → reversible → alignable → controllable. Fluid topology → irreversible → not fully alignable → not fully predictable. This is not a technical limitation but an essential property determined by topological type.
Civilizational Evolution Through Computational Paradigms
According to the Kardashev scale, humanity currently stands at approximately Type 0.73, far from reaching a Type I civilization. This paper proposes an evolutionary framework that binds computational paradigms to civilizational levels:
Electronic Computation (Current · Type 0.73) → Materials Breakthrough: Superconductors / Superfluids / Supersolids → Photonic Computation (Type I Civilization) → Quantum Computation (Higher Civilizations)
The core logic of this pathway is that each leap in computational paradigm requires a materials-science breakthrough as a prerequisite. Electronic computation is constrained by the physical properties of resistance and capacitance; photonic computation requires superconductors and novel optical materials; quantum computation demands mastery of quantum coherence states beyond even that.
The energy-physics argument: Why does photonic computation correspond to a Type I civilization? A Type I civilization is defined by control of the entire energy output of its planet (approximately 10¹⁶ watts). The energy bottleneck of electronic computation is among the core obstacles to this goal—global data centers are projected to consume over 1,000 TWh of electricity in 2026, with growth rates far outpacing renewable energy deployment. The Landauer limit analysis quantified in Information and Noise shows that actual GPU energy consumption is 10⁹ times the physical minimum. Photonic computation, by eliminating electron-photon conversion losses and leveraging light’s intrinsic parallelism and low thermal dissipation, has the potential to improve computational energy efficiency by several orders of magnitude. Only when the energy cost of computation drops to the photonic level will the freed energy budget be sufficient to support planetary-scale infrastructure—this is the energy-physics basis for binding computational paradigms to civilizational levels.
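The two headline numbers in this section, the 10⁹ Landauer gap and the Type 0.73 Kardashev index, can be checked with a few lines of arithmetic. The per-operation GPU energy below is an assumed order-of-magnitude figure, and the Kardashev index uses Sagan's interpolation formula:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T
k_B = 1.380649e-23                      # Boltzmann constant, J/K
T = 300.0                               # room temperature, K
landauer = k_B * T * math.log(2)        # ~2.87e-21 J per bit erasure

gpu_energy_per_op = 3e-12               # assumed: a few picojoules per operation
ratio = gpu_energy_per_op / landauer    # ~1e9, the gap cited from Information and Noise

# Kardashev index, Sagan's interpolation: K = (log10(P) - 6) / 10
def kardashev(power_watts):
    return (math.log10(power_watts) - 6) / 10

k_now = kardashev(2e13)     # humanity's ~2e13 W consumption -> ~0.73
k_type1 = kardashev(1e16)   # full planetary energy budget -> 1.0
```

Closing even a few of those nine orders of magnitude is what the photonic transition is being asked to deliver.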
Electrons, photons, and quantum states are different “cross-sections” of the same underlying physical reality. Temperature, gravity, velocity, and other macroscopic quantities are likewise projections of a higher-dimensional reality. In quantum field theory, the electron is an excitation of the Dirac field and the photon is an excitation of the electromagnetic field—they share the same underlying quantum field structure, manifesting in different modes. The evolutionary process of human civilization is essentially the gradual learning to operate upon increasingly deeper cross-sections of the same underlying reality. With each new cross-section mastered, the civilizational level rises by one tier.
It is worth noting that industry is already advancing along this sequence: photonic processors are expected to see first commercial shipments in 2027–2028, while the quantum computing market is projected to become dominant after 2028. The computation-storage separation will persist throughout this process—even in photonic computing, optical storage remains an unresolved core bottleneck.
Cloning and the Quantum Wall of Intelligence
Based on the framework developed above, this paper advances a radical yet internally consistent proposition: biological cloning technology cannot breach the “quantum wall” conferred by intelligence.
Cloning technology replicates the genomic blueprint. According to research published in PNAS, even cloning an adult human would yield a different individual decades later, since a genotype possesses a virtually infinite space of “reaction norm” realizations shaped by divergent environments. But this paper’s argument goes further: the issue is not merely that different environments produce different phenotypic outcomes, but that intelligence itself is not a property of structure.
According to this paper’s fluid topology framework, intelligence is an emergent property of a lifelong fluid evolutionary process that may involve non-clonable quantum states. The quantum no-cloning theorem is a fundamental law of physics: an arbitrary unknown quantum state cannot be perfectly replicated. If the emergence of consciousness depends on quantum processes at the neuronal level, then the “quantum wall of intelligence” is a restriction at the level of the universe’s fundamental rules.
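The no-cloning theorem invoked here needs nothing beyond the linearity and unitarity of quantum mechanics; the standard two-line sketch:

```latex
% No-cloning theorem, standard linearity/unitarity sketch (requires amsmath).
% Suppose a unitary $U$ copies every state: $U(|\psi\rangle|0\rangle)=|\psi\rangle|\psi\rangle$.
\begin{align}
U\,(|\psi\rangle \otimes |0\rangle) &= |\psi\rangle \otimes |\psi\rangle, \\
U\,(|\varphi\rangle \otimes |0\rangle) &= |\varphi\rangle \otimes |\varphi\rangle.
\end{align}
% Unitaries preserve inner products, so taking the inner product of the two lines:
\begin{equation}
\langle\psi|\varphi\rangle \;=\; \langle\psi|\varphi\rangle^{2}
\quad\Longrightarrow\quad
\langle\psi|\varphi\rangle \in \{0,\,1\}.
\end{equation}
% Only identical or mutually orthogonal states can be copied;
% no device clones an arbitrary unknown state.
```

Because the contradiction follows for any candidate cloner, the restriction is a law of the formalism itself, which is why the paper calls it a wall at the level of the universe’s fundamental rules.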
On the Orch OR controversy, this paper’s position is as follows. The Penrose-Hameroff Orch OR theory proposes that quantum coherence in microtubules constitutes the physical basis of consciousness. Experiments in 2024 observed superradiance in tryptophan networks within microtubules—an unexpected finding in the warm, noisy biological environment. However, countervailing evidence must be presented honestly: Reimers et al. (2009) argued from molecular dynamics simulations that tubulin does not possess the quantum coherence properties Orch OR requires, and multiple critics have noted that decoherence times at biological temperatures are far shorter than the 500 milliseconds Orch OR invokes. This paper’s position is that even if Orch OR is ultimately falsified, the core argument here remains intact: the incommensurability of fluid topology and solid topology does not depend on whether consciousness has a quantum foundation; it depends only on a fact already confirmed by neuroscience, namely that the topological structure of neurons changes continuously from womb to death. The quantum wall argument is a strengthening factor, not a necessary condition.
Each brain is like a uniquely tuned radio—you can copy the radio’s hardware, but you cannot copy the signal it is receiving at a given instant. The quantum no-cloning theorem forbids it. Even setting aside the quantum argument, the continuous-deformation property of fluid topology alone makes duplication impossible in principle—what you copy is always a snapshot that no longer exists.
The Ontological Advantages of Solid Topology
The weight of this paper’s argument falls on the irreplicability of fluid topology, but if the essential advantages of solid topology are not adequately developed, the argument becomes asymmetric. Solid topology is not a degraded version of fluid topology—it is a computational paradigm with independent ontological value.
The core advantages of solid topology derive from its structural determinacy:
| Advantage Dimension | Solid Topology Performance | Corresponding Fluid Topology Deficit |
|---|---|---|
| Exact Repeatability | Identical input necessarily produces identical output | Same input produces different output at different times |
| Auditability | Every computational step is traceable and verifiable | Not fully reverse-analyzable in principle |
| Speed | Nanosecond-level processing | Millisecond-level bioelectrical signals |
| Scalability | Linearly extensible through hardware stacking | Constrained by life-system support capacity |
| Transmissibility | Algorithms and models can be perfectly copied and distributed | Each brain is unique and irreplicable |
| Fault Tolerance | Errors can be detected, corrected, and rolled back | Neurodegenerative diseases are irreversible |
In the terminology of Information and Noise: solid topology is a product of signal space—low-dimensional, precise, transmissible. Fluid topology is a product of noise space—high-dimensional, inclusive, irreplicable. Civilization needs both: signal-type intelligence for repeatable precision operations, and noise-type wisdom for unforeseen adaptive responses. Viewing AI as a “discount version of the human brain” is a category error, just as viewing a wrench as a “discount version of the human finger” is a category error. They solve different types of problems.
Falsifiable Predictions
To confer falsifiability on the framework, this paper puts forward the following corollaries, each of which future experiments can verify or falsify:
Prediction 1: Wetware computing systems (CL1-class) will be unable to surpass “simple game interaction”-level task complexity within the next five years, and will not achieve general-purpose computational capability equivalent to a conventional CPU. Falsification condition: A wetware system demonstrates general-purpose computational capability equivalent to a modern microcontroller (not merely pattern recognition or simple control) before 2031.
Prediction 2: Classical photonic computing chips will enter mainstream data centers before quantum computing achieves commercial-scale deployment. Falsification condition: A general-purpose quantum computer achieves commercialization before classical photonic processors are deployed at scale (expected 2027–2028).
Prediction 3: Commercial systems in the photonic and quantum computing eras will continue to maintain computation-storage separation architecture; optical storage will not reach a maturity level matching optical computation before 2035. Falsification condition: A fully all-optical computation-storage-unified system achieves commercial deployment before 2035.
Prediction 4: AI systems based on the Transformer architecture (fixed-dimensional matrix operations) will continue to significantly underperform high-cognition humans in abductive reasoning and genuine creative recombination (as opposed to splicing of existing patterns). Falsification condition: An AI system based on a standard Transformer architecture consistently produces scientific hypotheses independently judged as “original” in blind evaluations, and these hypotheses are subsequently validated by experiment.
Conclusion: Two Fundamentally Different Types of Intelligence
The core conclusion of this paper is not “AI is useless” or “AI is dangerous,” but a more precise structural judgment:
Computational systems based on solid topology are, in principle, unable to reproduce the full range of properties exhibited by fluid-topological intelligence. They can surpass humans on specific tasks (because solid topology possesses its own advantages: precision, repeatability, speed), but what they produce is a structurally different type of “intelligence”—not an approximation of human wisdom. The difference between the two is not one of degree but of fundamental topological type.
If humanity is to develop computation that genuinely approximates the human brain, what may be needed is not larger matrices but an entirely new mathematics—one capable of describing “self-deforming topology,” in which dimensions, connections, and operational rules are all dynamic variables rather than preset constants. Within the current mathematical apparatus, topology and differential geometry offer some relevant tools, but they fall far short of formalizing “fluid-topological intelligence.”
The separation of computation and storage will persist as the materials-science destiny of artificial computational systems throughout the entire progression from the electronic era to the photonic era to the quantum era. And the brain’s computation-storage unity—as a fluid-topological emergence sustained by a complete living system—will continue to stand as an irreplicable reference, reminding humanity: architecture is a mapping of material capability; when the material changes, the architecture should change too—but some “materials” can only be provided by life itself.
References
[1] LEECHO Global AI Research Lab (2026). Information and Noise: LLM Ontology V4. leechoglobalai.com.
[2] Piñero, J. & Solé, R. (2019). Statistical physics of liquid brains. Philosophical Transactions of the Royal Society B, 374(1774).
[3] Bianconi, G. et al. (2025). Higher-order topological dynamics. Nature Physics.
[4] Penrose, R. & Hameroff, S. (2014). Consciousness in the universe: A review of the ‘Orch OR’ theory. Physics of Life Reviews, 11(1), 39-78.
[5] Reimers, J.R. et al. (2009). Penrose-Hameroff orchestrated objective-reduction proposal for human consciousness is not biologically feasible. Physical Review E, 80(2).
[6] Kudithipudi, D. et al. (2025). Neuromorphic Computing at Scale. Nature.
[7] Intel Corporation. (2024). Hala Point: World’s Largest Neuromorphic System. Intel Newsroom.
[8] Cortical Labs. (2025). CL1 Wetware Computer. Melbourne, Australia.
[9] FinalSpark. (2024). Open and remotely accessible Neuroplatform for research in wetware computing. Frontiers in Artificial Intelligence.
[10] Yampolskiy, R. (2020). Impossibility theorems in AI safety. AI Safety research collection.
[11] Mitchell, M. (2022). What Does It Mean to Align AI With Human Values? Quanta Magazine.
[12] Ayala, F. J. (2015). Cloning humans? Biological, ethical, and social considerations. PNAS, 112(29).
[13] Kardashev, N. S. (1964). Transmission of Information by Extraterrestrial Civilizations. Soviet Astronomy, 8.
[14] Li, S. et al. (2025). Advanced Brain-on-a-Chip for Wetware Computing: A Review. Advanced Science.
[15] Wang, S. et al. (2026). Challenges and opportunities for memristors in modern AI computing paradigms. Frontiers of Physics, 21(3).
[16] Babcock, N. S. et al. (2024). Ultraviolet Superradiance from Mega-Networks of Tryptophan in Biological Architectures. Journal of Physical Chemistry B.
[17] Landauer, R. (1961). Irreversibility and Heat Generation in the Computing Process. IBM Journal of Research and Development, 5(3), 183-191.
[18] Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27(3), 379-423.
[19] Wootters, W. K. & Zurek, W. H. (1982). A single quantum cannot be cloned. Nature, 299, 802-803.
[20] Solé, R. (2019). Liquid brains, solid brains. Philosophical Transactions of the Royal Society B, 374(1774).
[21] Hua, S. et al. (2025). An integrated large-scale photonic accelerator with ultralow latency. Nature, 640, 361.
[22] Cauwenberghs, G. et al. (2025). Scaling up neuromorphic computing. Nature.
[23] Theilman, B. & Aimone, B. (2026). Neuromorphic computing for partial differential equations. Nature Machine Intelligence.
[24] Liu, Z. et al. (2025). A memristor-based adaptive neuromorphic decoder for brain-computer interfaces. Nature Electronics.
[25] Pollan, M. (2026). A World Appears: A Journey into Consciousness. Scientific American.