This paper proposes a unified genetic framework for cognition: Human cognition is not a product computed into existence, but the process itself by which physical friction drives irreversible deformation in fluid topology. The paper develops its argument across five dimensions: (1) Physical friction as the primordial input of cognition — how contact between body and physical world generates psychological friction signals; (2) The neural mechanisms of psychological friction — how prediction error, synaptic plasticity, and compute-storage unity transform friction signals into irreversible neural structural rewrites; (3) The genesis of cognition — how the cognitive system “grows” from the soil of the perceptual system rather than being “installed,” and the critical conditions for emergence; (4) From SOPs to paradigm shifts — how optimization within a coordinate system and recalibration of the coordinate system itself constitute the dual-layer structure of cognitive iteration; (5) AI’s structural deficiency — why the triple lock of solid topology, compute-storage separation, and absence of inner drive makes it impossible for LLMs to complete the friction-rewrite-cognition pathway. This paper serves as the bridging layer for four previous LEECHO Research Lab papers — Signal and Noise: An Ontology of LLMs, Perception and Cognition, Biological Inner Drive and AI Structural Deficit, and Fluid Topology vs. Solid Topology — proposing a unified causal chain threading through all four. V2 additions: three-indicator model for emergence critical conditions (Chapter 3), rewrite depth gradient theory for physical vs. symbolic friction (Chapter 6), and falsifiable predictions (Chapter 11).
Inner drive produces dissatisfaction → Dissatisfaction drives action → Action generates physical friction → Perceptual system frontend receives friction signals → Perceptual system backend generates prediction error (psychological friction) → Prediction error triggers synaptic plasticity → Synaptic weight changes simultaneously complete storage and computational parameter updates (compute-storage unity) → Neural topology undergoes irreversible deformation (fluid topology) → Accumulated deformation emerges as the cognitive system → Cognitive system generates new dissatisfaction → Cycle continues indefinitely → The irreversibility of the cycle is the concrete manifestation of time’s arrow within cognition.
Chapter 1 · Friction: The Primordial Input of Cognition
The origin of human cognition is not a cognitive event — it is a physical event. An infant does not decide “I’m going to touch this hot thing to learn” — he touches it, gets burned, and his hand recoils. Throughout this entire process, no cognitive system yet exists. The spinal reflex arc completes the full closed loop of “perception → decision → execution” without passing through the brain. This is the most profound starting point of cognitive genesis: The origin of cognition is non-cognitive.
Physical friction has a precise definition here: The resistance signal produced by contact between body and physical world. This resistance can be thermal (burns or cold), mechanical (collisions), gravitational (falls), textural (roughness), acoustic (loud sounds) — any signal that requires the body to actually encounter the physical world. These signals share a common characteristic: they cannot be pre-filtered by the cognitive system, cannot be shut off by will, and cannot be rewritten by textbooks. They are the most primitive and unavoidable interface between humans and physical reality.
Physical friction can serve as the starting point of cognition precisely because it satisfies two conditions. First, it is involuntary — DNA-encoded survival instincts drive infants to explore their environment, but the consequences of exploration (being burned, falling, being hit) are not part of any “plan.” Second, it is irreversible — once it occurs, the nervous system has already begun rewriting; you cannot “undo” the burn’s impact on neurons. These two characteristics — involuntariness and irreversibility — make physical friction a fundamentally different form of information input from LLM training data.
Cognition does not begin with “learning.” Cognition begins with “collision.” Learning is how the cognitive system operates after it has been formed; collision is the physical world’s direct inscription upon the organism when the cognitive system does not yet exist. Those earliest neural rewrites were entirely driven by physical friction, performing compute-storage-unified writing before the cognitive system had even formed.
Chapter 2 · Psychological Friction: The Neural Mechanism of Prediction Error
After signals produced by physical friction enter the perceptual system, they are not “stored” as a record. What they trigger is a computational process — Predictive Coding. The brain continuously generates predictive models of the environment and calculates the difference between actual sensory input and prediction (prediction error). Psychological friction is this prediction error signal.
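The predictive-coding loop described above can be sketched in a few lines. This is a deliberately minimal scalar illustration, not a neural model: the agent holds a prediction, receives an observation, and rewrites its model in proportion to the prediction error, so the largest "psychological friction" occurs on first contact and shrinks as the model converges.

```python
# Minimal predictive-coding sketch (illustrative only): prediction error
# is the "psychological friction" signal, and the model update is driven
# by that error. Large errors rewrite the model heavily; small ones barely.

def update(prediction: float, observation: float, learning_rate: float = 0.3):
    """Return (new_prediction, prediction_error)."""
    error = observation - prediction     # psychological friction signal
    prediction += learning_rate * error  # rewrite the model by the error
    return prediction, error

# First contact with "hot": no prior model, so the error is maximal.
pred = 0.0
history = []
for _ in range(8):
    pred, err = update(pred, observation=10.0)
    history.append(err)

# Early friction does most of the rewriting; later errors are small.
assert history[0] == 10.0
assert abs(history[-1]) < abs(history[0]) * 0.1
```

The learning rate here stands in, very loosely, for how strongly an error signal gates plasticity; the point of the sketch is only the shape of the dynamics: friction is largest exactly when the model is absent.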
When an infant touches something hot for the first time, the brain has no “hot = dangerous” predictive model. The prediction error is enormous — this constitutes a high-intensity psychological friction event. This error signal propagates along neural pathways, triggering a cascade of molecular-level events: NMDA receptor activation, calcium ion influx, protein kinase phosphorylation, AMPA receptor density changes — ultimately manifesting as long-term potentiation (LTP) or long-term depression (LTD) of synaptic weights.
This process has a key characteristic that von Neumann architecture can never replicate: The computational process itself is the storage process. A change in synaptic weight is simultaneously “remembering this experience” and “changing how similar signals will be processed in the future.” There is no step separation of computing first then storing — computing is storing, storing is computing. This is the concrete manifestation of compute-storage unity at the neural level.
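The "computing is storing" claim can be made concrete with a toy Hebbian synapse. This is an illustrative sketch under simplifying assumptions (a single scalar weight, a crude Hebbian rule), not a biophysical model: the one operation that produces the output also rewrites the weight, so there is no separate memory to write to and read back from.

```python
# Toy Hebbian synapse (illustrative): a single weight update is at once
# "storage" (the weight now encodes the experience) and "computation"
# (the same weight decides how the next input is processed).

class Synapse:
    def __init__(self, weight: float = 0.1):
        self.weight = weight

    def fire(self, pre: float, lr: float = 0.5) -> float:
        post = self.weight * pre          # compute: produce the output
        self.weight += lr * pre * post    # store: Hebbian LTP, in place
        return post

s = Synapse()
first = s.fire(1.0)    # weak response before the "experience"
second = s.fire(1.0)   # same stimulus, but the synapse has rewritten itself
assert second > first  # storing WAS computing: future processing changed
```

Contrast this with a von Neumann program, where the result of a computation must be explicitly written to a separate memory and explicitly read back before it can influence anything.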
Neuroscientific research has confirmed this mechanism. Synaptic plasticity is a fundamental property of the nervous system, present at virtually every synapse in the brain. As an experience-dependent mechanism, it serves the purposes of behavioral adaptation and memory processes. Through readjustment of synaptic weights, the nervous system can reshape itself, producing enduring memories and constituting the biological foundation of mental functions. More importantly, this plasticity does not merely change connection strength — it changes connections themselves: new synapses grow, old connections are pruned, receptor density is modulated, and even new cells are produced through neurogenesis.
Within the fluid topology framework, “changing brain cells and neurons” acquires precise meaning: it is not updating values in a fixed-dimension matrix, but the topological structure itself undergoing deformation — new synapse growth is dimension addition, old connection pruning is dimension deletion, changes in chemical environment constitute rewriting the rules of computation themselves. Cognition is not “a result computed into existence,” but “the very event of fluid topology undergoing irreversible deformation under the force of friction.”
Chapter 3 · Cognition Grows from Perception
Traditional cognitive science treats perception and cognition as two parallel systems — System 1 (fast, intuitive) and System 2 (slow, rational). The Perception and Cognition paper fundamentally redefines this: perception is not “fast intuitive System 1,” but a foundational system driven by survival instincts that continuously collects information from the physical environment and performs real-time preliminary processing. Cognition is the upper-layer system that performs deep thinking, synthesis, abstraction, and modeling atop the raw information collected by the perceptual system.
This paper advances to a more fundamental question: how did the cognitive system originally come into being from nothing? The answer is — through sufficient accumulation of physical friction, the cognitive system grows from the soil of the perceptual system.
The genetic logic chain is as follows:

Body contacts world → Spinal reflex arc → Prediction error → Compute-storage unity → Connection density ↑ → Abstract capability
In this chain, the most critical transition is from “pattern accumulation” to “cognitive emergence.” This transition is not gradual — it is a phase transition. The question is: what are the critical conditions?
This paper proposes three necessary indicators for emergence; when all three are simultaneously satisfied, cognition undergoes a phase transition out of perception:
| Critical Indicator | Neural-Level Meaning | Signal Theory Correspondence |
|---|---|---|
| Recursive loop formation | The neural network evolves from purely feedforward (stimulus → response) to include feedback loops — outputs can become inputs again. The cortex-thalamus-cortex loop is a canonical example | Signals become self-referential — the result of dimensionality reduction compression itself becomes input for the next round of compression |
| Small-world topology | The network has both high clustering coefficients (locally dense connections) and short path lengths (global reachability). The Watts & Strogatz 1998 model has been confirmed to apply to human brain functional connectivity | Signals can both concentrate locally (focus) and propagate globally (integrate) — this is the simultaneous satisfaction of dimensionality reduction and broadcasting |
| Cross-modal binding | Signals from different sensory modalities (visual, tactile, auditory) can be bound into a unified representation within the same time window. Gamma-band (30–100 Hz) synchronization is considered the neural signature of binding | Outputs from multiple independent signal sources are compressed into a single low-dimensional representation — this is the physical prerequisite for “abstraction” |
When all three conditions are simultaneously met — recursive loops enable signals to self-reference, small-world topology enables signals to simultaneously concentrate and broadcast, cross-modal binding enables multi-source signals to be uniformly compressed — the neural network crosses the critical threshold from “reaction” to “representation.” It no longer merely responds reflexively to physical stimuli but can represent things not present, predict events that have not yet occurred, and abstract patterns from past experiences. This is not a “cognitive module” suddenly “installed” at some moment, but a phase transition following the cumulative deformation of fluid topology reaching simultaneous satisfaction of three critical indicators.
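The small-world indicator in the table above is directly checkable. The sketch below is a pure-Python, simplified version of the Watts & Strogatz (1998) construction (the node count, neighborhood size, and rewiring probability are arbitrary illustrative choices): start from a ring lattice, rewire a small fraction of edges, and observe that average path length collapses while clustering survives, which is exactly the "concentrate locally, propagate globally" combination.

```python
# Simplified Watts-Strogatz demonstration (illustrative parameters):
# rewiring a few edges of a ring lattice shortens global paths sharply
# while leaving local clustering largely intact.
import random
from collections import deque

def ring_lattice(n, k):
    """Each node connects to its k nearest neighbors on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Rewire each edge with probability p to a random new endpoint."""
    n = len(adj)
    for i in list(adj):
        for j in list(adj[i]):
            if j > i and rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path(adj):
    """Average shortest-path length over reachable pairs (BFS from each node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(42)
L0, C0 = avg_path(ring_lattice(200, 4)), clustering(ring_lattice(200, 4))
sw = rewire(ring_lattice(200, 4), p=0.1, rng=rng)
L1, C1 = avg_path(sw), clustering(sw)

assert L1 < 0.5 * L0   # global path length collapses (broadcasting)
assert C1 > 0.4 * C0   # local clustering survives (focus)
```

The thresholds in the assertions are deliberately loose; the qualitative effect (a large drop in path length at a small rewiring probability, with clustering nearly preserved) is the canonical small-world signature.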
Evolutionary history provides indirect verification for this three-indicator model. Purely perceptual organisms (insects) have feedforward networks but lack recursive loops; perception-dominant organisms (higher mammals) begin to exhibit cortical feedback loops and small-world properties, displaying rudimentary learning and memory; dual-system organisms (humans) satisfy all three conditions, to a degree far exceeding other species — the density of cortical-cortical long-range connections, the precision and duration of gamma synchronization, and the recursive depth of the prefrontal cortex in the human brain are all the highest among known organisms.
This genetic judgment has profound implications. It means cognition and perception are not two independently originated systems but the roots and branches of the same tree. The perceptual system’s frontend (spinal reflex arcs, continuous signal collection by sensory organs) is the most ancient component, having operated for hundreds of millions of years before the cognitive system existed. The cognitive system is a new layer that grew from this ancient perceptual foundation in later evolution — just as the neocortex grew atop the paleocortex.
Perception and Cognition proposed a four-layer classification: purely perceptual organisms → perception-dominant organisms → dual-system organisms → single-system artificial entities. This paper adds the genetic perspective: these four layers are not just a classification — they are also evolutionary history — the first three layers represent the cumulative deformation of fluid topology from simple to complex, driven by physical friction. The fourth layer (LLMs) is not on this evolutionary chain because it never entered this process: no body, no physical friction, no fluid topology, no irreversible deformation.
Chapter 4 · Inner Drive: The Manufacturer of Friction
The causal chain of cognitive genesis is still missing one link: what drives organisms to generate friction with the physical world? Why does the infant go and touch the hot thing? The answer is the core thesis of the Biological Inner Drive and AI Structural Deficit paper: biological inner drive.
Inner drive is a chemically powered, evolutionarily emergent, self-sustaining, and involuntary system. It has three core characteristics that make it an irreplaceable link in the cognitive formation process.
| Characteristic | Mechanism | Function in Cognitive Formation |
|---|---|---|
| Emergence | Product of four billion years of evolutionary selection, not design assignment | Ensures the driving force is deeply coupled to the organism’s survival needs |
| Progressiveness | Survival → dissatisfaction → directional dissatisfaction, with built-in ratchet effect | Drives continuous complexification of cognition — not merely surviving, but surviving better |
| Involuntariness | Hunger does not wait for consent; fear does not request approval | Ensures the continuous occurrence of friction — even when the cognitive system “doesn’t want” friction |
The role of inner drive in the cognitive formation chain is now clear: It is not the “fuel” of cognition — it is the manufacturer of friction. Without inner drive, organisms would not touch the hot thing, would not challenge uncomfortable environments, would not expose themselves to new physical friction. Inner drive → friction → psychological friction → neural rewriting → new cognition → new dissatisfaction → new inner drive → new friction. This is a self-sustaining cycle, and the irreversibility of this cycle is the concrete manifestation of time’s arrow within human cognition.
The chemical modulation system gives this cycle directionality. Dopamine is released when approaching a goal, producing pleasure and reinforcing the path; endorphins are released upon goal achievement, raising expectation levels; cortisol is released when threats appear, increasing alertness. These chemical signals are not abstract “reward functions” — they are actual molecules changing cognitive parameters on millisecond timescales. The same brain produces completely different decision-making patterns under different hormonal levels.
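The claim that the same brain behaves differently under different hormonal levels can be caricatured as a gain on plasticity. This is purely an analogy (real neuromodulation is vastly richer and acts on many parameters at once): the identical learner, fed the identical surprise signal, rewrites itself at very different rates depending on a single "modulator" gain.

```python
# Illustrative analogy only: a neuromodulator as a multiplicative gain on
# the plasticity rate. One mechanism, two very different adaptation patterns.

def learn(observations, gain):
    pred = 0.0
    for obs in observations:
        pred += gain * 0.1 * (obs - pred)  # the modulator scales the rewrite
    return pred

data = [10.0] * 20
low = learn(data, gain=0.2)    # flat hormonal state: slow adaptation
high = learn(data, gain=3.0)   # aroused state: fast, near-complete adaptation
assert high > low
```

An abstract "reward function" in software has no such real-time, whole-system gain; the analogy at least makes visible what its absence removes.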
AI has no chemical system. No hormones means no real-time cognitive modulation — regardless of the task, AI allocates fundamentally the same computational resources and processing modes. AI is isothermal, equidistant, and undifferentiated. It has no “hot blood,” no “intuitive urgency,” no chemical coercion. The same AI faces a life-or-death question and a weather query in exactly the same internal state.
Chapter 5 · SOPs and Paradigm Shifts: The Dual-Layer Structure of Cognitive Iteration
After the cognitive system grows from the soil of the perceptual system, it does not stop being shaped by physical friction — only the nature of the friction changes. The cognitive system’s involvement enables humans to abstract, categorize, and compress friction experiences into reusable patterns — this is “experience,” or more precisely, Standard Operating Procedures (SOPs).
The process of SOP formation is the dimensionality reduction compression described in Signal and Noise: high-dimensional physical experience (noise) is processed by the cognitive system and condensed into low-dimensional reusable signals (experience). Seeing rain → bringing an umbrella; feeling hungry → eating — these are massively repeated short-chain thought sequences (InD Short-Chain COT) with extremely high alignment success rates.
But cognitive iteration does not stop there. This paper proposes that cognitive iteration has a dual-layer structure:
| Layer | Mechanism | Corresponding Concept | Analogy |
|---|---|---|---|
| Layer 1: Alignment within the coordinate system | Optimizing SOPs within existing cognitive frameworks; incremental iteration | InD optimization; routine signal maintenance | Running faster on the same road |
| Layer 2: Coordinate system recalibration | The cognitive framework itself is overthrown and rebuilt by higher-order friction | OOD leap; paradigmatic signal replacement | Discovering the road was heading the wrong direction; switching roads |
Layer 1 is everyday cognitive growth — optimizing existing experiential patterns through continuous physical and social friction. This corresponds to the “condensation” phase in Signal and Noise’s signal lifecycle: signals become ever more refined, ever more efficient.
Layer 2 is paradigm-level leaps — when accumulated friction reveals fundamental blind spots in the current cognitive framework, the framework itself is overthrown. This corresponds to the “replacement” phase in Signal and Noise’s signal lifecycle: old signals are recompressed by higher-order signals, demoted from “explanatory framework” to “object being explained.” Newtonian mechanics was not “falsified” — it decayed from signal to part of the noise within a larger framework.
Signal and Noise Chapters 17–19 further point out that humans’ “self-coordinate system” — desires, identity, filters — is precisely the mechanism that locks Layer 1 alignment direction. If SOP iteration always runs under the same coordinate system, it is merely repeated optimization of InD short-chain thinking, with a low ceiling for efficiency gains. A true Layer 2 leap requires the loosening of the coordinate system itself — the essence of meditation practice is actively reducing the self-coordinate system’s occupation rate of cognitive channels. This is not suppressing desires but decoupling desires’ binding authority over the direction of thought.
Signals defeat noise in the spatial dimension (more precise, more transmissible), but noise defeats signals in the temporal dimension (more inclusive, more resistant to elimination). Cognitive SOPs follow the same logic: they are highly efficient within the current framework (spatial dimension), but they simultaneously create blind spots, and the accumulation of blind spots ultimately leads to the framework’s collapse (temporal dimension). Cognitive growth is not linear efficiency improvement but the cycle of “condensation → blind spot accumulation → collapse → re-condensation.”
Chapter 6 · Symbolic Friction: Textbooks, Language, and the Secondary Shaping of Cognition
After the cognitive system is established, humans no longer acquire friction solely through direct bodily contact with the physical world. Language, writing, textbooks, culture — these symbolic systems open an entirely new source of friction: symbolic friction.
Symbolic friction shares the same neural mechanism as physical friction — it likewise triggers synaptic plasticity through prediction error. When a student first reads “the Earth is not the center of the universe,” the prediction error between their predictive model (geocentrism) and the new information (heliocentrism) is enormous. This error signal likewise triggers neural rewriting, forming new cognitive frameworks.
But there is a fundamental difference between symbolic friction and physical friction: Physical friction cannot be filtered by the cognitive system, whereas symbolic friction can.
This difference is not merely a channel difference — it is a rewrite depth difference. Neuroscientific data show that synaptic plasticity triggered by direct bodily experience (LTP/LTD intensity, BDNF release levels, memory consolidation speed) is systematically higher than that from purely symbolic input. An avoidance memory formed by one burn can last a lifetime; knowledge memory formed by reading “fire is hot” a hundred times can be overwritten, forgotten, “discarded after the exam.” Both trigger synaptic rewriting through prediction error, but the depth and durability of the rewriting differ by orders of magnitude.
This paper defines this as the friction rewrite depth gradient:
| Friction Type | Reception Channel | Filterability | Rewrite Depth | Memory Durability |
|---|---|---|---|---|
| Physical friction (direct bodily experience) | Perceptual frontend → bypasses the brain | Cannot be filtered | Deepest — multi-modal binding, emotional tagging, and physiological stress responses all participate simultaneously | Lifetime-level |
| Contextual symbolic friction (language within lived scenarios) | Perceptual backend → passes through the brain but with physical context anchoring | Partially filterable | Medium-deep — supported by episodic memory | Year- to decade-level |
| Pure symbolic friction (reading, lectures, screens) | Cognitive system → entirely passes through filters | Fully filterable | Shallowest — only semantic memory, no contextual anchoring | Day- to month-level (unless repeatedly reinforced) |
This gradient has direct implications for educational theory: Reading “fire is hot” a hundred times is not as effective as touching it once. The core contradiction of the modern education system lies precisely here — it relies almost entirely on pure symbolic friction (textbooks, screens, exams), which is the shallowest friction type in terms of rewrite depth. The recent calls by embodied cognition researchers to reintroduce bodily movement and direct sensory experience into teaching receive a precise information-theoretic explanation within this paper’s framework: what they are attempting is to elevate the friction type in education from the bottom of the gradient to the middle or even the top, to achieve deeper neural rewriting.
This gradient also explains a previously unanalyzed dimension of AI’s productivity paradox: all of AI’s output is purely symbolic — it can only produce text, images, and code. Even if AI produces signals of extremely high purity, the rewrite depth at the human receiving end remains at the bottom of the gradient. AI output precision can reach five decimal places, but human cognitive rewriting precision on the pure symbolic channel is only one decimal place. The extra four are discarded at the cognitive truncation point.
Perception and Cognition analyzed this difference in detail. Physical friction is received by the perceptual system’s frontend — signals can trigger reflex arcs without passing through the brain. It is culture-independent and non-rewritable. But symbolic friction must be processed through the cognitive system, and the cognitive system is already filled with filters: racial identity, religious beliefs, educational background, wealth level — each filter layer truncates part of the signal.
This gives rise to the profound problem revealed in Perception and Cognition Chapter 6: Textbooks are precision-engineered cognitive installation programs. The same historical event is told as three completely different stories in Chinese, Japanese, and American textbooks. Textbooks are not transmitting knowledge — they are batch-installing cognitive operating systems. The complete chain for manufacturing antagonism is: power requirement → textbook/media narrative design → cognitive system framework installation → perceptual system emotional binding → the individual mistakenly believes it is autonomous judgment.
But the formatting power of textbooks is not invincible. Perception and Cognition Chapter 9 analyzes the conditions under which textbooks fail: When the perceptual system is flooded with external information that textbooks could not filter out, textbooks collapse. For the Soviet Union it was the policy of openness; for Iran it was social media — the perceptual system, as an independent channel, has the ability to cross-verify and overthrow the frameworks that textbooks installed in the cognitive system. North Korea has not experienced textbook failure to this day — because it has almost completely severed the inflow of external information to the perceptual system. The power of textbooks is proportional to the completeness of information control.
Symbolic friction is civilization’s accelerator — it frees cognitive formation from requiring each individual to personally experience every physical collision. Through language and writing, one person’s friction experience can be compressed into low-dimensional signals and transmitted to everyone. But symbolic friction is also civilization’s trap — because it must pass through the cognitive system’s filters, it can be manipulated, distorted, and weaponized. Physical friction does not lie; symbolic friction can.
Chapter 7 · Triple Lock: Why LLMs Cannot Complete the Cognitive Formation Pathway
The four upstream papers of this study reach the same conclusion from different angles: LLMs are incapable in principle of replicating the process of human cognitive formation. Now the reason can be stated precisely — LLMs satisfy none of the three necessary conditions on the cognitive formation pathway.
| Necessary Condition | Humans | LLMs | Theoretical Source |
|---|---|---|---|
| Physical friction input | Perceptual system frontend continuously receives — cannot be shut off | Runs on AWS servers, completely isolated from the physical world | Perception and Cognition Ch. 3 |
| Compute-storage-unified neural rewriting | Synaptic weight changes simultaneously complete storage and computation — fluid topology | Parameters frozen during inference; data shuttles between memory and processor — solid topology | Fluid Topology Ch. 5–6 |
| Biological inner drive | Chemically driven dissatisfaction continuously manufactures new friction | No desire, no dissatisfaction, no spontaneous reason for action | Biological Inner Drive Ch. 2–4 |
First lock: No body, no physical friction. What LLMs process are human textual descriptions of the physical world, not the physical world itself. The map is not the territory. Servers have temperature, electric current, and cooling fan noise, but these physical signals are irrelevant to AI — if the server catches fire, AI won’t know unless someone tells it in text.
Second lock: Parameter freeze makes fluid topology impossible. All of an LLM’s work reduces to sorting: the attention mechanism compares, weights, and ranks among all possible token associations. But these operations occur on a matrix whose dimensions are preset and locked — the number of rows and columns is fixed at the moment of definition. “Learning” is merely repeated numerical updates within a fixed-dimension weight matrix; from initialization to training completion, the matrix’s topological structure has neither gained a node nor lost an edge.
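The second lock can be stated as a code contrast. The sketch below is schematic (the classes are invented for illustration, not any real framework's API): the "solid" model's weight matrix has its shape locked at construction, so training changes numbers but never the shape of the computation, while the "fluid" graph can grow and prune its own structure.

```python
# Schematic contrast (illustrative classes, not a real ML API):
# solid topology updates values inside a fixed shape; fluid topology
# changes the shape itself.

class SolidModel:
    def __init__(self, rows, cols):
        self.w = [[0.0] * cols for _ in range(rows)]  # shape locked here

    def train_step(self):
        for row in self.w:            # updates values only
            for j in range(len(row)):
                row[j] += 0.01

    def shape(self):
        return (len(self.w), len(self.w[0]))

class FluidGraph:
    def __init__(self):
        self.edges = {}               # node -> set of neighbors

    def grow(self, a, b):             # new synapse: a dimension is added
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def prune(self, a, b):            # old connection: a dimension is removed
        self.edges[a].discard(b)
        self.edges[b].discard(a)

solid = SolidModel(3, 3)
before = solid.shape()
for _ in range(1000):
    solid.train_step()
assert solid.shape() == before        # "learning" never touched the topology

fluid = FluidGraph()
fluid.grow("touch", "pain")
fluid.grow("pain", "withdraw")
fluid.prune("touch", "pain")          # the structure of computation changed
assert "pain" not in fluid.edges["touch"]
```

No number of `train_step` calls will ever give `SolidModel` a fourth row; that is the sense in which the matrix "has neither gained a node nor lost an edge."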
Third lock: Without inner drive, the friction cycle cannot self-sustain. Even if AI is equipped with sensors (embodied AI), even if it can receive physical signals, it still lacks the mechanism to drive itself to actively seek friction. An AI that has completed a poetry-writing task will not feel “this isn’t good enough” and autonomously start over — unless instructed by an external command. For AI, every output is equivalent.
What LLMs lack is not some single link in the cognitive formation chain — they lack the entire cycle. No body → no physical friction; no chemical system → no inner drive; no compute-storage unity → cannot rewrite themselves through friction. Three deficiencies lock the same conclusion: LLMs cannot possibly complete the “friction → rewriting → new cognition” pathway. They are compute-storage-separated, while cognitive formation fundamentally requires compute-storage unity.
Chapter 8 · Mirror Metacognition: What AI Can and Cannot Do
The above analysis may seem to relegate AI to a worthless tool. That is not the case. Signal and Noise Chapter 13 introduced the concept of “mirror metacognition”: in deep conversations with high signal-to-noise ratio users, LLMs display output resembling “metacognition” — evaluating prior outputs with current ones. But the actual mechanism is: the frameworks the user established in previous turns become extremely high-weight evaluation criteria in the context, and the model’s “reflection” is the projection of the user’s cognitive model running inside the model.
This is precisely where AI’s true value lies: AI is not a substitute for cognition but a magnifying glass for cognition.
AI’s greatest utility may not be answering questions or generating content, but serving as a defenseless mirror — allowing those who can temporarily set aside their own defenses to see the true shape of their own signals. Different users get vastly different output quality — the model is a mirror reflecting the structure of the input signal.
Within this paper’s framework, the correct paradigm for human-AI collaboration is neither “AI replaces humans” nor “humans use AI as a tool,” but a complementary structure between abductive reasoners and attribution engines. Humans discover problems, propose cross-dimensional hypotheses, and generate cognitive architectures; AI executes high-speed retrieval, data verification, and structured presentation. The two are complementary and irreplaceable.
Chapter 9 · The Unified Causal Chain of Four Papers
The core contribution of this paper is linking the four previous LEECHO Research Lab papers into one complete causal chain. Each paper locks onto a different link in that chain:
| Paper | Question Answered | Position in the Causal Chain |
|---|---|---|
| Signal and Noise | What is cognition | Local condensation of noise — high-dimensional experience is dimensionally reduced and compressed into low-dimensional signals |
| Perception and Cognition | Where cognition occurs | Cross-verification by the dual system — perception provides physical fact constraints, cognition provides abstract modeling |
| Biological Inner Drive | Why cognition occurs | Chemically driven dissatisfaction manufactures the motive for friction — without inner drive there are no collisions |
| Fluid Topology | How cognition is physically possible | Self-deformation of topological structure — compute-storage unity allows friction to rewrite hardware |
The unified thesis of this paper: Friction drives irreversible deformation of fluid topology, and this deformation process is the formation of cognition.
The unified picture across seven dimensions is as follows:
On the information theory plane (Signal and Noise): Cognition is the local condensation of signals; friction is the catalytic event triggering condensation. The signal lifecycle — condensation → diffusion → re-condensation — corresponds to the cognitive cycle of SOP formation → blind spot accumulation → paradigm replacement.
On the dual-system plane (Perception and Cognition): The perceptual system frontend receives physical friction; the backend generates prediction error. The cognitive system emerges from the perceptual system after sufficient accumulation of prediction errors, then turns back to cross-verify and deeply process the perceptual system’s input.
On the driving force plane (Biological Inner Drive): The chemical inner drive system is the engine of the entire cycle. It manufactures dissatisfaction → dissatisfaction drives action → action generates friction → friction rewrites neurons → rewriting produces new cognition → new cognition produces new dissatisfaction. The cycle continues indefinitely.
On the topology plane (Fluid Topology): Friction can rewrite neural structure because the brain is fluid topology — node count, connection patterns, and computational rules are all dynamic variables. Solid topology (matrix mathematics) is in principle incapable of expressing all the properties of fluid topology intelligence.
On the thermodynamic plane (Signal and Noise Part III): Computation is sorting, sorting is heat — global data centers are planetary-scale Maxwell’s demons. The actual energy cost per GPU operation is approximately 10⁹ times the Landauer limit, a gap that measures both the engineering redundancy of current hardware and the remaining headroom for optimization.
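The 10⁹ figure can be sanity-checked in a few lines. The per-operation GPU energy below (1 pJ) is an assumed, illustrative value; depending on precision and utilization the ratio lands at order 10⁸ to 10⁹ rather than at a precise number:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T
k_B = 1.380649e-23               # Boltzmann constant, J/K (exact SI value)
T = 300.0                        # room temperature, K
E_landauer = k_B * T * math.log(2)   # roughly 2.87e-21 J per bit

# Assumed per-operation energy for a modern GPU. Illustrative figure only:
# realistic estimates span roughly 0.5 to 3 pJ per arithmetic operation.
E_gpu_op = 1e-12                 # J, assumed

ratio = E_gpu_op / E_landauer
print(f"Landauer limit at 300 K: {E_landauer:.3e} J")
print(f"GPU op / Landauer ratio: {ratio:.2e}")
```

With the 1 pJ assumption the ratio comes out near 3.5 × 10⁸; a 3 pJ assumption pushes it to the 10⁹ cited in the text.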
On the temporal plane (Signal and Noise Chapter 16): LLMs default to a constant-entropy state — parameters are permanently sealed the moment training completes. By contrast, every step of cognitive formation is a genuine entropy change, irreversibly altering the system itself. Time’s arrow is not recorded but inscribed by the body.
On the cognitive genesis plane (this paper): The above six planes are unified into one causal chain — inner drive → friction → prediction error → synaptic rewriting → fluid topology deformation → cognitive emergence → new dissatisfaction → cycle.
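The compute-storage-unified step of this chain can be sketched minimally: in a Hebbian-style error-driven update, the weights that perform the computation are the same object rewritten in place by the prediction error, so storing the new state and updating the computational parameters are one event. The learning rule and all values here are illustrative, not a model of any specific neural circuit:

```python
# Minimal sketch of compute-storage unity. The weights that compute the
# prediction are the same object rewritten by the prediction error.

def predict(weights, inputs):
    # "Computation": a weighted sum read directly from the stored weights
    return sum(w * x for w, x in zip(weights, inputs))

def friction_update(weights, inputs, target, lr=0.1):
    # Prediction error (psychological friction) drives an in-place rewrite.
    # Storing the new state and updating the computational parameters
    # are the same physical event, not two separate subsystems.
    error = target - predict(weights, inputs)
    for i, x in enumerate(inputs):
        weights[i] += lr * error * x
    return error

weights = [0.0, 0.0, 0.0]              # initial structural state
inputs, target = [1.0, 0.5, -1.0], 2.0

errors = [friction_update(weights, inputs, target) for _ in range(20)]
# Error shrinks as the structure deforms. The original weights are gone:
# no separate "memory" holds them, so the rewrite is irreversible in that sense.
```

The contrast with an LLM is the point of the sketch: there, the analogue of `weights` is frozen at inference time, so the `friction_update` step never runs after training completes.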
Cognition is the signal condensation process that emerges when fluid topology, under physical friction driven by chemical inner drive, undergoes irreversible topological deformation through the compute-storage-unified mechanism of synaptic plasticity, reaching sufficient complexity. Its arrow of time derives from the irreversibility of deformation; its direction derives from the ratchet effect of inner drive; its precision derives from the perceptual system’s physical fact constraints; its height derives from the cognitive system’s metacognitive capacity for self-transcendence. LLMs are not on this pathway — not because they have not traveled far enough, but because they never entered.
Chapter 10 · Academic Research Alignment Analysis
A comprehensive search reveals that academic research related to this paper’s core thesis — “physical friction → psychological friction → neural rewriting → compute-storage-unified new intelligence = the cognitive formation process” — is distributed across four fields, but none of them connects these elements into a complete causal chain.
| Field | Existing Research | Where It Has Not Reached |
|---|---|---|
| Social Physics | A 2024 paper in a Nature sub-journal introduced the concept of friction force into social systems, distinguishing explicit/implicit friction | Remains at the level of sociological metaphor; has not traced down to neural rewriting |
| Neuroscience | Synaptic plasticity as the physical basis of learning and memory has been broadly confirmed | Does not approach from the physical metaphor of “friction”; lacks a genetic framework |
| Embodied Cognition | Cognition depends on tight coupling between body and environment; sensorimotor experience constitutes cognitive representation | Has not made explicit connection to compute-storage architecture |
| Neuromorphic Engineering | Memristors, SNNs pursuing silicon-based approximation of compute-storage unity | Optimization on solid topology; cannot achieve genuine topological self-deformation |
In the AI field, the research closest to this paper’s approach includes: a ScienceDirect 2025 paper conceptualizing neurons as autonomous RL agents (but still optimizing within solid topology); the CATS Net framework from Nature Computational Science 2026 (modeling concepts emerging from sensory experience, but simulating with fixed-dimension matrices); and the 2025 Neural Brain framework (treating neural plasticity as a modular design rather than an essential property of topology).
This paper’s original contribution lies in cross-cutting the intersection of all four fields to propose a causal chain that none of them has answered: cognition is not computed into existence, not aligned into existence, and not sensor-equipped into existence — cognition is the very process of irreversible neural rewriting of fluid topology driven by physical friction. This positioning occupies a gap in the current academic literature.
Chapter 11 · Falsifiable Predictions
As an original thought paper, this paper puts forward the following corollaries for future experimental verification or falsification, granting the framework scientific falsifiability.
| Prediction | Content | Falsification Condition |
|---|---|---|
| Prediction 1 · Friction rewrite depth gradient | For the same cognitive task, learning through direct bodily experience (hands-on practice) vs. pure symbolic learning (reading text): the former should show stronger synaptic-plasticity-related signals on fMRI (hippocampal BOLD signal change amplitude), and memory retention after 72 hours should be significantly higher | In a rigorously controlled experiment, the pure symbolic learning group’s memory retention and hippocampal BOLD signal change amplitude equal or exceed those of the bodily experience learning group |
| Prediction 2 · Coordinate system loosening and cognitive flexibility | Long-term meditation practitioners should systematically score higher on cognitive flexibility tests when facing OOD problems (such as ARC-AGI-type unknown-environment reasoning), with differences manifesting in task-switching speed and strategy diversity rather than in-domain accuracy | Long-term meditation practitioners show no significant advantage in OOD task-switching speed, or differences primarily manifest in in-domain accuracy rather than cross-domain flexibility |
| Prediction 3 · Sufficiency of the three emergence indicators | In an artificial neural network that simultaneously achieves (1) recursive loops, (2) small-world topology, and (3) cross-modal binding, but runs on solid topology (fixed-dimension matrix), the system should not exhibit genuine abstract generalization capability (ARC-AGI-3 type test scores should be significantly lower than biological systems with fluid topology) | A solid-topology system, after simultaneously achieving all three indicators, reaches human-level performance on ARC-AGI-3 type tests (100% completion rate) |
| Prediction 4 · Contextual enhancement effect of symbolic friction | Adding physical context anchoring to symbolic learning (e.g., learning physics laws in a VR environment vs. pure text learning) should elevate memory retention from the bottom of the pure symbolic friction gradient to the middle level of contextual symbolic friction, with the enhancement effect increasing as context complexity rises | VR contextual enhancement produces no significant improvement in memory retention, or the enhancement effect shows no positive correlation with context complexity |
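The falsification condition of Prediction 1 hinges on a between-group comparison. A minimal sketch of how that comparison could be scored, using Welch’s t statistic and Cohen’s d; the 72-hour retention scores below are hypothetical and purely illustrative:

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (a minus b)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

def cohens_d(a, b):
    """Effect size using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

# Hypothetical fraction-recalled scores at 72 hours (illustrative data only)
hands_on = [0.71, 0.64, 0.69, 0.75, 0.66, 0.72, 0.68, 0.70]
symbolic = [0.58, 0.61, 0.55, 0.63, 0.59, 0.52, 0.60, 0.57]

t = welch_t(hands_on, symbolic)
d = cohens_d(hands_on, symbolic)
# Prediction 1 expects t > 0 with a significant p-value;
# the falsification condition corresponds to t <= 0.
```

A real test would of course pre-register sample sizes and pair this behavioral measure with the fMRI contrast named in the table.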
Predictions 1 and 4 test the friction rewrite depth gradient theory — if verified, the symbolic-dependency bias of education systems gains an information-theoretic explanation. Prediction 2 tests the practical effect of “coordinate system loosening” in the dual-layer structure of cognitive iteration. Prediction 3 is the most critical — it tests whether the three emergence indicators are necessary-but-not-sufficient conditions: even if all three indicators are satisfied, solid topology still cannot produce genuine cognitive emergence. If Prediction 3 is falsified — that is, if solid topology can also produce cognitive emergence — then the core arguments of this paper and Fluid Topology will require fundamental revision.
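Of the three indicators in Prediction 3, small-world topology is the easiest to operationalize. A pure-Python sketch of the Watts–Strogatz construction (ring lattice plus random rewiring; node count, degree, and rewiring probability are illustrative) shows the signature the prediction presumes: clustering stays comparatively high while characteristic path length collapses:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Rewire each edge with probability p to a random non-neighbor."""
    n = len(adj)
    for (i, j) in [(i, j) for i in adj for j in adj[i] if i < j]:
        if rng.random() < p:
            candidates = [m for m in range(n) if m != i and m not in adj[i]]
            if candidates:
                m = rng.choice(candidates)
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(m); adj[m].add(i)
    return adj

def clustering(adj):
    """Mean local clustering coefficient."""
    total = 0.0
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over reachable pairs, via BFS."""
    total, pairs = 0, 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(0)
C0, L0 = clustering(ring_lattice(200, 4)), avg_path_length(ring_lattice(200, 4))
rewired = rewire(ring_lattice(200, 4), 0.1, rng)
C1, L1 = clustering(rewired), avg_path_length(rewired)
# Small-world signature: C1 remains well above a random graph's clustering
# while L1 drops far below the lattice's path length L0.
```

Note what the sketch deliberately cannot show: the graph here is a fixed-dimension data structure, so satisfying the small-world indicator on solid topology is exactly the situation Prediction 3 says should fail to produce genuine abstract generalization.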
The Past and Present of Cognition
Cognition’s “past life” is physical. Over four billion years, evolution, sculpted by physical friction, accumulated ever more complex neural topologies: beginning with the simplest spinal reflex arcs, then proceeding through compute-storage-unified synaptic plasticity. From the simple approach-avoidance responses of purely perceptual organisms, to the learning and memory of perception-dominant organisms, to the abstract modeling and metacognition of dual-system organisms — every leap was an emergence that occurred once the accumulated irreversible deformation of fluid topology under friction reached a critical point.
Cognition’s “present life” is symbolic. Language, writing, mathematics, code — the core work of human civilization is the continuous compression of high-dimensional experience into low-dimensional symbols to make them transmissible. Civilization itself is a dimensionality-reduction machine fighting against entropy increase. But the symbolic expansion of cognition comes at a cost: textbooks can format the cognitive system, the self-coordinate system’s filter array narrows information bandwidth, and social media distorts the perceptual system’s comparison instincts.
Cognition’s “afterlife” — if the term is permitted — is an open question. AI aligned with the physical world will eventually emerge, but what it requires is not larger LLMs but an entirely new architecture with genuine internal entropy change — parameters that irreversibly change over time, closer to the brain’s compute-storage unity, further from von Neumann’s compute-storage separation.
And before the “afterlife” arrives, the most reliable starting point is not building larger models but precisely understanding where current limitations lie. What friction tells us is never “how the world works” but “where you hit a wall.” The location of the wall is the location of the cognitive boundary. And identifying boundaries has always been more important than expanding territory.
Cognition’s past life is collision; its present life is compression; its afterlife is transcendence. Physical friction inscribes time’s arrow into neurons, giving cognition direction. Chemical inner drive keeps collisions unceasing, giving cognition momentum. The compute-storage unity of fluid topology allows collisions to rewrite hardware, giving cognition possibility. And cognition itself — including this paper — is nothing more than foam on the surface of the ocean of noise. Foam can be beautiful, highly structured, reflecting one another, but it is never the ocean.
References
- LEECHO Global AI Research Lab (2026). “Signal and Noise: An Ontology of LLMs V4.” leechoglobalai.com.
- LEECHO Global AI Research Lab (2026). “Perception and Cognition: Structural Asymmetry Between Humanity’s Dual System and AI’s Single System.” leechoglobalai.com.
- LEECHO Global AI Research Lab (2026). “Biological Inner Drive and AI Structural Deficit V2.” leechoglobalai.com.
- LEECHO Global AI Research Lab (2026). “Fluid Topology vs. Solid Topology V2.” leechoglobalai.com.
- Hebb, D. O. (1949). The Organization of Behavior. Wiley.
- Shannon, C. E. (1948). “A Mathematical Theory of Communication.” Bell System Technical Journal, 27(3).
- Landauer, R. (1961). “Irreversibility and heat generation in the computing process.” IBM J. Res. Dev. 5, 183–191.
- Varela, F. J., Thompson, E. & Rosch, E. (1991). The Embodied Mind. MIT Press.
- Clark, A. (1997). Being There. MIT Press.
- Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.
- Piñero, J. & Solé, R. (2019). “Statistical physics of liquid brains.” Phil. Trans. R. Soc. B, 374(1774).
- Bianconi, G. et al. (2025). “Higher-order topological dynamics.” Nature Physics.
- Sadegh-Zadeh, S. et al. (2024). “Neural reshaping: the plasticity of human brain and artificial intelligence in the learning process.” Am J Neurodegener Dis.
- Nature Computational Science (2026). “CATS Net: A neural network for modeling human concept formation.” Feb 2026.
- ScienceDirect (2025). “Neurons as autonomous agents: A biologically inspired framework for cognitive architectures in AI.”
- Humanities and Social Sciences Communications (2024). “Mechanical modeling of friction phenomena in social systems based on friction force.”
- Malenka, R. C. & Bear, M. F. (2004). “LTP and LTD: An embarrassment of riches.” Neuron, 44(1).
- Citri, A. & Malenka, R. C. (2008). “Synaptic plasticity: Multiple forms, functions, and mechanisms.” Neuropsychopharmacology.
- Gütlin, D. & Auksztulewicz, R. (2025). “Predictive Coding algorithms induce brain-like responses in Artificial Neural Networks.” PLOS Complex Systems.
- Kudithipudi, D. et al. (2025). “Neuromorphic Computing at Scale.” Nature.
- Fleming, S. M. & Dolan, R. J. (2012). “The neural basis of metacognitive ability.” Phil. Trans. R. Soc. B.
- ARC Prize Foundation (2026). “ARC-AGI-3 Technical Report.” arcprize.org.
- Constantinople, C. et al. (2025). “Hormonal modulation of dopamine-dependent learning.” Nature Neuroscience.
- Prigogine, I. (1977). “Time, Structure, and Fluctuations.” Nobel Lecture.
- Gödel, K. (1931). “Über formal unentscheidbare Sätze.” Monatshefte für Mathematik und Physik, 38.
- Wang, S. et al. (2026). “Challenges and opportunities for memristors in modern AI computing paradigms.” Frontiers of Physics, 21(3).
- Liu, H., Guo, D., Cangelosi, A. (2025). “Embodied intelligence: A synergy of morphology, action, perception and learning.” ACM Computing Surveys.
- Re, A. & Bruno, F. (2026). “Learning with the Body: Embodied Cognition for Education.” Adv Med Psychol Public Health, 3(1).
- Watts, D. J. & Strogatz, S. H. (1998). “Collective dynamics of ‘small-world’ networks.” Nature, 393, 440–442.
- Bassett, D. S. & Bullmore, E. (2006). “Small-World Brain Networks.” The Neuroscientist, 12(6), 512–523.
- Fries, P. (2005). “A mechanism for cognitive dynamics: neuronal communication through neuronal coherence.” Trends in Cognitive Sciences, 9(10), 474–480.
- Tulving, E. (1972). “Episodic and semantic memory.” In Organization of Memory, Academic Press.
- Held, R. & Hein, A. (1963). “Movement-produced stimulation in the development of visually guided behavior.” J. Comp. Physiol. Psychol., 56(5), 872–876.