SECURITY ANALYSIS REPORT · APRIL 2026

Architectureless AI Programming:
Historical Illumination and Future Predictions

From the demise of information theory, cybernetics, and hardware-software alignment, to infinite code bloat and supply chain security collapse — a structural analysis based on the architectural evolution from 2010 to 2026

이조글로벌인공지능연구소 · LEECHO Global AI Research Lab
& Opus 4.6

April 13, 2026 · V2 · CONFIDENTIAL

Abstract — This report is a sequel to LEECHO Global AI Research Lab’s February 2026 AI Cybersecurity Risk Analysis Report [F1] and April 2026 Root Cause Analysis of Zero-Day Bugs Discovered by Mythos [F2]. The first two papers respectively identified the clinical symptoms of “systemic collapse and accountability vacuum” and the vulnerability formation mechanism of “cross-generational knowledge lock-in leading to emergent incompatibility.” This paper traces a deeper etiology: drawing on information theory [A1] and cybernetics [A2], it analyzes how the evolution of software architecture from 2010 to 2026 systematically eliminated the natural pruning mechanisms for code, leaving AI autonomous iteration to operate within a structural security vacuum. We argue that parallelized single-layer architecture dissolved information flow constraints, degrading the architect’s role from the trinity of “information theory + cybernetics + hardware-software alignment” to purely cybernetic operations and allowing redundancy and entropy to grow without limit. This makes code bloat and supply chain collapse structurally inevitable, and it explains why zero-day vulnerabilities have degenerated from “localized defects at cross-layer seams” to “globally unstructured diffusion.”

01 Theoretical Framework

Information Theory, Cybernetics, and the Architect’s Trinity

Understanding the root cause of the current AI programming security crisis requires tracing back to two disciplines born simultaneously in 1948: Claude Shannon’s information theory [A1] and Norbert Wiener’s cybernetics [A2]. The two are often mentioned in the same breath, but they answer fundamentally different questions.

1.1 Information Theory: Answering “Is It Needed?”

The core of Shannon’s information theory is channel capacity — when information is transmitted through a noisy channel, there exists a maximum transmission rate [A9]. Exceeding this rate inevitably increases the error rate. Shannon proved that channel coding protects information from errors by systematically adding controlled redundancy — the keyword being “controlled” [A1].
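
Shannon’s result can be stated compactly. For the canonical band-limited Gaussian channel (our choice of illustration; the report’s argument needs only the existence of such a limit), the capacity is

\[
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits/s},
\]

where B is bandwidth and S/N the signal-to-noise ratio. The noisy-channel coding theorem then says: for any transmission rate R < C there exist coding schemes driving the error probability arbitrarily close to zero, while for R > C no scheme can.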

In the communication protocol stack, the physical realization of this principle is a hierarchical structure: physical layer → data link layer → network layer → transport layer → application layer. Each layer performs its own source coding and channel coding; redundancy and noise from lower layers are filtered when passed up to higher layers. When a phone call “just works,” it is because layer after layer is honoring the guarantees proved within Shannon’s information theory [A10]. The hierarchical architecture of software — firmware → operating system → middleware → application — is an isomorphic mapping of this communication protocol stack model into software engineering. Each layer acts as a “channel,” with information flowing from bottom to top, being filtered, compressed, and transformed at each level. These layers serve as natural pruning mechanisms for redundancy.
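
As a toy illustration of “controlled redundancy” being consumed by the layer that owns it, here is a minimal Python sketch. The 3× repetition code is our choice, the simplest possible channel code; real links use far stronger codes, but the structural point is the same: the layer that adds the redundancy also removes it.

```python
def link_layer_decode(frame: str) -> str:
    """Toy channel decoding: a 3x repetition code, majority-voted.
    The redundancy is *controlled*: the layer that adds it is the
    layer that strips it, so the layer above never sees it."""
    triples = [frame[i:i + 3] for i in range(0, len(frame), 3)]
    return "".join("1" if t.count("1") >= 2 else "0" for t in triples)

# One flipped bit per triple (channel noise) is absorbed below the seam:
noisy = "110" + "000" + "011" + "111"
print(link_layer_decode(noisy))  # -> "1011": the upper layer receives clean data
```

This is what “each layer filters the noise of the layer below” means operationally; the application layer never has to know the repetition code existed.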

Lehman first introduced the concept of “entropy” into software engineering in 1974 [A3]. His second law of software evolution states: “As an E-type system evolves, its complexity increases unless work is specifically done to maintain or reduce it” [A4]. Information-theoretic research further confirms that the application of consistency rules has been mathematically proven to increase the information content of an architecture while improving its orderliness by reducing uncertainty [A5]. Consistency rules in hierarchical architecture are channel coding in the information-theoretic sense — they add controlled, structured redundancy that limits entropy growth. In contrast, the redundancy added by AI in parallel architectures is uncontrolled and repetitive — it is noise, not coding.
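
The contrast between structured reuse and uncontrolled repetition can be made concrete with a toy entropy measurement (our illustration only; the cited studies [A5][A6] use far richer metrics, and every identifier name below is invented):

```python
from collections import Counter
from math import log2

def shannon_entropy(tokens):
    """Shannon entropy, in bits per token, of a token stream."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A layered codebase calls one shared helper repeatedly; a "parallel"
# codebase re-implements the same logic under a fresh name per module.
layered = ["parse", "validate", "store"] * 4             # reuse: small vocabulary
parallel = [f"{op}_{i}" for i in range(4)
            for op in ("parse", "validate", "store")]    # duplication: 12 unique names

print(shannon_entropy(layered))   # ~1.58 bits/token: structured, compressible
print(shannon_entropy(parallel))  # ~3.58 bits/token: the maximum for 12 tokens
```

The duplicated codebase carries maximal entropy even though it does no more work; in the report’s terms, its redundancy is noise, not coding.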

1.2 Cybernetics: Answering “Is It Permitted?”

Wiener’s cybernetics focuses on feedback and regulation within systems — sensing environmental changes, comparing deviations from targets, and adjusting behavior [A2]. In software architecture, this corresponds to access control, audit logs, security scanning, and gating in CI/CD pipelines. Lehman’s later FEAST project explicitly placed feedback and feedback control at the core of software process improvement [A4].

Cybernetic feedback loops are effective only when there is a clearly defined “target state.” “The code runs” is a clear target — cybernetics can detect functional failures. “The code is well-structured” is not a clear target — because no one has defined what “well-structured” means. Cybernetics answers “is this operation permitted?” rather than “does this module need to exist?” This is a critical distinction.
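
The distinction can be caricatured in a few lines of Python (a sketch; the field names and checks are hypothetical, not any real CI system’s API):

```python
def cybernetic_gate(change: dict) -> bool:
    """A CI gate in Wiener's sense: compare the observed state of a change
    against *defined* target states and block on deviation. It can only
    enforce targets that someone has written down."""
    targets = [
        change["tests_pass"],             # target: all tests green
        change["lint_errors"] == 0,       # target: zero lint findings
        change["added_privileges"] == [], # target: no new permissions
    ]
    # "Does this module need to exist?" has no defined target state,
    # so no check for it can appear in this list.
    return all(targets)

redundant_module = {"tests_pass": True, "lint_errors": 0, "added_privileges": []}
print(cybernetic_gate(redundant_module))  # -> True: permitted, though never needed
```

A change that duplicates an existing module passes every gate, because every gate answers “is it permitted?” and none answers “is it needed?”.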

1.3 Hardware-Software Alignment: Physical Constraints as the Ultimate Pruning

Traditional architects possessed a third dimension — hardware physical constraints [B4]. CPU core counts, memory capacity, storage bandwidth, network latency — these physical limitations naturally constrained the possible design space for software. Architects had to make trade-offs within physical constraints, and these trade-offs themselves served as the ultimate pruning of redundancy. As we argued in [F2]: the first generation of architects making “reasonable compromises” under physical constraints did leave vulnerability seeds at cross-layer seams, but at least they left behind structured code — Mythos could still find those seams 27 years later.

Core Proposition: The traditional architect’s work was a trinity of information theory (is it needed?) + cybernetics (is it permitted?) + hardware-software alignment (is it physically possible?). The absence of any single dimension leads to uncontrolled growth in system complexity. The simultaneous absence of all three dimensions is equivalent to removing all constraints on code growth.

02 Historical Evolution

The Collapse of the Trinity: 2010–2026

2.1 Phase One: Cloud Abstraction Eliminates Hardware Alignment (2010–2015)

The core promise of cloud computing was to “abstract away storage, computing, and networking” [B3]. The IaaS/PaaS/SaaS three-tier model systematically removed hardware decisions from the architect’s hands. Traditionally, architects needed to consider everything from language selection, I/O models, and database strategies to CPU core counts, memory, and storage — these decisions provided a holistic view of the application lifecycle [B1]. After 2010, these decisions were replaced by cloud vendors. Architects no longer needed to know “which chip, how many cores, what disk array” — the role of physical constraints as a pruning mechanism was dissolved.

2.2 Phase Two: Microservices Dissolve Information Flow (2015–2020)

Microservice architecture decomposed monolithic applications into collections of independently deployable services. This brought advantages in team autonomy and independent deployment, but it simultaneously dissolved the global view of information flow: information no longer had a clear “bottom-to-top” direction, but instead traveled as peer-to-peer calls on a flat plane. The architect’s specialized knowledge was “naturally distributed across the entire team” [B2], leaving no one with a global view of information flow.

2.3 Phase Three: Parallel Architecture Becomes Mainstream (2020–2023)

After 2020, architecture was completely flattened into single-layer parallel calls. “We are using yesterday’s data center architectures based on servers, operating systems, and virtual machines to support today’s workloads and applications. It’s like using Latin that Romans used 2,000 years ago to describe the modern world” [B5]. The architect’s role shifted from full-stack decision-maker to a coordinator “ensuring consistency, interoperability, and resilience of solutions” [B1] — this is almost entirely cybernetic language.

2.4 Phase Four: AI Autonomous Programming (2023–2026)

AI code generation tools operate on parallel architectures that have already lost information flow constraints and hardware alignment. AI’s behavioral pattern is “ensure it runs” rather than “ensure it’s lean.” GitClear data shows that refactoring’s share of code changes plummeted from 24% in 2021 to less than 3% [C8]. AI does not refactor — it copies and pastes patterns [C1]. More precisely: AI is rarely used for refactoring or working with existing code; its primary contribution is new features, new files, and new logic branches; work on legacy systems, technical debt, and historical compromises remains human responsibility [C10]. The result is that codebases grow faster than they improve. In one actual case, AI refactored 40,000 lines of code, all of which were reverted six months later — the AI had optimized for “tidiness” rather than “maintainability” [C9].

The fundamental reason AI does not refactor is not a lack of technical capability, but rather that in parallel architecture, no hierarchical layer tells it “this functionality already exists elsewhere.” In hierarchical architecture, upper layers calling lower-layer functions naturally encourages reuse — because the lower layer is a “common service.” In parallel architecture, every module exists independently: “writing my own from scratch is faster than finding and understanding yours.”

2010: Cloud Abstraction → Hardware-Software Alignment Capability Dies
2015: Microservices → Information Flow Thinking Dissolved
2020: Parallel Architecture → Architect Degrades to Purely Cybernetic Role
2023: AI Code Generation → Death of Refactoring (24%→3%), Code Only Grows
2025–2026: AI Autonomous Iteration → Expanding Attack Surface at Machine Speed on Architectureless Infrastructure

03 Quantitative Evidence

Infinite Code Bloat: From Lehman’s Laws to Empirical Data

  • 8×: code duplication frequency growth since mid-2022 (GitClear, 211M lines analyzed) [C1]
  • +75%: per-developer code volume vs. 2022 [C3]
  • +154%: PR volume YoY growth (CodeRabbit 2026) [C2]
  • 24%→3%: refactoring’s share of code changes collapsed [C8]

GitClear’s analysis of 211 million lines of code changes found that since AI coding surged in mid-2022, code duplication frequency increased eightfold [C1]. For the first time in history, developers paste code more frequently than they refactor or reuse it [C1]. AI code completion tools tend to generate new code from scratch rather than reuse existing code — for example, importing an entirely new logging package even when another package is already performing the same task [C3].

Karpathy described the intuitive feel of this problem: “Agents bloat the abstraction layer, code aesthetics are terrible, and it’s extremely easy to copy-paste code blocks — it’s an absolute mess” [C6]. And Anthropic’s own Claude Code, in its March 31, 2026 source code leak, demonstrated an extreme case of this problem — within 512,000 lines of TypeScript: a 5,594-line file, a 3,167-line single function, and 12 levels of nesting [C5][E1].

“In 2025, the average developer committed 75% more code than in 2022. The output increase applies equally if not more to ‘how much code the team needs to maintain’ than to ‘how much output each developer gets.’”

— GitClear 2025 AI Copilot Code Quality Report [C1]

3.1 An Information-Theoretic Reading of Lehman’s Second Law

Lehman’s second law of software evolution [A3]: “As an E-type system evolves, its complexity increases unless work is specifically done to maintain or reduce it.” Information-theoretic research confirms the trend empirically: across 25 open-source projects measured over time, all exhibited increasing entropy [A6]. Refactoring is the “dedicated entropy-reducing work” of Lehman’s law — and the proportion of this work has plummeted from 24% to 3% [C8].

In traditional hierarchical architecture, the layers themselves serve as the “entropy-reducing mechanism” — redundancy from lower layers is filtered as it passes up to higher layers, just as each layer in a communication protocol stack filters noise from the layer below [A10]. In parallel architecture, this mechanism does not exist. AI autonomous iteration operates on an architecture without entropy-reducing mechanisms, while refactoring, the only manual entropy-reducing method, is already dead. Systemic entropic collapse is the mathematical inevitability of Lehman’s law.
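
A hedged way to formalize the argument above (the symbols are ours, not Lehman’s): let H(t) be system entropy, g(t) the entropy added per unit time by new code, and r(t) the entropy removed by refactoring and by layer-level filtering. Then

\[
\frac{dH}{dt} = g(t) - r(t).
\]

In hierarchical architecture, r(t) contains a structural term that is always on; in parallel architecture it contains only the manual refactoring term, whose share of change activity has fallen from 24% to 3% [C8]. With r(t) approaching zero and g(t) strictly positive, H(t) grows monotonically and without bound.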

04 Interface Explosion

API Security: The Inevitable Disaster of Parallel Calls

  • 66%: organizations seeing 50%+ annual API growth (Salt Security) [D9]
  • 97%: API vulnerabilities exploitable in a single request [D8]
  • 59%: exploitable without authentication [D8]
  • +400%: YoY growth in AI-related threats [D8]

The direct consequence of parallel architecture is interface explosion. APIs are growing horizontally (more endpoints), vertically (more business-critical logic), and contextually (embedded in AI agent workflows) [D7]. 93% of teams struggle with API collaboration [D10]. By 2026, most enterprises cannot answer the most basic question — how many API endpoints exist? 92% of organizations lack the security maturity needed to defend AI agent environments, and only 24% have fully automated API inventories [D9].
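
The “most basic question” is, in principle, mechanically answerable wherever specs exist. A minimal sketch of an automated inventory over an OpenAPI-style spec dict (the spec fragment below is hypothetical; real inventories must also capture undocumented and shadow endpoints, which is exactly where the 24% figure bites):

```python
def api_inventory(openapi_spec: dict) -> list:
    """Enumerate (METHOD, path) pairs from an OpenAPI-style spec dict:
    the minimal automated API inventory most organizations lack."""
    http_methods = {"get", "put", "post", "delete", "patch", "head", "options"}
    return sorted(
        (verb.upper(), path)
        for path, operations in openapi_spec.get("paths", {}).items()
        for verb in operations
        if verb.lower() in http_methods
    )

spec = {  # a tiny hypothetical spec fragment
    "paths": {
        "/users":      {"get": {}, "post": {}},
        "/users/{id}": {"get": {}, "delete": {}},
    }
}
inventory = api_inventory(spec)
print(len(inventory), inventory)  # 4 endpoints, sorted by method then path
```

Even this trivial count only covers declared endpoints; the gap between it and the live attack surface is the inventory problem the Salt Security figure describes.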

“AI security failures look familiar because they are familiar: over-trusted interfaces, excessive permissions, weak authentication, and insecure downstream consumption. AI raises the value of targets and the speed of abuse but rarely changes the underlying failure mode.”

— Wallarm 2026 API ThreatStats Report [D8]

4.1 MCP Protocol: An Amplifier of Interface Risk

The Model Context Protocol (MCP), as the control plane API for AI agents, pushes interface risk to new heights. Wallarm discovered 315 MCP-related vulnerabilities in 2025, with a 270% increase from Q2 to Q3 [D8]. MCP is an open-source standard where every user creates their own MCP server — MCP cannot be “fixed at the source” because there is no unified source to fix.

05 Empirical Validation

March–April 2026: Complete Validation of the Attack Chain

In our February paper [F1], we predicted that “2026 will be the year of AI coding vulnerability outbreaks.” Within just 8 weeks of that prediction being published, the following events occurred in succession:

Date   | Event                                                   | Scale
Mar 24 | LiteLLM Supply Chain Poisoning (TeamPCP) [D3]           | AI infrastructure library, 3.4M daily downloads
Mar 26 | Trivy/KICS Security Tools Compromised [D4]              | All 35 version tags maliciously pushed
Mar 31 | Axios NPM Package Poisoning (North Korea UNC1069) [D1][D2] | 100M+ weekly downloads
Mar 31 | Claude Code 512K-Line Source Code Leak [E1]             | npm packaging error exposed complete client code
Apr 1  | Trojanized Claude Code Versions Begin Distribution [E4] | Global weaponization completed within 24 hours
Apr 6  | Fortinet CVE-2026-35616 Zero-Day [D14]                  | CVSS 9.8, API authentication bypass
Apr 7  | TrueConf Zero-Day Update Channel Poisoning [D13]        | Government networks hit with Havoc malware
Apr 9  | Apache ActiveMQ 13-Year RCE [D11]                       | Claude found in 10 min what humans missed for 13 years

5.1 The ActiveMQ Case: A Three-Dimensional Dissection Through Information Theory, Cybernetics, and the Architect

CVE-2026-34197 is a remote code execution vulnerability that lay dormant in Apache ActiveMQ Classic for 13 years [D11]. Researcher Sunkavally described the discovery process as “80% Claude, 20% human packaging” [D12]. Claude completed in 10 minutes what would have taken a human a week [D11].

This case perfectly illustrates how the absence of all three dimensions creates vulnerabilities:

Absence of Information Theory: The vulnerability involved multiple components developed independently over time — Jolokia, JMX, network connectors, and VM transport. “Each feature did what it was supposed to do in isolation, but together they were dangerous” [D11]. No architect examined whether “the information flow between these components is safe” — each component operated in its own silo, with information flowing unconstrained across a flat plane.

Failure of Cybernetics: When fixing CVE-2022-41678 in 2023, developers added a broad Jolokia allow rule in order to “preserve Web Console functionality” [D11] — this was a purely cybernetic fix (changing permissions), not an information-theoretic fix (redesigning information flow). The cybernetic fix introduced a new attack surface because it only managed “is it permitted?” without considering whether “this information flow path needs to exist.”

Absence of the Architect: No one stood at the elevation of “cross-component information flow” to examine the whole picture. The reason Claude found it in 10 minutes is precisely because it “efficiently chained together this path end-to-end, clear-headed and unencumbered by assumptions” [D11] — what it did was precisely information-theoretic thinking: tracing the complete flow of information between components.

5.2 Interlocking with the Mythos Paper

In [F2], we argued that the first generation of architects making “constrained reasonable compromises” under physical constraints left behind structured code — structurally low-entropy, with locally high entropy at the seams. The zero-day vulnerabilities discovered by Mythos were hiding in precisely those seams.

But the parallel-architecture code generated by AI after 2020 presents problems of a fundamentally different nature: there is no structure to speak of. Defects are not embedded within a structure; the defects are the structure itself. Legacy code is “structured but with structural blind spots”; AI-generated code is “globally high-entropy, structureless, with diffused defects.” The “abductive targeted mine-clearing” methodology proposed in [F2] relies on vulnerability habitats being predictable — i.e., vulnerabilities concentrating at cross-layer seams. But in AI parallel architecture, no layers means no seams, and no seams means no habitats. Security auditing of AI code will therefore be harder, not easier, than auditing legacy code.

Core Finding: Intrusion occurs through identity, payloads are delivered through trusted distribution channels, and execution blends into normal behavior. In parallel architectures without software security layers, supply chain poisoning bypasses all external security mechanisms. The TrueConf case [D13] confirmed: once attackers controlled the update server, they directly distributed poisoned updates — fully consistent with our paper’s predictions about AI tool update channel poisoning.

06 Talent Gap

Irreversible Knowledge Extinction

All the structural problems revealed in this report — the absence of information flow constraints, the loss of hardware-software alignment, the flattening of architectural layers — are theoretically diagnosable and fixable, provided enough architects with “trinity” capabilities exist. But the fifteen-year “de-architecturing” process from 2010 to 2026 has systematically eliminated such talent [B1][B2].

6.1 The Educational Gap

Since 2010, computer science education has increasingly skewed toward the application layer — Python, JavaScript, frameworks, cloud services. Computer architecture, compiler theory, operating systems — these “foundational courses” have been marginalized. The new generation of engineers has worked on the cloud from day one of their careers, never having encountered the constraints of physical hardware.

6.2 The Industry Gap

After 2020, positions requiring hardware-software alignment decisions have virtually ceased to exist — cloud vendors make those decisions instead. A skill for which the industry has had no demand in fifteen years cannot keep being passed on. As we noted in [F1], “54% of engineering leaders plan to reduce junior developer hiring”: not only have veteran architects already retired, but their successors will not be trained either.

6.3 The Cognitive Gap

Most fundamentally: the new generation of engineers does not even know what it is missing. “This abstraction fundamentally changes our architectural thinking: we are no longer constrained by the granular details of implementation, but think in terms of orchestrating intelligent capabilities” [B1] — this passage is presented as progress, but from an information-theoretic perspective, it is evidence of knowledge loss. The flip side of “no longer constrained by granular details” is “no longer understanding granular details.”

07 Future Predictions

Endgame Scenarios

7.1 What Will Happen (Within 12 Months)

  • AI tool update channels will be targeted for poisoning. Desktop clients like ChatGPT and Claude Code run directly on user operating systems with access to file systems, terminals, and codebases. The TrueConf case [D13] has already proven that update channel poisoning is a realized attack vector. When the target upgrades from video conferencing software to AI programming tools, a single push will cover tens of millions to hundreds of millions of endpoint devices.
  • Distributed AI software will become a persistent attack surface. Local AI agents and edge AI models being massively deployed starting in 2026 will create unprecedented attack surfaces — every deployment point is a potential entry point, and every local model carries unaudited dependency chains.
  • AI autonomous iteration systems will become the ideal poisoning targets. They automatically consume dependencies, automatically execute, bypass human review, and hold high-privilege credentials — perfectly inheriting all security defects of parallel architecture, then amplifying them at machine speed.

7.2 Structural Predictions

Prediction 1: The total volume of AI-generated code will expand at 2–3× per year, while security vulnerability density will not significantly decline. Code bloat × vulnerability density = exponential growth of the attack surface. Lehman’s second law [A3] is unavoidable in systems without entropy-reducing mechanisms.
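
The arithmetic behind Prediction 1 is simple compounding and worth making explicit. In the sketch below, the 2.5× annual growth rate comes from the prediction’s 2–3× range; the baseline codebase size and vulnerability density are invented illustrative numbers, not measurements from this report.

```python
def projected_attack_surface(loc_now: float, vulns_per_kloc: float,
                             annual_growth: float, years: int):
    """Attack-surface projection under Prediction 1's assumptions:
    code volume compounds at `annual_growth`x per year while
    vulnerability density stays flat, so the absolute vulnerability
    count compounds at exactly the same rate."""
    loc = loc_now * annual_growth ** years
    return loc, loc / 1000 * vulns_per_kloc

# Illustrative only: 1M LOC baseline, 0.5 vulns/KLOC, 2.5x yearly growth.
for years in (0, 1, 2, 3):
    loc, vulns = projected_attack_surface(1_000_000, 0.5, 2.5, years)
    print(f"year {years}: {int(loc):>10,} LOC, ~{int(vulns):,} vulnerabilities")
```

The point of the exercise: with density held constant, any compounding of volume is inherited one-for-one by the absolute vulnerability count, which is what “code bloat × vulnerability density = exponential growth of the attack surface” means.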

Prediction 2: A mass supply chain attack via AI tool update channels will occur in 2026–2027, affecting more than 1 million endpoints. SolarWinds (2020, 18,000 organizations) will no longer be the largest supply chain attack on record. Axios’s [D1] 100M+ weekly downloads and Claude Code’s [E1] direct execution on operating systems with full privileges — these two facts have already defined the mathematical upper bound of the impact surface.

Prediction 3: AI-generated parallel architecture code will produce a new class of vulnerabilities — no longer the “localized defects at cross-layer seams” described in [F2], but “globally unstructured diffusion.” These vulnerabilities cannot be found by traditional fuzzers (because there are no layers), nor by the abductive targeted mine-clearing method of [F2] (because there are no habitats). An entirely new security auditing paradigm will be needed.

Prediction 4: “Architectural return” will become a central topic in cybersecurity — but the industry will choose the path of least resistance: “using more AI to solve problems caused by AI,” accelerating a vicious cycle. Because the talent capable of executing a genuine architectural return no longer exists.

08 Conclusion

A High-Speed Train Without a Steering Wheel

The core argument of this report can be distilled into a single analogy: Software engineering in 2026 possesses the most powerful brakes and airbags in history (AI guardrails, sandboxes, automated testing), yet has simultaneously lost its steering wheel (information theory’s pruning capability) and its dashboard (the physical constraints of hardware-software alignment). It can crash into walls safely, but it cannot avoid crashing into walls.

This is not a bug in a specific tool, not an oversight by a specific team, but the structural inevitability of the entire software architecture paradigm shift since 2010. When the industry chose cloud abstraction over hardware-software alignment, parallel microservices over hierarchical architecture, cybernetics over information theory, AI generation over human review, speed over leanness — “software security defenselessness” was already written into the system’s DNA.

AI autonomous code iteration does not merely “face” security threats — it is itself a threat amplifier. It infinitely expands the attack surface at machine speed on an architecture that already lacks software security layers. The architects who could diagnose this problem have already retired; the educational system that could train new architects no longer exists; and attackers have already industrialized and become state-sponsored.

The solution lies not in better patches or more AI tools, but in rebuilding information-theoretic architectural thinking — which requires acknowledging a truth the industry is unwilling to face: the fifteen-year sprint in the direction of “abstraction” and “speed” has simultaneously been a sustained retreat in the direction of security.

Appendix

References

A — Theoretical Foundations · B — Architectural Evolution · C — Code Bloat & Technical Debt · D — Supply Chain Attacks & Security · E — Claude Code Incident · F — Prior Papers

[A1] Shannon, C.E. “A Mathematical Theory of Communication.” Bell System Technical Journal, 27(3), 379–423, 1948.
[A2] Wiener, N. Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press, 1948.
[A3] Lehman, M.M. “Programs, Life Cycles, and Laws of Software Evolution.” Proceedings of the IEEE, 68(9), 1060–1076, 1980.
[A4] Lehman, M.M. “Laws of Software Evolution Revisited.” EWSPT, 1996.
[A5] Czapiewski, P. et al. “Entropy as a Measure of Consistency in Software Architecture.” Entropy, 25(2):328, MDPI, 2023.
[A6] Santos, G. et al. “Applying Information Theory to Software Evolution.” arXiv:2303.13729, 2023.
[A7] Keenan, D. et al. “An Investigation of Entropy and Refactoring in Software Evolution.” PROFES 2022, LNCS 13709, Springer, 2022.
[A8] Jacobson, I. et al. Object-Oriented Software Engineering. Addison-Wesley, 1992.
[A9] MIT News. “Explained: The Shannon Limit.” January 2010.
[A10] MaxMag. “Claude Shannon Information Theory: The Digital Blueprint.” 2025.
[B1] Krasner, A. “The Evolving Role of the Software Architect.” DraftKings Engineering, Medium, 2025.
[B2] InfoQ. “Going from Architect to Architecting: the Evolution of a Key Role.” 2022.
[B3] Oracle. “Software Architecture for High Availability in the Cloud.” Oracle Technical Resources.
[B4] Neueda. “Why The Software Architect Role is Vital in Organizations.” 2025.
[B5] Van den Bergh, J. “Remaining Relevant: Abstraction Layers and APIs for Cloud Native Applications.” 2020.
[C1] GitClear. “2025 AI Copilot Code Quality Report.” 211 million lines analyzed, 2025.
[C2] CodeRabbit. “2026 State of AI Code Quality Analysis.” 2026.
[C3] DarkReading. “AI-Generated Code Poses Security, Bloat Challenges.” October 2025.
[C4] DNYUZ/NYT. “The Big Bang: A.I. Has Created a Code Overload.” April 2026.
[C5] Hyperdev. “Is The Claude Code Team Moving Too Quickly?” April 2026.
[C6] Greptile. “Slop Is Not Necessarily The Future.” 2026.
[C7] Wasserman, A. “Software Entropy Explained.” Toptal, January 2026.
[C8] Fenton, S. “What’s Missing With AI-Generated Code? Refactoring.” The New Stack / Medium, 2025.
[C9] Code Blows. “AI Refactored Our Codebase. 6 Months Later: We’re Reverting Everything.” Medium, March 2026.
[C10] Tulegenov, A. “AI in Software Development in 2026.” Medium, December 2025.
[D1] Google GTIG. “North Korea-Nexus Threat Actor Targets Axios NPM Package.” March 2026.
[D2] Microsoft Security Blog. “Mitigating the Axios npm Supply Chain Compromise.” April 2026.
[D3] Trend Micro. “Inside the LiteLLM Supply Chain Compromise.” March 2026.
[D4] Palo Alto Unit42. “Weaponizing the Protectors: TeamPCP’s Multi-Stage Supply Chain Attack.” April 2026.
[D5] Zscaler ThreatLabz. “Supply Chain Attacks Surge in March 2026.” April 2026.
[D6] Group-IB. “Six Supply Chain Attack Groups to Watch Out for in 2026.” March 2026.
[D7] SecurityWeek. “Cyber Insights 2026: API Security.” January 2026.
[D8] Wallarm. “2026 API ThreatStats Report.” February 2026.
[D9] Salt Security. “1H 2026 State of AI and API Security.” April 2026.
[D10] KPMG. “API Security 2026.” March 2026.
[D11] Horizon3.ai. “CVE-2026-34197 ActiveMQ RCE via Jolokia API.” April 2026.
[D12] Infosecurity Magazine. “Claude Discovers Apache ActiveMQ Bug Hidden for 13 Years.” April 2026.
[D13] The Hacker News. “TrueConf Zero-Day Exploited in Attacks on Government Networks.” March 2026.
[D14] CyberScoop. “Fortinet CVE-2026-35616 Zero-Day Exploited.” April 2026.
[D15] Help Net Security. “Week in Review: April 12, 2026.” April 2026.
[E1] Zscaler ThreatLabz. “Anthropic Claude Code Leak.” April 2026.
[E2] SecurityWeek. “Critical Vulnerability in Claude Code Emerges Days After Source Leak.” April 2026.
[E3] VentureBeat. “Claude Code’s Source Code Appears to Have Leaked.” April 2026.
[E4] Trend Micro. “Weaponizing Trust Signals: Claude Code Lures and GitHub Release Payloads.” April 2026.
[E5] Straiker. “Claude Code Source Leak: With Great Agency Comes Great Responsibility.” April 2026.
[E6] Vectra AI. “Breaking Down the Axios Supply Chain Incident.” April 2026.
[F1] LEECHO Global AI Research Lab. “AI Cybersecurity Risk Analysis Report.” February 13, 2026.
[F2] LEECHO Global AI Research Lab. “Root Cause Analysis of Zero-Day Bugs Discovered by Mythos.” V2, April 10, 2026.

