Critical Industry Analysis

Script Kiddies Running Naked
Without Architecture!

An Examination of Security Architecture at AI Technology Companies — No Defense in Depth on the Backend, No Boundaries on the Frontend, No Skeleton in AI-Generated Code. How a Generation of Engineers’ Architectural Fracture Created a Systemic Security Disaster
V1 · March 31, 2026
이조글로벌인공지능연구소 · LEECHO Global AI Research Lab
& Claude Opus 4.6 · Anthropic


Abstract

On March 31, 2026, Anthropic’s Claude Code leaked its entire 512,000 lines of source code through an unremoved source map file in its npm package. This came just five days after the same company leaked nearly 3,000 internal documents due to a CMS misconfiguration. Neither incident was a hacking attack — both were the most elementary operational configuration oversights. Starting from these two incidents, this paper poses a deep structural question the industry has been ignoring: why would a company valued at tens of billions of dollars, one that has built its core brand around “AI safety,” commit the most rudimentary security errors twice in a single week? The answer lies not in individual mistakes but in a structural fracture in architectural thinking across the entire AI technology industry — the generation of engineers trained after 2010 was never taught vertical full-stack systems thinking. They can call APIs, pull dependencies, and make features run, but they do not understand the lower layers, do not understand defense in depth, and do not understand the value of the phrase “hold on — let me check the dependency tree before we ship.” Meanwhile, the hacker community has retained complete architectural thinking — vertical full-stack, understanding trust chains, seeing the whole picture. The fundamental nature of the offense-defense asymmetry is not a technology gap, but a generational fracture in architectural thinking.

Architectural Fracture
Script Kiddies
Supply Chain Security
AI-Generated Code
Vertical Full-Stack
React2Shell
Claude Code Leak
Information Silos

01 · The Scene

Twice in One Week: The AI Safety Company’s Security Architecture Exam — Zero Points

Two Incidents in Five Days: The Security Architecture Exam That Anthropic Failed
March 26
CMS Misconfiguration — Anthropic’s content management system had all uploaded assets set to public by default. Nearly 3,000 unpublished files — including drafts of the unreleased model “Claude Mythos,” CEO summit plans, and business strategy PDFs — were exposed on the open internet. An external security researcher discovered the exposure and notified Anthropic, after which the company took action to lock it down.

March 31
npm Source Map Leak — When Claude Code was built using the Bun bundler, the default-generated cli.js.map file (57MB) was not excluded and was published alongside the npm package. Security researcher Chaofan Shou discovered it and used the file to reconstruct all 1,900 TypeScript source files — 512,000 lines of code. Within hours, multiple backup repositories appeared on GitHub, garnering 1,100+ stars and 1,900+ forks. Anthropic deleted the original link, but the code had already spread irreversibly across the internet.
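The mechanics of the reconstruction are worth seeing concretely. A Source Map v3 file with a populated `sourcesContent` array carries the complete original text of every input file, so recovering the 1,900 TypeScript sources is nothing more than a JSON walk. The sketch below illustrates the technique; the function name and path handling are illustrative, not taken from any actual tooling used in the incident:

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> int:
    """Reconstruct original source files from a Source Map v3 file.

    Bundlers that embed `sourcesContent` ship the complete, unminified
    source text inside the .map file; recovering it is a JSON walk.
    Returns the number of files written.
    """
    source_map = json.loads(Path(map_path).read_text())
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent", [])
    recovered = 0
    for rel_path, content in zip(sources, contents):
        if content is None:
            continue  # this entry carries only mappings, not source text
        # Crude sanitization of bundler prefixes like "webpack://" or "../"
        safe_rel = rel_path.replace("://", "_").lstrip("./")
        target = Path(out_dir) / safe_rel
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)
        recovered += 1
    return recovered
```

Publishing a `.map` with `sourcesContent` is, in other words, functionally identical to publishing the source tree itself.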

Core Facts

Neither incident was a hacking attack. The first was a CMS default permission left unchanged. The second was a bundler default configuration left unchecked. Both were discovered and reported externally. Anthropic’s own security systems triggered zero alerts in either incident. A company that has built its core brand around “AI safety” failed to execute even the most basic deployment checklist.

57MB
Size of cli.js.map
A file that should not exist
512K
Lines of leaked source code
~3,000
Internal files leaked via CMS
5 Days
Interval between two incidents
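Neither failure required sophisticated defenses to prevent. A pre-publish guard that refuses to ship known-dangerous file types is a few lines; the sketch below is a minimal illustration (the suffix list and function name are this author's assumptions, not an Anthropic or npm convention):

```python
from pathlib import Path

# Suffixes that should never appear in a published package (illustrative list)
FORBIDDEN_SUFFIXES = (".map", ".pem", ".env")

def check_publish_dir(dist_dir: str) -> list[str]:
    """Return every file under dist_dir that must not be published.

    Wired into a package's prepublishOnly hook, a nonzero exit on a
    non-empty result blocks `npm publish` before anything leaves the
    build machine.
    """
    violations = [
        str(path)
        for path in Path(dist_dir).rglob("*")
        if path.is_file() and path.name.endswith(FORBIDDEN_SUFFIXES)
    ]
    return sorted(violations)
```

A single `prepublishOnly` entry running a check like this would have turned the 57MB `cli.js.map` into a failed release instead of an irreversible leak.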

02 · Architectural Autopsy

X-Ray of 512,000 Lines: Piled Up, Not Architected

What the Leaked Source Code Reveals About How AI Companies Build Software

The leaked Claude Code source directory structure exposed a shocking fact: this was not a system that was “designed” — it was a system that was “piled up.” Under the src/ directory, over 30 top-level directories sprawled flat — commands/, tools/, components/, hooks/, services/, screens/, bridge/, coordinator/, plugins/, skills/, voice/, remote/, server/, memdir/, tasks/, state/, query/, upstreamproxy/ — with no layering, no domain separation, no dependency direction constraints.

Even more alarming were the individual file sizes: QueryEngine.ts at 46,000 lines, Tool.ts at 29,000 lines, commands.ts at 25,000 lines, query.ts at 67,000 lines. A single file spanning tens of thousands of lines means no module decomposition, no responsibility boundaries, and fixing one bug could affect logic ten thousand lines away. Nobody — including the AI that wrote the code — can fully understand what is happening inside these files.
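Detecting this kind of decay does not require taste, only measurement. A sketch of the sort of audit that would have flagged a 67,000-line `query.ts` long before it shipped (the threshold and function name are illustrative assumptions):

```python
from pathlib import Path

def oversized_files(src_dir: str, max_lines: int = 2000) -> list[tuple[str, int]]:
    """List TypeScript files whose line count exceeds max_lines, largest first.

    A file in the tens of thousands of lines is a strong signal that
    module boundaries were never drawn; gating CI on this report makes
    the signal impossible to ignore.
    """
    report = []
    for path in Path(src_dir).rglob("*.ts"):
        n_lines = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
        if n_lines > max_lines:
            report.append((str(path), n_lines))
    return sorted(report, key=lambda item: -item[1])
```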

Claude Code (Leaked Source)

512,000 lines · 30+ top-level directories sprawled flat

Largest single file: 67K lines

No security layering

No acceleration/optimization modules

Feature Flags exposing the product roadmap

331 utils, 146 components piled up

LiteClaw (Architecture-First Design)

4,096 lines · 36 source files

Explicit 8-layer architecture L0–L7

L0 is the Security layer

Smart routing, dynamic windows, cost control

5-tier risk classification + 3-stage audit

Secret keys never enter Agent context

Architectural Diagnosis

Claude Code’s source reveals the typical product of AI-generated code: feature-complete, massive in scale, but with no skeleton. Every new requirement spawns a new directory, laid flat, with no consideration for its relationship to existing modules. AI doesn’t stop to say “these two modules have overlapping responsibilities — they should be merged.” It just keeps generating new ones. This is exactly what GitClear’s data reveals: in the AI coding era, code duplication increased 48% while refactoring decreased 60%.

03 · The Generational Fracture

The Disappearance of Architectural Thinking: A Generation’s Broken Training Path

How the Industry Stopped Teaching Architecture

The root cause is not the negligence of any particular engineer, but a fundamental shift in how the entire industry trains engineers.

Engineers who entered before 2000 were forced to start from the bottom: hardware principles, operating systems, compiler theory, network protocols, data structures. There were no frameworks to lean on, no npm to pull packages, no AI to generate code. They built every layer by hand, naturally developing a vertical full-stack system view, naturally understanding how the lower layers punish carelessness.

Engineers who entered after 2010 took the opposite path: React, Node.js, cloud services, three-month bootcamp graduation. The lower layers are handled by AWS, security is managed by the security team, architecture is decided by the framework. This wasn’t personal choice — it was a systemic shift in the industry’s training pipeline.

Those who entered after 2020 have it worse. They don’t even write frontend code themselves — describe the requirements, let AI generate it, if it runs, commit. One more layer removed from the base, one more layer removed from security, one more layer removed from architectural thinking.

| Dimension | Pre-2000 (Veterans) | Post-2010 (Script Kiddies) | Post-2020 (AI Natives) |
| --- | --- | --- | --- |
| Starting Point | Hardware → OS → Protocols → Apps | Frameworks → APIs → Cloud | Prompts → AI Generation → Deploy |
| Full-Stack Type | Vertical (understands every layer) | Horizontal (writes front & back, but surface-level at each) | No-Stack (describes requirements, AI executes) |
| Security Awareness | Innate (knows how the base punishes you) | Limited (security is someone else's job) | Absent (doesn't know security is a dimension) |
| Attitude Toward Dependencies | Line-by-line review, lock versions, verify signatures | npm install, don't check contents | AI auto-selects, doesn't know what's used |
| When Problems Arise | Debug layer by layer from the bottom | Search Stack Overflow | Ask AI |
| Architectural Judgment | Proactive design, proactive trade-offs | The framework decides for you | Doesn't know what architecture is |
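The "lock versions, verify signatures" habit in the veterans' column is mechanical, not mystical. npm's lockfile, for example, records each tarball's hash as a Subresource Integrity string of the form `sha512-<base64 digest>`; checking a downloaded artifact against it takes a few lines. A minimal sketch (function names are this author's, the SRI format is npm's):

```python
import base64
import hashlib

def sri_digest(data: bytes, algorithm: str = "sha512") -> str:
    """Compute a Subresource Integrity string ("sha512-<base64 digest>"),
    the format npm records in package-lock.json `integrity` fields."""
    digest = hashlib.new(algorithm, data).digest()
    return f"{algorithm}-{base64.b64encode(digest).decode()}"

def verify_tarball(data: bytes, expected_integrity: str) -> bool:
    """True iff the downloaded bytes match the pinned integrity value."""
    algorithm = expected_integrity.split("-", 1)[0]
    return sri_digest(data, algorithm) == expected_integrity
```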
The Broken Pipeline

54% of engineering leaders plan to reduce junior engineer hiring. The pipeline that once produced engineers with 2–4 years of debugging experience is shutting down. The veterans have retired, the middle tier has been replaced by AI, and juniors are no longer being hired. What remains is AI writing code, AI reviewing code, AI deploying code — and not a single person in the entire chain who truly understands what lies beneath their feet.

04 · Frontend Breach

React2Shell: The Collapse of Frontend Security’s Illusion

The “Log4Shell of the Frontend” — When Client-Side Became Server-Side

In December 2025, CVE-2025-55182 — dubbed “React2Shell” by the security community — became the most severe security event in frontend development history. A deserialization vulnerability in React Server Components allowed attackers to achieve unauthenticated remote code execution via a single HTTP request. CVSS score: a perfect 10.0.

The vulnerability was called “the Log4Shell of the Frontend” because React is the world’s most popular frontend framework, and the introduction of Server Components meant that vulnerabilities that were once “client-side only” could now directly attack servers. Financial applications became primary targets — a single crafted request could expose banking information, manipulate transactions, or plant a persistent backdoor.
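Setting the React2Shell specifics aside, the underlying bug class — executing attacker-controlled data during deserialization — can be demonstrated in miniature. Python's `pickle` is used here purely as an analogy for the class, not as the mechanism in React Server Components:

```python
import pickle

executed = []  # side-effect log so the attack's effect is observable

def record(marker: str) -> None:
    executed.append(marker)

class Malicious:
    """An object whose __reduce__ makes deserialization itself run code."""
    def __reduce__(self):
        # pickle stores: "to rebuild this object, call record('pwned')"
        return (record, ("pwned",))

wire_bytes = pickle.dumps(Malicious())

assert executed == []      # nothing has run yet
pickle.loads(wire_bytes)   # merely parsing the payload executes the callable
assert executed == ["pwned"]
```

The lesson generalizes: any endpoint that reconstructs objects from untrusted bytes is an execution surface, whichever side of the front/back boundary it nominally sits on.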

10.0
CVSS Score (Maximum)
48,448
Total CVEs in 2025 (+17% YoY)
6,227+
XSS Vulnerabilities in 2025
29%
Exploited on CVE Publication Day

React2Shell exposed a fundamental blind spot in frontend development culture: frontend engineers have long believed that “security is the backend’s problem.” But Server Components shattered the front-back boundary — frontend code now executes directly on the server. The security researchers’ warning is unambiguous: the era of frontend security as an “afterthought” is definitively over.

Paradigm Shift

React2Shell is not just a vulnerability — it marks a paradigm shift. The boundary between frontend and backend security has effectively ceased to exist. Yet the knowledge systems and security awareness of most frontend engineers remain stuck in the old world of “I only handle the client side.” Cognition lags behind reality — and that is the source of the disaster.

05 · The Asymmetry

Hackers Are Architects; Defenders Are Script Kiddies

Attackers Retained Full-Stack Thinking While Defenders Lost It

The March 2026 LiteLLM supply chain poisoning attack (by the TeamPCP group) demonstrated textbook architectural thinking: understand GitHub Actions workflow mechanics → know Trivy’s role and privileges in CI/CD → know where PyPI publishing credentials are stored → know that .pth files auto-execute on Python interpreter startup → know Kubernetes lateral movement paths → know SSH key and cloud credential file system locations → install persistent backdoors → use the ICMP protocol for C2 channels to bypass conventional blocking.

This is not something a script kiddie can do. This is an attack chain that could only be designed by someone who has mastered every layer from hardware to application. Every step precisely strikes the gaps in information isolation — exactly where defenders cannot see.
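One link in that chain deserves emphasis, because it surprises most application engineers: CPython's `site` module executes any line of a `.pth` file in site-packages that begins with `import`, on every interpreter startup, before application code runs. A defensive scan is correspondingly short (the function name is illustrative):

```python
import site
from pathlib import Path

def suspicious_pth_lines(site_dirs=None) -> list[tuple[str, str]]:
    """Flag executable lines in .pth files.

    CPython's site module exec()s any .pth line that starts with
    "import " or "import\t" -- a persistence mechanism that fires on
    every interpreter startup, before any application code.
    """
    dirs = site_dirs if site_dirs is not None else site.getsitepackages()
    findings = []
    for d in dirs:
        for pth in Path(d).glob("*.pth"):
            for line in pth.read_text(errors="ignore").splitlines():
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), line))
    return findings
```

Legitimate packages do use this hook (e.g., for editable installs), so the output needs human review, which is precisely the point: someone has to look.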

The Regular Army (Defense)

Horizontal shallow layers · Division-of-labor thinking

Blind trust in dependencies · Sees only the local

Passive defense · Has eliminated its architects

Protects with security products — security products themselves get compromised

CMS default permissions unchanged, dependency versions unlocked

The Guerrillas (Offense)

Vertical full-stack · Architectural thinking

Understands trust chains · Sees the whole picture

Active offense · Retains veteran mindset

Targets nodes with highest privilege, deepest trust, least auditing

TeamPCP’s own boast: “The snowball effect will be massive”

The Nature of the Asymmetry

The essence of the offense-defense asymmetry is not a technology gap, not a resource gap — it is an asymmetry of information views. Attackers see the complete chain; defenders see only their own fragments. The industry eliminated its own architects, but the hacker community’s veterans never left. The regular army has no architects, and as a result, the security layer was penetrated. What an absurd reality.

06 · The Cost of AI Coding

AI Writes Code: How a 512,000-Line Dumpster Fire Gets Built

The True Cost of AI-Generated Code at Scale
1.7×
AI Code Issue Rate
(vs. Human · CodeRabbit)
45%
AI-Generated Code
Contains Security Flaws
+48%
Code Duplication Increase
(GitClear, 130M Lines Analyzed)
-60%
Refactoring Decrease
(GitClear)

The core problem with AI coding is not “whether the code is correct,” but “whether any design was done at the architectural level.” What AI excels at is generating local function code based on a prompt, but it never pauses to ask: “How does this new module relate to the 30 existing modules?” “This file is already 46,000 lines — shouldn’t it be split?” “Has the security audit for this dependency been done?”

The result is exactly what the Claude Code source leak revealed: feature-complete, but structurally like a pile of sand. Each module works independently, but put them together and there’s no skeleton. No layer isolation means that once any single module is compromised, lateral movement encounters virtually zero resistance.

The deeper issue: AI coding has eliminated refactoring. Human engineers, when a codebase grows to a certain size, will say “stop — we need to refactor.” AI does not. AI only continues to pile onto the existing structure. Each round of piling increases complexity, increases coupling, increases technical debt. A 60% reduction in refactoring means technical debt is accumulating at an unprecedented pace.
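Duplication of the kind GitClear measures is detectable with even a crude tool. A sketch of the standard approach — hashing normalized k-line windows and reporting blocks that occur more than once (window size and function name are this author's choices, not GitClear's methodology):

```python
import hashlib
from collections import defaultdict

def duplicated_windows(files: dict[str, str], window: int = 5):
    """Find k-line blocks that appear in more than one place.

    `files` maps a filename to its text. Whitespace is stripped per
    line so re-indented copies still match. Returns a dict mapping a
    block hash to its (filename, line number) occurrences.
    """
    index = defaultdict(list)
    for name, text in files.items():
        lines = [ln.strip() for ln in text.splitlines()]
        for i in range(len(lines) - window + 1):
            block = "\n".join(lines[i:i + window])
            if not block.strip():
                continue  # ignore windows that are entirely blank
            key = hashlib.sha1(block.encode()).hexdigest()
            index[key].append((name, i + 1))
    return {k: v for k, v in index.items() if len(v) > 1}
```

Running such a detector in CI converts "AI keeps generating near-copies" from an anecdote into a gating metric.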

Dumpster Fire Dynamics

AI generates flawed code → AI patches the flawed code → the patch produces new flaws → AI patches again. Each cycle, technical debt accumulates and architectural decay deepens. And the people who could actually perform architecture-level refactoring have already been eliminated. The 512,000-line Claude Code is the terminal product of this loop. Even after the source leak, any upgrades will be mere band-aids — because no one can perform architectural refactoring on a system that has no skeleton.

07 · Information Silos

Everyone Scores Full Marks in Their Own Box, but They Can’t See the Exam

The Systemic Blindness That Enables Every Failure

The frontend developer doesn’t know what the packages they npm install actually do under the hood. The backend engineer doesn’t know whether the security scanning tools in the CI/CD pipeline are themselves secure. The security team doesn’t know that the business division uploaded classified files to a public CMS. The CMS operator doesn’t know that the files they uploaded exceed their clearance level.

Everyone is making “correct” local decisions inside their own information cocoon, but the system as a whole is collapsing.

The more abstraction layers, the deeper the information isolation. Cloud services isolate hardware information. Frameworks isolate lower-layer information. AI isolates code logic information. Microservices isolate system-wide information. Division of labor isolates organization-wide information. And information isolation is precisely security’s greatest enemy — attackers do not isolate. TeamPCP simultaneously understands GitHub Actions, PyPI, Docker Hub, npm, and Kubernetes. Their view penetrates the full stack; the defenders’ view is fragmented.

Frontend ≠ Backend

Backend ≠ Ops

Ops ≠ Security

Security ≠ Business

Business ≠ Base Layer

Systemic Blind Spot

The Information View Asymmetry

Attackers see the complete chain; defenders see only their own fragments. That is the nature of the asymmetry. Not a technology asymmetry, not a resource asymmetry — an information view asymmetry. Attackers have architectural thinking; defenders have only division-of-labor thinking. If the veterans were still on the regular army’s side, they’d say one thing: “Hold on — let me look at the dependency tree before we ship.” That single sentence might have prevented the entire disaster. But in organizations that worship speed, it is the least welcome sentence of all.
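"Looking at the dependency tree" is not an abstraction either. In the npm ecosystem, the v2/v3 `package-lock.json` lists every installed package — direct and transitive — under a `packages` map keyed by install path. A sketch that enumerates them (function name is illustrative):

```python
import json
from pathlib import Path

def installed_packages(lockfile_path: str) -> list[tuple[str, str]]:
    """List every package pinned in an npm v2/v3 package-lock.json.

    The `packages` map is keyed by install path ("node_modules/foo",
    "node_modules/foo/node_modules/bar", ...). The number of entries
    is the number of codebases the project implicitly trusts.
    """
    lock = json.loads(Path(lockfile_path).read_text())
    packages = []
    for install_path, meta in lock.get("packages", {}).items():
        if not install_path:
            continue  # the "" key is the root project itself
        name = install_path.split("node_modules/")[-1]
        packages.append((name, meta.get("version", "?")))
    return sorted(packages)
```

For a typical modern frontend project the list runs to four figures, which is exactly why nobody looks — and exactly why someone must.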

08 · The Antidote

Open Architecture: The Security Paradigm for the Post-Code Era

The Return of Architectural Thinking

Traditional open source shares code — the output of execution. Code can be poisoned, leaked, rot, and needs maintenance. The March 31, 2026 Claude Code leak is the ultimate cautionary tale of this path.

The Open Architecture paradigm proposes a fundamentally different approach: open-source design blueprints, not code. Three Markdown documents — the Design Blueprint (defining “what to build”), the Construction Blueprint (defining “how to build it,” with 29 task cards), and the Launch Specification (defining “with what discipline to build it”) — zero lines of code, any AI generates the complete software locally from scratch.

| Attack Vector | Traditional Open Source | Open Architecture |
| --- | --- | --- |
| npm/PyPI Poisoning | ⚠️ Users blindly install | ✅ Immune — no packages exist to be poisoned |
| Source Map Leak | ⚠️ 57MB of source code fully exposed | ✅ Immune — no code exists to be leaked |
| CI/CD Hijacking | ⚠️ Build pipeline can be hijacked | ✅ Immune — no CI/CD pipeline exists |
| Blueprint Poisoning | N/A | ✅ Logical self-consistency defense — poisoned blueprints cannot produce a working system |
| Maintenance Burden | ⚠️ Issues flood in, dependency upgrades | ✅ Zero maintenance — Markdown doesn’t expire |

The LiteClaw project validated this paradigm: three Markdown documents, approximately 2,700 lines of pure text, zero lines of code. Independent build tests in Korean, Chinese, and English all successfully generated complete, fully tested software products. One blueprint, three languages, three independent builds, one result.

Paradigm Manifesto

Code is not the moat — thinking is. In the AI era, open-sourcing code is sharing “the fish.” Open-sourcing design blueprints is sharing “the blueprint for building the fishing rod.” And the blueprint itself never expires — because architectural thinking never expires. Seeds don’t fear poisoning — because tampered genes cannot grow into viable plants. Seeds don’t need maintenance — because growth is the work of soil and sunlight.

09 · Warnings & Recommendations

To AI Companies, to Engineers, to the Industry

A Wake-Up Call

To AI Companies: You spent billions training models and conducting alignment research, only to be defeated by a CMS default permission and a bundler configuration. Security is not a model-layer problem — it is a whole-chain problem. You cannot defend only against AI risks while ignoring human risks — attackers don’t pick doors, they walk through whichever one is unlocked. Security is a barrel problem: the shortest stave determines the water level.

To Engineers: If you have never reviewed your dependency tree line by line, if you don’t know what the packages you npm install actually do under the hood, if you think security is the security team’s job — then you are the “script kiddie” in this paper’s title. This is not an insult; it is a diagnosis. The treatment: start relearning from the bottom layer. Understand every layer beneath your feet.
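A concrete place to start: npm packages can declare `preinstall`, `install`, and `postinstall` lifecycle scripts that run arbitrary commands with your privileges at install time — the classic supply-chain foothold. A sketch that surfaces them from an installed tree (function name and the treatment of unreadable manifests are this author's choices):

```python
import json
from pathlib import Path

LIFECYCLE = ("preinstall", "install", "postinstall")

def install_scripts(node_modules: str) -> dict[str, dict[str, str]]:
    """Map package name -> lifecycle scripts that run (or ran) with
    your privileges at `npm install` time."""
    findings = {}
    for manifest in Path(node_modules).glob("**/package.json"):
        try:
            meta = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, UnicodeDecodeError):
            continue  # skip malformed or non-UTF-8 manifests
        scripts = meta.get("scripts") or {}
        hooks = {k: v for k, v in scripts.items() if k in LIFECYCLE}
        if hooks:
            findings[meta.get("name", str(manifest.parent))] = hooks
    return findings
```

Reading that output for one real project is a faster cure for "security is someone else's job" than any lecture.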

To the Industry: Stop measuring productivity by lines of code. Stop measuring team efficiency by release velocity. Start asking: “Can our architecture survive a security audit?” “How many people on our team truly understand the lower layers?” “When was the last time a human reviewed our dependency tree line by line?”

Final Warning

The industry traded safety for speed, traded experience for youth, traded judgment for AI. Now the bill has arrived. The two leaks in a single week of March 2026 are not the end — they are the beginning. If architectural thinking is not rebuilt at the fundamental level — not at the code level, but at the human level — the next incident is only a matter of time. And next time, the attackers may not just observe your source code — they will exploit it.

10 · Conclusion

The Era of Script Kiddies Must End

The causal chain of this paper:

(1) The training path for engineers shifted after 2010 from “vertical full-stack” to “horizontal shallow layers,” and architectural thinking went from a required course to a non-existent one.

(2) AI programming tools accelerated this trend — code generation replaced code understanding, feature piling replaced architectural design, and 512,000 lines of skeletonless code became the norm.

(3) The security boundary between frontend and backend vanished (React2Shell), yet engineers’ cognition remains stuck in the old world. In 2025, more than 6,227 XSS vulnerabilities were recorded — a problem from the 2000s, still unsolved.

(4) Information silos left defenders with only fragmented views, while attackers retained complete architectural thinking. The essence of the offense-defense asymmetry is a generational fracture.

(5) Anthropic’s two leaks in one week — CMS misconfiguration + unexcluded source map — are the most visible symptom of this structural problem. It wasn’t that someone made a mistake — it’s that no one in the entire system possessed the architectural instinct to say “let me check the dependency tree first.”

(6) The antidote is not more code, more tools, more security products. The antidote is the return of architectural thinking — starting with how we train people, starting with design blueprints, starting with “security is Layer Zero.”

One-Sentence Conclusion

The regular army eliminated its own architects, but the guerrillas kept every single one. The industry traded safety for speed, experience for youth, judgment for AI. The bill has arrived — and this is only the first page.

References & Data Sources

[1] GitHub/instructkr. “Claude Code Source — Leaked Source (2026-03-31).” Security researcher Chaofan Shou discovered the leak; 512K lines reconstructed via npm source map.

[2] Fortune. “Anthropic left details of an unreleased model in a public database.” March 26, 2026. Nearly 3,000 internal documents publicly exposed due to CMS misconfiguration.

[3] DEV Community. “Claude Code’s Entire Source Code Was Just Leaked via npm Source Maps.” March 31, 2026. Full technical analysis.

[4] CVE-2025-55182 (React2Shell). CVSS 10.0 (Maximum). React Server Components deserialization vulnerability, unauthenticated RCE. Called “the Log4Shell of the Frontend.”

[5] VulnCheck. “State of Exploitation 2026.” 884 known exploited vulnerabilities in 2025; 28.96% exploited on the day of CVE publication.

[6] Cycode. “2026 State of Product Security in the AI Era.” 92% of organizations use AI coding assistants; 81% lack complete visibility.

[7] GitClear. “Analysis of 130 Million Lines of Code.” AI coding era: code duplication up 48%, refactoring down 60%.

[8] CodeRabbit. AI-generated code issue rate is 1.7× that of human code.

[9] Stanford University. AI-assisted developers produce less secure code while exhibiting false confidence in its security.

[10] Getastra. “Common Web Application Vulnerabilities 2026.” 48,448 CVEs in 2025 (+17% YoY); 6,227+ XSS vulnerabilities.

[11] IEEE Spectrum. “AI Coding Degrades: Silent Failures Emerge.” January 2026. Newer LLM versions produce code with “silent failures.”

[12] Anthropic. “A Postmortem of Three Recent Issues.” September 2025. Acknowledged three infrastructure bugs in Aug–Sep caused response quality degradation.

[13] LEECHO Global AI Research Lab. “AI Cybersecurity Risk Analysis Report.” February 13, 2026.

[14] LEECHO Global AI Research Lab. “An AI Industry Lacking Humanism Will Deliver Neither Premium Nor Paid Value.” March 25, 2026.

[15] LEECHO Global AI Research Lab. “Open Source in the AI Era: Publishing Design Thinking, Blueprints, and SOP Flowcharts.” March 28, 2026.

[16] The Hacker News. “TeamPCP Backdoors LiteLLM.” March 2026. Supply chain poisoning attack affecting 36% of cloud environments.

[17] J.D. Hodges. “Claude AI Usage Limits: What Changed in 2026.” Post-QuitGPT movement, Claude user surge strains infrastructure.

[18] Alphaguru.ai. “What’s Going On with Claude Code?” March 2026. Systematic documentation of the Claude Code quality degradation timeline.

“Hold on — let me look at the dependency tree before we ship.”
— The sentence every veteran has said, and every organization refused to hear

© 2026 LEECHO Global AI Research Lab
