Forensic codebase analysis. Find what's really wrong.
Point code-autopsy at any codebase and get a full diagnostic report — architecture map, tech debt score, complexity heatmap, security risks, and 12 more diagnostics. The output is an interactive HTML report you can share with your team, attach to a PR, or use in a technical review.
Like getting a full-body MRI for your code. It finds what you knew was wrong, what you suspected was wrong, and what you had no idea was wrong.
Each codebase gets a health score (0–100) broken into four categories, each scored independently:
| Category | What it measures |
|---|---|
| 🏗️ Architecture | Dependency graph, circular deps, layer violations, module cohesion |
| 🔬 Code Quality | Complexity heatmap, dead code, naming consistency, error handling |
| 🔧 Maintenance | Dependency health, test coverage, documentation, git health |
| ⚠️ Risks | Security surface, performance red flags, scalability bottlenecks, migration difficulty |
Visual dependency graph showing module relationships, circular dependencies, and "god modules" that everything imports.
Every file colored by composite complexity: nesting depth × function length × cyclomatic complexity × parameter count. Red = needs attention. Green = clean.
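That composite can be sketched as a simple product of the four signals — any single extreme value dominates the score. The function names and thresholds below are illustrative assumptions, not the skill's actual implementation:

```python
def composite_complexity(nesting_depth, func_length, cyclomatic, param_count):
    """Multiply the four signals; one pathological dimension is enough to go red."""
    return nesting_depth * func_length * cyclomatic * param_count

def heat_color(score, red_threshold=500, green_threshold=100):
    """Map a composite score to a heatmap bucket (thresholds are illustrative)."""
    if score >= red_threshold:
        return "red"    # needs attention
    if score <= green_threshold:
        return "green"  # clean
    return "yellow"

# A 4-levels-deep, 40-line function with cyclomatic complexity 8 and 3 params:
print(heat_color(composite_complexity(4, 40, 8, 3)))  # 3840 → "red"
```

Multiplying rather than averaging is the point: a short, flat function with many parameters still surfaces, and a deeply nested one can't hide behind brevity.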
Each diagnostic produces a score (0–100), concrete evidence, and a specific recommendation. No vague warnings — every finding is backed by file names, line numbers, and counts.
| # | Diagnostic | What it catches |
|---|---|---|
| D1 | Dependency Graph | Circular deps, god modules, coupling score |
| D2 | Layer Violations | UI importing data layer, reverse dependencies |
| D3 | Module Cohesion | Directories with unrelated files |
| D4 | Entry Points | Duplicated initialization, spaghetti startup |
| D5 | Complexity | Functions with 12 nesting levels, 200-line methods |
| D6 | Dead Code | Unused exports, unreachable files, stale TODOs |
| D7 | Naming | snake_case in a camelCase codebase |
| D8 | Error Handling | Empty catch blocks, unprotected API endpoints |
| D9 | Dependencies | Deprecated packages, dependency bloat |
| D10 | Test Coverage | Untested controllers, missing integration tests |
| D11 | Documentation | README quality, JSDoc coverage, architecture docs |
| D12 | Git Health | Commit frequency, bus factor, contributor spread |
| D13 | Security | Hardcoded secrets, SQL injection, XSS vectors |
| D14 | Performance | N+1 queries, missing pagination, bundle bloat |
| D15 | Scalability | Missing connection pools, synchronous I/O in async |
| D16 | Migration | Framework coupling, version lock-in, upgrade effort |
2×2 grid plotting each risk by likelihood × impact. Instantly shows where the real danger is.
Prioritized by impact ÷ effort — not severity alone. A critical fix that takes 15 minutes ranks above a medium fix that takes a week.
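The ranking rule is just that ratio. A minimal sketch (the finding fields and values here are hypothetical):

```python
findings = [
    {"name": "hardcoded secret",    "impact": 9, "effort_hours": 0.25},  # critical, 15-minute fix
    {"name": "refactor god module", "impact": 5, "effort_hours": 40},    # medium, week-long fix
]

# Sort by impact ÷ effort, descending: cheap high-impact fixes rank first.
ranked = sorted(findings, key=lambda f: f["impact"] / f["effort_hours"], reverse=True)
print([f["name"] for f in ranked])  # the 15-minute critical fix comes first
```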
Commit frequency chart showing team velocity over the last 6 months.
```sh
curl -fsSL https://raw.githubusercontent.com/OneSpiral/code-autopsy/main/install.sh | bash
```

Or via the plugin marketplace:

```
/plugin marketplace add OneSpiral/code-autopsy
```

Or clone it manually:

```sh
# Claude Code
git clone https://github.com/OneSpiral/code-autopsy.git ~/.claude/skills/code-autopsy

# Codex CLI
git clone https://github.com/OneSpiral/code-autopsy.git ~/.codex/skills/code-autopsy

# Pi
git clone https://github.com/OneSpiral/code-autopsy.git ~/.pi/agent/skills/code-autopsy

# OpenCode
git clone https://github.com/OneSpiral/code-autopsy.git ~/.opencode/skills/code-autopsy
```

Copy SKILL.md into your agent's skills directory. Single file, no dependencies.
> Run a code autopsy on this project
or
> /code-autopsy
Output:
- `autopsy-report.html` — interactive visual report
- `autopsy-summary.json` — machine-readable summary
> /code-autopsy architecture # Architecture diagnostics only (D1–D4)
> /code-autopsy risks # Risk diagnostics only (D13–D16)
> /code-autopsy src/api/ # Analyze a specific directory
Run autopsies periodically and compare the JSON summaries:
> Compare autopsy-summary-jan.json vs autopsy-summary-mar.json
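The comparison can also be scripted directly. This sketch assumes the summary JSON exposes a top-level `scores` object keyed by category — the exact schema is an assumption, so inspect your own `autopsy-summary.json` first:

```python
import json

def score_deltas(old_path, new_path):
    """Return per-category score changes between two autopsy summaries.

    Assumed shape: {"scores": {"architecture": 72, "security": 55, ...}}
    """
    with open(old_path) as f:
        old = json.load(f)
    with open(new_path) as f:
        new = json.load(f)
    return {
        cat: new["scores"][cat] - old["scores"][cat]
        for cat in old["scores"]
        if cat in new.get("scores", {})
    }
```

A positive delta means the category improved between runs; wiring this into CI lets you fail a build when a score regresses.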
A complete example report is included:
- `examples/demo-report.html` — interactive HTML report for a fictional "acme-api" project (score: 64/C)
Open it in your browser to see what the output looks like.
```
       Codebase
           │
           ▼
┌──────────────────────┐
│ PHASE 1: RECON       │  Automated file scan, git history,
│ (silent, no prompts) │  dependency tree, test inventory
└──────────┬───────────┘
           │
           ▼
┌──────────────────────┐
│ PHASE 2: ANALYSIS    │  16 diagnostics across 4 categories
│ (strategic sampling) │  Read key files, not everything
└──────────┬───────────┘
           │
           ▼
┌──────────────────────┐
│ PHASE 3: SCORING     │  0–100 per diagnostic, weighted
│ (quantified)         │  into category and overall scores
└──────────┬───────────┘
           │
           ▼
┌──────────────────────┐
│ PHASE 4: REPORT      │  Interactive HTML + JSON summary
│ (visual, shareable)  │  Dark/light theme, responsive
└──────────────────────┘
```
The agent doesn't read every file. It uses file sizes, import graphs, git blame, and naming patterns to strategically sample the most important files — typically 15–30% of the codebase for 95% of the signal.
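That sampling heuristic can be sketched as a ranking over cheap metadata signals. The weights and signal names below are illustrative assumptions, not the skill's actual scoring:

```python
def sample_priority(file_stats, budget_ratio=0.25):
    """Rank files by cheap signals and keep the top slice for deep reading.

    file_stats: list of dicts with 'path', 'size' (bytes), 'import_count'
    (how many modules import this file), and 'recent_commits' — all
    obtainable without reading any file contents.
    """
    def signal(f):
        # Heavily-imported, frequently-changed, large files carry the most signal.
        return f["import_count"] * 3 + f["recent_commits"] * 2 + f["size"] / 1000

    ranked = sorted(file_stats, key=signal, reverse=True)
    keep = max(1, int(len(ranked) * budget_ratio))
    return [f["path"] for f in ranked[:keep]]
```

Fan-in (import count) is weighted highest because a bug in a widely-imported module affects everything downstream; leaf files rarely justify a deep read.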
- Measure, don't moralize. "You have 3 circular dependencies" — not "circular dependencies are bad."
- Quantify everything. Every diagnostic produces a number.
- Prioritize by ROI. Actions sorted by impact ÷ effort.
- No false positives. If uncertain, don't flag it.
- Context matters. A prototype scores differently than a production system.
| Agent | Status | Install Location |
|---|---|---|
| Claude Code | ✅ | ~/.claude/skills/code-autopsy/ |
| Codex CLI | ✅ | ~/.codex/skills/code-autopsy/ |
| Pi | ✅ | ~/.pi/agent/skills/code-autopsy/ |
| OpenCode | ✅ | ~/.opencode/skills/code-autopsy/ |
| Gemini CLI | ✅ | Copy SKILL.md to skills directory |
| Any agent | ✅ | Single SKILL.md file, universal |
**How long does a full autopsy take?** 2–5 minutes for a typical project (< 100K lines). The agent reads strategically, not exhaustively.

**Does it work on monorepos?** Yes. Point it at a specific package: `/code-autopsy packages/api/`

**What languages are supported?** Best results with TypeScript, JavaScript, Python, Go, Rust, Java. The framework (file structure analysis, git health, dependency scanning) works with any language.

**Can I run it on someone else's project?** Absolutely. Clone the repo, point code-autopsy at it, share the report.

**How is this different from SonarQube / CodeClimate / etc.?** Those are CI tools that run on every commit. code-autopsy is a one-shot forensic analysis for when you need to deeply understand a codebase — inheriting a project, evaluating a codebase, or making a refactor/rewrite decision. It produces a single shareable report, not a dashboard.
See CONTRIBUTING.md. High-value contributions:
- New diagnostics
- Language-specific detection improvements
- Report visual enhancements
- Example reports from real open-source projects
- ghost-writer — Reverse-engineer any author's writing style, then write in their voice. 24-dimension forensic style analysis.
MIT — do whatever you want with it.