Agent-first Node.js CPU, memory & experimental async profiler.
Spawns or attaches to your program, captures selected profile kinds plus timed runtime signals,
and emits a structured JSON report that humans and AI agents can act on directly.
Lanterna is built so its output is useful to an AI agent, not just a human reader. Instead of a flamegraph, you get a categorized, correlated, and actionable `LanternaReport` — ready to pipe into an LLM or a CLI tool.
Most Node.js profilers were designed for a human staring at a flamegraph. That's a problem when an AI agent is doing the investigation: a flamegraph isn't parseable, hot stacks aren't categorized, and "what should I fix first?" requires a human to interpret the visual.
Lanterna takes a different stance:
- Structured JSON, not pixels. The `LanternaReport` is a stable schema — hotspots, allocators, async chains, GC pauses, event-loop lag, and findings — that an agent can read, correlate, and act on directly.
- Detectors, not just data. 17 built-in detectors emit categorized `findings` (sync crypto, blocking I/O, deopt loops, memory growth, orphan async resources, …) with `confidence` and `proofLevel` so consumers know when to trust a hypothesis vs. require corroboration.
- CPU + memory + async in one capture. Combine kinds in a single run; cross-kind detectors like `alloc-in-hot-path` and `hot-async-context` surface the highest-priority fixes (something flamegraph tools can't represent).
- Spawn or attach. Profile a CLI, a server under load, or a live production process — same report shape, same detector surface.
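As a rough sketch, those sections might map onto a report like this. Every field name below is illustrative, not the authoritative schema; see docs/report-schema.md for the real shape:

```json
{
  "schemaVersion": 2,
  "capture": { "kinds": ["cpu", "memory"], "durationMs": 30000 },
  "hotspots": [{ "function": "hashPassword", "selfShare": 0.41, "category": "crypto" }],
  "allocators": [{ "function": "renderTemplate", "bytes": 52428800 }],
  "gc": { "totalPauseMs": 1240 },
  "eventLoop": { "p99LagMs": 180 },
  "findings": [{ "id": "sync-crypto-on-hot-path", "confidence": 0.92 }]
}
```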
| Tool | Primary output | CPU | Memory | Async | Findings / detectors | Agent-friendly |
|---|---|---|---|---|---|---|
| Lanterna | Structured JSON (+ text/markdown/agent renderers) | ✅ | ✅ | ✅ (experimental) | ✅ 17 built-in, pluggable | ✅ |
| `node --prof` / `--cpu-prof` | V8 isolate log / `.cpuprofile` | ✅ | — | — | — | ❌ |
| 0x | HTML flamegraph | ✅ | — | — | — | ❌ |
| Clinic.js (Doctor / Flame / Bubbleprof) | HTML dashboards | ✅ | ❌ | ✅ | ❌ | ❌ |
| Chrome DevTools (inspector) | Interactive UI | ✅ | ✅ | — | — | ❌ |
When to reach for something else:
- You want a flamegraph for human inspection. 0x and Chrome DevTools are purpose-built for that.
- You're already using Clinic.js's Doctor diagnostics workflow. Clinic works well for one-shot human triage.
- You need raw V8 internals (deoptimization traces, ICs, etc.). Use `--prof` and the V8 tooling directly.
Lanterna is the right fit when the consumer of the report is an agent or an automated pipeline that needs categorized signals, not pixels.
- Two capture modes — `lanterna run` to spawn & profile a command, `lanterna attach` to connect to a live process via the inspector.
- Three profile kinds — opt in with `--kind`: `cpu` (V8 sampling profiler, default), `memory` (heap allocation profile + RSS series), and `async` (experimental async-resource profiling). Combine kinds by repeating `--kind` (`--kind cpu --kind memory`) or using commas (`--kind cpu,memory`).
- Enriched `LanternaReport` — categorized hotspots, hot stacks, GC pauses, event-loop lag, allocator ranking, async chains, capture-integrity flags.
- 17 built-in detectors across CPU, memory, and async kinds — see the Built-in detectors section below.
- Stable JSON schema with finding `confidence` and `proofLevel` fields so consumers can distinguish direct sampled evidence from heuristics.
- Extensible — ship your own detectors and profile kinds as plugins.
```shell
# Install
npm install -g @lanterna-profiler/cli

# or run without installing
npx -y @lanterna-profiler/cli --help
```

```shell
# Profile a CLI script for 30 s and read the report
lanterna run --duration 30s --output report.json -- node app.js
lanterna report report.json --format text
lanterna report report.json --format agent --output report.agent.md

# Profile a server with representative load
lanterna run \
  --duration 30s \
  --wait-for-url http://127.0.0.1:3000/health \
  --workload "npx -y autocannon http://127.0.0.1:3000" \
  --output report.json \
  -- node server.js

# Memory leak hunt with start/end heap snapshots
lanterna run --kind memory --heap-snapshot-analysis --duration 60s -- node app.js
```

Ctrl+C stops profiling early and still emits a final report.
Lanterna ships 17 detectors out of the box. Each emits a `Finding` in the report with `confidence` and `proofLevel` so consumers can distinguish direct sampled evidence from heuristics.
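For illustration, a single finding could look like this. All field names besides `confidence` and `proofLevel` are assumptions, and the values are made up; the authoritative shape lives in docs/report-schema.md:

```json
{
  "id": "sync-crypto-on-hot-path",
  "kind": "cpu",
  "confidence": 0.92,
  "proofLevel": "sampled",
  "message": "pbkdf2Sync accounts for ~41% of sampled CPU time",
  "evidence": { "frames": ["hashPassword (auth.js:42)"], "selfTimeShare": 0.41 }
}
```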
**CPU kind (8)**
| ID | What it flags |
|---|---|
| `sync-crypto-on-hot-path` | Synchronous crypto calls (`pbkdf2Sync`, `scryptSync`, …) dominating CPU |
| `blocking-io` | Synchronous `fs` / `zlib` / `dns` calls on hot stacks |
| `json-on-hot-path` | `JSON.parse` / `JSON.stringify` dominating CPU |
| `excessive-gc` | High GC pause time relative to wall time |
| `event-loop-stall` | Long event-loop lag spikes correlated with stack samples |
| `deopt-loop` | V8 deoptimisation cycles repeatedly hitting the same function |
| `require-in-hot-path` | Dynamic `require()` resolved on hot stacks (cold-start surprise) |
| `node-modules-hotspot` | A third-party dependency dominating CPU |
**Memory kind (4)**
| ID | What it flags |
|---|---|
| `memory-growth` | Sustained heap / RSS growth over the capture window |
| `large-allocator` | A single allocator responsible for a dominant share of bytes |
| `external-buffer-pressure` | Off-heap pressure (Buffers, ArrayBuffers) |
| `alloc-in-hot-path` | Allocators that are also CPU hot stacks — double impact, top-priority fix (cross-kind: requires both `cpu` and `memory`, auto-skips otherwise) |
**Async kind (experimental, 5)**
| ID | What it flags |
|---|---|
| `long-await` | `await` expressions exceeding the wait-time threshold |
| `orphan-async-resource` | Async resources created but never resolved / destroyed |
| `deep-async-chain` | Deeply nested `await` chains amplifying latency |
| `microtask-flood` | Microtask queue saturation starving the event loop |
| `hot-async-context` | Async contexts dominating CPU (cross-kind: requires both `cpu` and `async`, auto-skips otherwise) |
Thresholds are configurable via `.lanterna.json` — see docs/configuration.md. To ship your own detectors, see docs/extending/detectors.md.
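A hypothetical threshold override might look like the following. The key names here are illustrative only; check docs/configuration.md for the real ones:

```json
{
  "detectors": {
    "long-await": { "waitTimeThresholdMs": 250 },
    "memory-growth": { "minGrowthMbPerMin": 5 },
    "event-loop-stall": { "enabled": true }
  }
}
```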
| Environment | Minimum | Why |
|---|---|---|
| Node.js running Lanterna | >= 22 | Active LTS lines (22, 24). |
| Node.js running the profiled program | >= 12 | Needs `monitorEventLoopDelay` and `PerformanceObserver` GC entries. |
The profiled target must support the V8 inspector. If the inspector cannot start, Lanterna fails fast — it never silently falls back to a weaker mode.
| Package | What it is |
|---|---|
| `@lanterna-profiler/cli` | The `lanterna` binary. |
| `@lanterna-profiler/core` | Capture orchestration, profile kinds, analysis pipeline, report builder. |
| `@lanterna-profiler/detectors` | Default detector pack for CPU, memory, and async kinds, plus plugin helpers. |
Start here, then dive into whichever topic you need:
- docs/getting-started.md — install, first capture, reading the output.
- docs/cli.md — full CLI reference and option groups.
- docs/configuration.md — `.lanterna.json` reference.
- docs/programmatic-api.md — `runProfile`, `attachProfile`, low-level capture and analysis APIs.
- docs/report-schema.md — `LanternaReport` shape (schema v2).
- docs/reading-a-report.md — interpretation playbook and common mistakes.
- docs/signal-quality.md — confidence, integrity flags, degradation modes.
- docs/architecture.md — capture flow and enrichment pipeline.
- docs/troubleshooting.md — symptom-keyed fixes.
- docs/performance-overhead.md — measured startup cost and steady-state overhead per kind.
Per-kind details:
- docs/kinds/cpu.md — CPU kind, hotspots, event loop, GC, deopts.
- docs/kinds/memory.md — memory kind, allocators, RSS series, heap snapshots.
- docs/kinds/async.md — async kind (experimental), instrumentation modes, attach caveats.
Extending Lanterna:
- docs/extending/detectors.md — write a finding detector.
- docs/extending/profile-kinds.md — write a brand-new profile kind.
- docs/extending/plugin-loading.md — how plugins are discovered and packaged.
Runnable examples:
- examples/ — three tiny standalone scripts that exhibit a CPU hotspot, a memory leak, and an event-loop stall, with the matching `lanterna` command for each.
For agents (Claude Code skill):
- skills/lanterna-profiler/SKILL.md — the agent-oriented profiling workflow.
Install the skill into an agent workspace with:
```shell
npx skills add arkerone/lanterna --skill lanterna-profiler
```

```
packages/
  core/       @lanterna-profiler/core — capture orchestration, kinds, pipeline, report
  detectors/  @lanterna-profiler/detectors — default detector pack (CPU + memory + async) and plugin helpers
  cli/        @lanterna-profiler/cli — `lanterna` binary
docs/                       — human documentation
skills/lanterna-profiler/   — agent workflow for Claude Code
```
Dependency direction: cli → core, cli → detectors, detectors → core. core never imports detectors.
```shell
npm install
npm run build   # builds all three packages
npm test        # runs every package's vitest suite
```

Per-package work: `npm run build -w @lanterna-profiler/core`, `npm test -w @lanterna-profiler/cli`, etc.
Tests use Vitest and cover frame classification, hotspot aggregation, detector evidence attribution, and live profiling paths — including short-lived processes and real event-loop stall correlation.
Each package ships its own changelog, generated by Changesets.
MIT.