Your Claude Code insurance policy — best tool for the job up front, code that connects before it breaks.
Claude Code has five problems that cost you time and money on every session. ToolDispatch covers all five.
| Problem | Module | Available |
|---|---|---|
| Claude picks from defaults — the best tool for your task is one you've never heard of | Dispatch | BYOK / Free / Pro |
| Claude renames a function and misses three callers — silent until runtime | XFBA | Free / Pro |
| You can't see what else breaks when a change lands — cascade failures hide until production | XSIA | Free / Pro |
| Token hogs and peak hours drain your budget invisibly — you only notice at the end | XFTC | All tiers |
| Sessions compact and context is lost — decisions, files changed, open questions — gone | XF-MEM | Pro |
One install. Five modules. Every decision logged.
The Claude Code tool ecosystem has 50,000+ options across plugins, skills, and MCPs. Claude picks from defaults. The best tool for what you're building right now — you've probably never heard of it.
Dispatch runs as three hooks wired together:
On every message (Hook 1): Sends your last few messages to a small model for a ~100ms classification pass. If it detects a task shift — say you moved from debugging a Flutter widget to writing a test suite — it maps the shift to a category and immediately surfaces tool recommendations, grouped by type: Plugins, Skills, and MCPs. You see them once per topic per session.
Example proactive output (on task shift):
```
[Dispatch] Recommended tools for this flutter-building task:

Plugins:
  • flutter-mobile-app-dev — Expert Flutter agent for widgets, state, iOS/Android.
    Install: claude install plugin:anthropic:flutter-mobile-app-dev

Skills:
  • VisionAIrySE/flutter@flutter-dev — Flutter dev skill for widget building.
    Install: claude install VisionAIrySE/flutter@flutter-dev

MCPs:
  • fluttermcp — Dart analysis and widget tree inspection server.
    Install: claude mcp add fluttermcp npx -y @fluttermcp/server

Not sure which to pick? Ask me — I can explain the differences.
```
If no task shift is detected, Hook 1 exits silently.
Before every tool call (Hook 2): When Claude is about to invoke a Skill, Agent, or MCP tool, Dispatch intercepts it. It searches the marketplace for tools relevant to your current task, scores them against what Claude was about to use, and if a marketplace tool scores 10+ points higher — it blocks the call and surfaces the comparison:
```
[Dispatch] Intercepted: CC is about to use 'superpowers:systematic-debugging' (Skill) for Flutter Fixing.
CC confidence score: 62/100

── Plugins ──
1. flutter-mobile-app-dev
   Relevance 91 · Signal 78 · Velocity 62   installs:12,400 stars:340 forks:28
   Purpose-built Flutter/Dart agent — widget tree inspection, state, iOS/Android builds.
   Install: claude install plugin:anthropic:flutter-mobile-app-dev && claude

── Skills ──
1. VisionAIrySE/flutter@flutter-dev
   Relevance 84 · Signal 65 · Velocity 55   installs:2,100 stars:88 forks:14
   Flutter dev workflow — widget builds, golden tests, pub dependencies.
   Install: npx skills add VisionAIrySE/flutter@flutter-dev -y && claude
2. superpowers/flutter@flutter-expert
   ⚠ no description — install at your own risk
   Relevance 0 · Signal 42 · Velocity 30   installs:890 stars:12 forks:3

── MCP Servers ──
1. dart-mcp
   Relevance 79 · Signal 58 · Velocity 48   installs:4,200 stars:120 forks:9
   Dart analysis server — static analysis, pub resolve, widget inspection.
   More info: https://github.com/dart-lang/dart-mcp

⚠ Marketplace tools score higher than 'superpowers:systematic-debugging' (Skill) for this task.

Options:
1. Say 'proceed' to continue with 'superpowers:systematic-debugging' (one-time bypass, no restart needed)
2. Install flutter-mobile-app-dev plugin — run /compact first, then install and restart CC
3. Ignore Dispatch for this task — say 'skip dispatch'

Note: Review before installing. Dispatch surfaces tools based on community signals and task context — not a security audit.

Present these options to the user. Wait for their response before taking any action.
```
If no marketplace tool beats Claude's choice by 10+ points, Hook 2 exits silently.
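The pass-through rule is simple enough to sketch. A minimal version, with a hypothetical function name (the real scoring pipeline lives in the repo's Python hooks):

```python
def should_block(cc_score, marketplace_scores, threshold=10):
    """Hook 2 gate (sketch): block only when the best marketplace
    alternative beats Claude's own choice by the full threshold."""
    if not marketplace_scores:  # nothing to compare against -> pass through
        return False
    best = max(marketplace_scores)
    return best - cc_score >= threshold

# A 10+ point gap blocks; a 9-point gap passes through silently.
```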
On session end (Hook 3): Prints a one-line digest so you know Dispatch was running, even when it correctly stayed quiet:
```
[Dispatch] Session: 12 tool calls audited · 0 blocked (all optimal) · 1 recommendation shown
```
XF Boundary Auditor fires on every Edit and Write. Most of the time, Stage 1 completes in ~200ms and you see a green stamp:
```
◈ XFBA 47 modules · 203 edges ✓ 0 violations
```
When something is actually wrong, Claude sees this before the write lands:
```
◈ XFBA This edit will break at runtime.
  evaluator.py:203 — calls rank_tools() with 3 arguments, but it only accepts 2.
  This will throw a TypeError when that code runs.

  [Fix problem] Type "Fix problem" — I'll apply the repair, re-audit, and promise clean
  [Show diff]   Type "Show diff" — show me the exact change before deciding
```
Supported languages: Python, TypeScript, TSX, Dart, and Bash. XFBA indexes your entire project, walks the cross-file call graph, and applies the same contract checks regardless of language. Full-stack coverage — Python backend, TypeScript frontend, Flutter mobile, and Bash scripts, all in one pass.
What it catches:
- Arity mismatches — function called with the wrong number of arguments, across files
- Broken imports — symbol renamed, moved, or deleted while callers still reference the old name
- Missing env vars — hard-coded `os.environ["KEY"]` access where the var isn't confirmed set
- Consumed stubs — functions marked as not implemented that have active callers
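A minimal sketch of the Stage 1 arity check, assuming a prebuilt map of function name to accepted positional-argument count (the real tool builds this index across every file in the project):

```python
import ast

def find_arity_mismatches(source, definitions):
    """Sketch: flag each call whose positional-argument count disagrees
    with the callee's known parameter count. `definitions` maps function
    name -> accepted positional args (from a cross-file project index)."""
    mismatches = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            name = node.func.id
            if name in definitions and len(node.args) != definitions[name]:
                mismatches.append((name, node.lineno, len(node.args), definitions[name]))
    return mismatches

# Example: rank_tools() accepts 2 args, but the first call passes 3.
_example = "rank_tools(a, b, c)\nrank_tools(a, b)\n"
hits = find_arity_mismatches(_example, {"rank_tools": 2})
```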
The four stages:
- Stage 1 (~200ms, always): Cross-language AST scan. Blocks immediately on violations.
- Stage 2 (on escalation): Xpansion cascade analysis — maps the full caller chain using the MECE boundary framework (DATA, NODES, FLOW, ERRORS). Shows consequence-first output.
- Stage 3: Concrete repair plan — each violation gets one specific file-and-line fix.
- Stage 4: Graduated consent — "show me the diff first" until two verified repairs this session, then "apply all" unlocks. Resets each session.
Refactor Mode: /xfa-refactor start "description" — XFBA shifts from blocking to tracking. Violations accumulate without interrupting your work. Run /xfa-refactor end when done to get the consolidated repair list. Useful when you're mid-refactor and know the code is temporarily broken across files.
Every scan leaves a record in .xf/boundary_violations.json. Every repair is logged to .xf/repair_log.json with timestamp and session ID.
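The audit trail might look like this append-only sketch (field names are illustrative, not the shipped schema):

```python
import json
import time
import uuid
from pathlib import Path

def log_violation(record, path=".xf/boundary_violations.json"):
    """Sketch: append one scan result to the JSON log under .xf/,
    stamping it with a timestamp and a session ID. Returns the
    number of records now in the log."""
    p = Path(path)
    p.parent.mkdir(exist_ok=True)
    existing = json.loads(p.read_text()) if p.exists() else []
    existing.append({**record, "ts": time.time(), "session": str(uuid.uuid4())})
    p.write_text(json.dumps(existing, indent=2))
    return len(existing)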
XF System Impact Analyzer runs after XFBA passes. Where XFBA answers "does this edit break a contract?" XSIA answers "what breaks downstream when this change lands?"
A function signature changes. XFBA catches the arity mismatch at the call site. XSIA maps the full blast radius: all callers across all files, any data flows that depend on the return type, any side effects on shared state.
◈ XSIA 3 concerns
HIGH evaluator.py:rank_recommendations() — 4 callers in 3 files will receive wrong
argument count. interceptor.py:82, classifier.py:44, dispatcher.py:119, test_evaluator.py:31.
Fix XFBA violation first — XSIA impact is accurate only on clean code.
MEDIUM category_mapper.py — imports rank_tools from evaluator. If rank_tools changes
signature, this import chain breaks silently at call time, not import time.
LOW stack_scanner.py:detect_stack() — reads shared state modified by rank_recommendations.
Concurrent sessions could see inconsistent state during the edit window.
Six analysis dimensions (Pro): call graph impact, data flow consequences, shared state mutations, error propagation paths, import chain fragility, test coverage gaps.
XFBA is the gate. XSIA is the consequence map. XFBA runs first — if it finds a violation, XSIA still runs but flags that impact analysis is provisional until the violation is fixed. You cannot meaningfully analyze cascade impact on broken code.
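The caller-chain walk is a plain breadth-first traversal over the call graph. A sketch, using a hypothetical caller map that mirrors the example above:

```python
from collections import deque

def blast_radius(callers, changed):
    """Sketch of XSIA's call-graph walk: given a map of
    function -> set of direct callers, return everything that can be
    affected (transitively) by a change to `changed`."""
    seen, queue = set(), deque([changed])
    while queue:
        fn = queue.popleft()
        for caller in callers.get(fn, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

# Hypothetical graph mirroring the example output above:
_callers = {
    "rank_recommendations": {"interceptor", "classifier", "dispatcher", "test_evaluator"},
    "dispatcher": {"main"},
}
impact = blast_radius(_callers, "rank_recommendations")
```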
XF Token Controller runs on every message you send and fires nudges when it detects invisible drains on your context budget. Most of the time it's silent.
What it watches:
- CLAUDE.md size — every line reloads on every message. 300-line config files burn thousands of tokens per session without anyone noticing.
- MCP overhead — each active MCP server adds ~18K tokens of schema overhead per message. Three idle servers = ~54K tokens gone before Claude reads your first word.
- Sub-agent model selection — Claude reaching for Opus when Haiku handles the task.
- Peak hours — during 8am–2pm ET weekdays, session token budgets drain faster.
- Context fill — estimates how full your context window is using actual transcript size plus loaded files.
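A rough sketch of that fill estimate, assuming a simple chars-per-token ratio and a 200K-token window (the shipped heuristic is not documented here):

```python
def estimate_context_fill(transcript_chars, loaded_file_chars,
                          window_tokens=200_000, chars_per_token=4):
    """Sketch: convert transcript size plus loaded files to a rough
    token count, then express it as a percentage of the window.
    The 4 chars/token ratio and 200K window are assumptions."""
    used = (transcript_chars + loaded_file_chars) / chars_per_token
    return min(100.0, 100.0 * used / window_tokens)

# e.g. a 300K-char transcript plus 116K chars of loaded files
# lands at 52% of a 200K-token window.
```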
Free tier: One ghost notification per session showing what Pro would have flagged:
```
[XFTC] Pro would have flagged context usage here — dispatch.visionairy.biz/pro
```
Pro tier: Full nudges, enforcement blocks, and the two-tier context protection system:
```
[XFTC] Context estimated ~52% full — snapshot + compact now to preserve everything.
Run: /xf-mem snapshot then /compact — full session saved to searchable memory, zero loss
```
At 90%, XFTC writes the snapshot directly from the hook — no Claude required. Even if you ignored the 50% nudge.
Sessions end and context is lost. Decisions you made, files you changed, open questions you were tracking — gone. The next session starts from scratch.
XF-MEM is persistent semantic memory for Claude Code. Every session is snapshotted, embedded locally using sentence-transformers (free, no API key), and stored in Supabase pgvector. A Stop hook runs after every response — embedding happens automatically in the background.
At session start: The last snapshot loads automatically via the Build State Protocol. Zero re-explanation needed.
For older sessions: /xf-mem search "query" does semantic retrieval across all stored snapshots — natural language, not keyword matching.
Snapshots are global — stored across all directories, tagged by project, searchable from anywhere.
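Semantic retrieval reduces to nearest-neighbor search over embeddings. A toy sketch with 3-dimensional stand-ins for sentence-transformer vectors (pgvector performs the equivalent ranking server-side):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search_snapshots(query_vec, snapshots, top_k=3):
    """Sketch of /xf-mem search: rank stored snapshot embeddings by
    similarity to the query embedding, return the best IDs."""
    ranked = sorted(snapshots, key=lambda s: cosine(query_vec, s["vec"]), reverse=True)
    return [s["id"] for s in ranked[:top_k]]

# Toy store: two snapshots with hypothetical IDs and tiny vectors.
_store = [
    {"id": "flutter-session", "vec": [0.9, 0.1, 0.0]},
    {"id": "postgres-session", "vec": [0.0, 0.2, 0.9]},
]
best = search_snapshots([1.0, 0.0, 0.0], _store, top_k=1)
```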
Two-tier protection (triggered by XFTC):
- 50% nudge — Claude is prompted to run `/xf-mem snapshot` then `/compact`. Produces a curated summary: what was built, decisions made, files changed, open questions.
- 90% failsafe — XFTC writes the snapshot directly from the hook, without waiting for Claude. Extracts the last 20 transcript messages automatically. Fires even if the 50% nudge was ignored.
| Before XF-MEM | With XF-MEM (Pro) |
|---|---|
| Manual warm-start at session end | Auto-snapshot at 50% + failsafe at 90% |
| Notes stored in MEMORY.md | Embedded in Supabase pgvector |
| Single project, manual retrieval | Cross-directory, semantic search |
| Compact = context loss | Compact = zero loss |
| | BYOK | Free | Pro |
|---|---|---|---|
| Dispatch — proactive recs | ✓ | ✓ | ✓ |
| Dispatch — interceptions | Unlimited | 5/day | Unlimited |
| Dispatch — ranking quality | Configurable | Good | Best (Sonnet) |
| Dispatch — catalog | Live search | Live search | Pre-ranked, 6 sources |
| XFBA (Boundary Auditor) | — | ✓ (within 5 turns) | ✓ Unlimited |
| XSIA (Impact Analyzer) | — | ✓ (within 5 turns) | ✓ Unlimited, 6 dimensions |
| XFTC — CLAUDE.md check | ✓ | ✓ | ✓ |
| XFTC — nudges (MCP, context, model) | — | Ghost (1/session) | ✓ Unlimited |
| XFTC — enforcement blocks | — | — | ✓ |
| XF-MEM — 50% compact nudge | — | — | ✓ |
| XF-MEM — 90% auto-snapshot failsafe | — | — | ✓ |
| XF-MEM — `/xf-mem search` semantic retrieval | — | — | ✓ |
| `/dispatch-compact-md` skill | ✓ | ✓ | ✓ |
| Dashboard | — | — | ✓ |
| Network intelligence | — | — | ✓ |
| Cost | API costs | Free | $10/month |
| Data sharing | None | Task labels only | Task labels only |
```shell
git clone https://github.com/ToolDispatch/Dispatch.git
cd Dispatch
chmod +x install.sh
./install.sh
```

`install.sh` walks you through three things: checking dependencies, registering hooks in `~/.claude/settings.json`, and connecting to the hosted endpoint (or using your own API key). Takes about two minutes.
Start a new Claude Code session after install — hooks load at session startup.
```shell
git clone https://github.com/ToolDispatch/Dispatch.git
cd Dispatch && ./install.sh
export OPENROUTER_API_KEY=sk-or-...   # recommended — free models available
# or: export ANTHROPIC_API_KEY=sk-ant-...   # any Claude model
```

Bring your own key — OpenRouter or Anthropic. Everything runs on your machine, against your key. No data leaves your network. No account needed.
- Dispatch: fully functional, unlimited interceptions, proactive recommendations on every task shift
- XFBA + XSIA: not included — one daily notice in your terminal lets you know what you're missing
You lose the catalog network intelligence, the dashboard, and the Xpansion suite. You keep full Dispatch routing, free forever.
Sign up with GitHub — no API key, no card required. install.sh will ask for your token. Takes 30 seconds.
- Dispatch: 5 turns/day + full proactive recommendations on every task shift
- XFBA: catches broken imports, arity mismatches, and missing env vars on every Edit/Write within your 5 turns
- XSIA: flags edits with systemic impact within your 5 turns
- XFTC: CLAUDE.md size check on every session (all tiers), plus one ghost notification showing what Pro would have flagged
- Everything shuts off once your 5 daily turns are used. Resets at midnight.
What leaves your machine: your last ~3 messages and working directory path, sent to classify the task and discarded immediately. What we store: your GitHub username, usage count, and task type labels (e.g., `flutter-fixing`). No conversation content.
Founding offer: First 300 subscribers lock in $6/month for life. After 300, standard rate applies.
Upgrade at dispatch.visionairy.biz/pro
- Dispatch: unlimited turns, Sonnet ranking, pre-ranked catalog
- XFBA: full Stages 1–4 (AST scan → cascade analysis → repair plan → graduated consent), unlimited
- XSIA: full 6-dimension impact analysis on every edit, unlimited
- XFTC: all nudges, enforcement blocks, context protection
- XF-MEM: auto-snapshot at 50%, failsafe at 90%, semantic search across all sessions
- Dashboard: interception history, contract repair history, provenance log
The catalog is the compounding advantage. The hosted version sees what thousands of developers actually installed after a Dispatch suggestion, which tools they bypassed, and which ones stuck. That signal builds over time and no local setup can replicate it.
- Claude Code (hooks support required — v1.x+)
- Python 3.8+
- Node.js + npx — nodejs.org
- One of: a Dispatch account (free) or an Anthropic API key
The anthropic Python package installs automatically via install.sh.
Most of the time, ToolDispatch is invisible. Hook 1 runs on every message and exits silently unless it detects a shift. Hook 2 runs on every tool call but exits silently unless it finds something meaningfully better. XFBA/XSIA stamp ✓ and pass through. XFTC stays quiet when the session is clean.
When Hook 1 fires (on task shift): A proactive list of recommended tools appears in Claude's context, grouped by Plugins, Skills, and MCPs. Ask Claude to explain the differences, paste an install command, or ignore the list and keep working. Dispatch won't show the same category again this session.
When Hook 2 fires: Claude pauses and shows you the comparison. Three options:
- Say `proceed` — Claude uses its original tool choice, one-time bypass, no restart needed
- Install the top pick — run `/compact` to save session context, paste the install command, restart CC
- Say `skip dispatch` — Dispatch ignores this task type going forward in the session
The threshold is a 10-point gap. If the best marketplace alternative scores 74 and Claude's tool scores 64, Dispatch blocks. A 9-point gap passes through silently.
When XFBA blocks: Claude shows you the violation in plain English, explains whether it looks like a real bug or a false positive, and asks: "Fix it, suppress it, or proceed anyway?" — it waits for your answer.
When XSIA flags: Claude surfaces concerns in plain English, explains whether each looks routine or substantive, and asks: "Fix impact issues or let it ride?" — it waits for your answer.
When XFTC fires: A nudge appears at the top of Claude's next response. Surface-level notices (CLAUDE.md size, MCP overhead) need no action — just awareness. Context fill nudges are actionable: run /xf-mem snapshot then /compact to preserve everything before the window fills.
| Command | How to use | What it does |
|---|---|---|
| `proceed` | Say it conversationally | One-time bypass — Dispatch lets the current tool call through, no restart needed |
| `skip dispatch` | Say it conversationally | Ignore Dispatch for this task type for the rest of the session |
| `/dispatch status` | Slash command | Show session stats — tool calls audited, blocks, recommendations shown |
Coming soon:
| Command | What it will do |
|---|---|
| `/dispatch pause` | Disable both hooks for this session without uninstalling |
| `/dispatch resume` | Re-enable after a pause |
| `/dispatch stack` | Show what stack_scanner detected for the current project |
| `/dispatch why` | Explain the last block — task type, category, top tool score vs CC score |
| `/dispatch ignore [tool]` | Permanently exclude a specific tool from all recommendations |
| `/dispatch feedback good` | Mark the last recommendation as correct (strong positive signal) |
| `/dispatch feedback bad` | Mark the last recommendation as wrong |
| Command | How to use | What it does |
|---|---|---|
| `/xfa-refactor start "description"` | Slash command | Enter Refactor Mode — violations accumulate without blocking |
| `/xfa-refactor end` | Slash command | Exit Refactor Mode — presents consolidated repair list |
When XFBA blocks an edit, Claude reads the options and acts:
- Say `Fix problem` — Claude applies the repair, re-audits, outputs `<promise>XFBA_CLEAN</promise>` when clean
- Say `Show diff` — Claude shows exactly what the repair changes before applying it
- After Show diff, say `Apply fix` — apply the shown change, re-audit, promise clean
- Say `I'll handle it` — allow the edit through; violation logged to `.xf/repair_log.json`
Coming soon:
| Command | What it will do |
|---|---|
| `/xfa pause` | Disable XF Audit blocking for this session (violations still logged) |
| `/xfa resume` | Re-enable after a pause |
| `/xfa report` | Show repair_log.json summary — violations caught, files touched |
| `/xfa clear` | Clear open violations in `.xf/boundary_violations.json` |
| Command | How to use | What it does |
|---|---|---|
| `/xf-mem snapshot` | Slash command | Write a snapshot of the current session to persistent memory |
| `/xf-mem search "query"` | Slash command | Semantic search across all stored session snapshots |
Each recommended tool shows three components so you can judge it yourself:
- Relevance — how well the tool's description matches your specific task, scored by a fast LLM pass. Tools with no description score zero and get a visible warning.
- Signal — popularity as a quality proxy, weighted across installs, stars, and forks. Log-scaled so a newer tool with 500 installs isn't buried by one with 50,000.
- Velocity — install momentum relative to how long the tool has existed. A tool gaining traction fast ranks higher than one that peaked years ago.
All three factors contribute to the final score. Dispatch blocks when the top marketplace score beats CC's confidence by a meaningful margin.
Tools are grouped by type (Plugins / Skills / MCPs), up to 3 per group. Raw installs, stars, and forks are shown so you can verify the signal yourself.
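The log-scaling behind Signal can be sketched like this (the weights are illustrative assumptions, not the shipped values):

```python
import math

def signal_score(installs, stars, forks):
    """Sketch of the Signal component: log-scaled popularity so a newer
    tool with modest installs isn't buried by an older giant.
    Weights (10/5/2) are illustrative, not the shipped values."""
    return (10 * math.log10(installs + 1)
            + 5 * math.log10(stars + 1)
            + 2 * math.log10(forks + 1))

# 100x the installs yields roughly 2x the score, not 100x:
small = signal_score(500, 20, 2)
big = signal_score(50_000, 2_000, 200)
```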
No description = relevance 0. If a tool has no README or description, it can't score on relevance — only signal and velocity. It'll still appear if community adoption is strong, but with a ⚠ warning. Dispatch sends outreach to undescribed tool authors automatically.
Caveat: Dispatch surfaces tools based on community signals and task context — not a security audit. Review any tool before installing.
Free/BYOK — hits the live skills.sh marketplace and glama.ai MCP registry on each intercept (~2–4s). Relevance is scored by an LLM using the tool description.
Pro — pulls from a pre-ranked catalog built by a daily crawl across npm, skills.sh, glama.ai, and the Claude plugin registries. Tools are scored during the crawl — all three components pre-computed. At intercept time, Dispatch maps your task to the closest taxonomy leaf and returns a pure catalog query. Intercept response is <200ms, no LLM call at hook time.
Dispatch recommends from the full marketplace — installed or not. But its scores improve with better tool descriptions. Add the official marketplaces to give it more signal:
```
/plugins add anthropics/claude-plugins-official
/plugins add ananddtyagi/claude-code-marketplace
```
Browse for skills relevant to your stack:
```shell
npx skills find flutter
npx skills find supabase
npx skills find react
```

The more skills in the registry that match your work, the more often Dispatch has something useful to surface.
Dispatch uses a hierarchical MECE taxonomy with 16 top-level categories: source-control, data-storage, search-discovery, ai-ml, frontend, mobile, backend, infrastructure, delivery, integrations, identity-security, observability, testing, data-engineering, payments, documentation. Each category breaks down into subcategories and leaf nodes (e.g. data-storage → relational → postgresql).
When Haiku detects a task shift, it generates a specific label like flutter-fixing or postgres-rls-query. Dispatch maps that label to the closest taxonomy leaf — scoring token overlap against 100+ leaf nodes and their tags. The leaf drives marketplace search with precise vocabulary (e.g. postgresql maps to postgres/rls/migration/query terms), more targeted than keyword-splitting the task label directly.
Pro users get the full taxonomy path sent to the catalog — results filtered by leaf node and matching tags, sorted by pre-computed signal scores with no LLM involved.
Unknown task types are logged to unknown_categories.jsonl in the dispatch directory — if you're working in a niche stack and Dispatch consistently misses, that file tells you why.
On install, and again whenever you change working directories, Dispatch scans your project's manifest files (package.json, requirements.txt, go.mod, Cargo.toml, pubspec.yaml, etc.) to build a stack profile. Pro users' catalog results are reranked using this profile — a Flutter project gets flutter-mobile-app-dev ranked higher than a generic mobile tool even if their base scores are similar.
The stack profile lives at ~/.claude/dispatch/stack_profile.json and updates automatically.
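Stack detection reduces to checking which manifest files exist. A sketch with an illustrative subset of the manifests listed above:

```python
from pathlib import Path

# Illustrative subset of the manifest files Dispatch scans.
MANIFESTS = {
    "package.json": "node",
    "requirements.txt": "python",
    "pubspec.yaml": "flutter",
    "go.mod": "go",
    "Cargo.toml": "rust",
}

def detect_stack(project_dir):
    """Sketch: the presence of each manifest file marks the
    corresponding ecosystem in the stack profile."""
    root = Path(project_dir)
    return sorted(tag for name, tag in MANIFESTS.items() if (root / name).exists())
```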
Dispatch isn't intercepting anything
- Start a new Claude Code session after install — hooks load at startup
- Check both hooks are registered: look for `UserPromptSubmit` and `PreToolUse` entries in `~/.claude/settings.json`
- Verify your key or token: `cat ~/.claude/dispatch/config.json`
Dispatch fires but passes everything through
- This is correct behavior most of the time — it only blocks when the gap is 10+ points
- If marketplace search returns nothing, there's nothing to compare against
Proactive recommendations aren't appearing
- Start a new Claude Code session after install — hooks load at startup
- Check that Hook 1 is registered: look for `UserPromptSubmit` in `~/.claude/settings.json`
- Proactive recommendations fire only on a confirmed task shift with confidence ≥ 0.7 — if you're continuing the same topic, no output is expected
XFBA isn't catching anything
- XFBA runs on Edit and Write tool calls. It won't fire on Bash commands or file reads.
- Check that the XF Audit hook is registered: look for `PreToolUse` entries with `xf-boundary-auditor.sh`
Hook is slow
- 10s hard timeout — Claude proceeds normally if exceeded
- Pro catalog responses are <200ms; BYOK/Free search takes 2–4s
"Degraded mode" warning during install
- The `anthropic` package installed but Python can't import it (common on system Python with PEP 668 restrictions)
- Fix: `pip3 install anthropic --break-system-packages` or use a virtualenv
```shell
bash uninstall.sh
```

Removes all installed files, hook scripts, and settings.json entries automatically. Also cleans up pre-v0.9.2 installs if present.
- No `~/.claude/CLAUDE.md` modification — ToolDispatch doesn't touch your global Claude instructions
- No credential harvesting — reads only `ANTHROPIC_API_KEY` from your environment
- No shell injection — task type labels always passed as `sys.argv`, never interpolated into shell strings
- Open source — every line of every hook and Python module is in this repo; verify before installing
- 10-second hard timeout — enforced by Claude Code; ToolDispatch cannot hang your session
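The sys.argv rule in practice: a sketch of the safe list-form invocation, where the inline `-c` script stands in for the real hook script:

```python
import subprocess
import sys

def run_classifier(task_label):
    """Sketch: the task label travels as its own argv element with
    shell=False (the default for list args), so a malicious label
    can't break out into a shell command. The -c script is a
    stand-in for the real hook script."""
    result = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.argv[1])", task_label],
        capture_output=True, text=True,
    )
    return result.stdout.strip()
```

Contrast with `subprocess.run(f"classify.sh {task_label}", shell=True)`, where a label containing `;` would be executed as a second command.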
BYOK: Haiku calls go directly from your machine to Anthropic. Nothing passes through our servers.
Hosted (Free and Pro): The following data is sent to and stored at dispatch.visionairy.biz:
| Data | Stored? | Notes |
|---|---|---|
| Last ~3 messages | No | Sent for classification, discarded immediately |
| Working directory path | No | Sent for context, not stored |
| GitHub username + email | Yes | Collected via GitHub OAuth at signup |
| Task type label (e.g. `flutter-fixing`) | Yes | Stored per interception event |
| Tool intercepted + relevance scores | Yes | Tool name, CC score, marketplace score |
| Blocked / bypassed / installed | Yes | Powers your Pro dashboard |
| Stack profile (languages/frameworks) | Local only | Stored in ~/.claude/dispatch/stack_profile.json |
| XF-MEM snapshots | Local + Supabase | Pro only; your project, encrypted at rest |
We don't store conversation content. We don't sell individual user data. Aggregate, anonymized patterns (e.g. what percentage of mobile developers install Flutter skills after a Dispatch suggestion) improve catalog rankings network-wide.
Creator outreach: When the daily catalog crawl finds a skill with install activity but no description, Dispatch may open a GitHub issue on that repo asking the creator to add a description. At most once per repo per 30 days.
To delete your account and all stored data, email hello@dispatch.visionairy.biz. To stop all data sharing immediately, switch to BYOK mode.
Open source, MIT licensed. The classifier taxonomy and category mapping are the most impactful places to contribute — better category coverage means better marketplace routing for everyone.
Open an issue with:
- What task type Dispatch detected
- Whether the recommendations were relevant
- Stack you were working in
Pull requests welcome.
Every Claude Code session has the same five problems. Claude picks tools from defaults while 50,000+ purpose-built options exist. It produces code that doesn't connect — renames a function and misses every caller. It can't show you what breaks downstream when a change lands. Token hogs and peak hours drain your budget invisibly. And when sessions compact, everything you were tracking — decisions, files changed, open questions — is gone.
ToolDispatch covers all five. Dispatch is the runtime layer that ensures Claude reaches for the best tool. XFBA closes the contract gap at the edit boundary, where context is still live and the fix is near-zero cost. XSIA maps the blast radius before the change lands. XFTC keeps the session lean. XF-MEM makes compacting lossless.
One install. Everything logged. The hosted version compounds over time — it knows what tools thousands of developers actually reached for when they were doing exactly what you're doing now, and which ones they kept. Start free.
Built by Visionairy.
XF Audit is built on the Xpansion Framework — a boundary-definition methodology developed by Visionairy that applies recursive MECE branch discovery to map system boundaries at the appropriate depth for any problem.
The core idea: every system has boundaries. Every boundary has callers. Every caller is a branch. Discovery terminates when the graph is exhausted or the use case is satisfied — not before, not after. The framework enforces this discipline systematically across four boundary types: DATA (what flows), NODES (what processes), FLOW (how it moves), ERRORS (what breaks it).
Applied to code contracts in XF Audit:
| XF concept | XF Audit application |
|---|---|
| Boundary definition | Function signatures, import contracts, env vars, stubs |
| Recursive branch discovery | Cascade analysis — traces every caller of every broken boundary |
| MECE termination | Cascade stops when the call graph is exhausted, no gaps, no overlaps |
| Appropriate depth | Stage 1 always runs; Stages 2–4 escalate only when violations exist |
XF Audit is the first public application of the Xpansion Framework to AI-generated code. The same methodology powers Visionairy's system analysis, process design, and debugging tools across all projects.
claude-code-hooks — the most complete public reference for Claude Code hook events. Documents 26 distinct hook types including several that most developers don't know exist: PostToolUseFailure, PreCompact/PostCompact, WorktreeCreate/WorktreeRemove, TaskCreated/TaskCompleted, CwdChanged, FileChanged. ToolDispatch currently uses 3 of these (UserPromptSubmit, PreToolUse, Stop). If you're building hooks, start here.
There is no dedicated hook registry today — no glama.ai or Smithery equivalent for hook-based tools. Skills have skills.sh. MCPs have glama and Smithery. Hooks have nothing. ToolDispatch plans to be the first catalog to index hook-based tools as the pattern grows.
ToolDispatch's own codebase is monitored by XF Audit during development. Every edit Claude makes to Dispatch is checked for contract breaks before it lands.
In practice, this meant:
- The arity checker caught 12 real violations during an eng review pass — functions being called with the wrong number of arguments across the codebase, all silently waiting to throw TypeErrors at runtime.
- The silent exception checker (added after a production incident) caught the pattern that caused 99 minutes of cron work to go to /dev/null — a bare `except Exception` that printed a warning but reported success regardless.
- The stub checker surfaced unimplemented functions with active callers before they ever reached a user session.
We eat our own cooking. The tool that ships with ToolDispatch is the tool we use to build ToolDispatch.
With TypeScript and Dart scanner support, XFBA also monitors LC-Access (React Native — 28 TS modules indexed) and Perimeter (Flutter — 49 Dart modules indexed) during development. Every edit Claude makes across all three codebases is checked before it lands.
- Hosted endpoint (dispatch.visionairy.biz)
- PreToolUse interception — blocks on 10+ point gap
- Category-first routing — 16 MECE categories
- Pre-ranked catalog — daily cron, signal-scored (installs/stars/forks/freshness)
- Stack detection — auto-detects languages/frameworks from manifest files
- Pro dashboard — interception history, block rate, install conversions, quota
- Install conversion tracking — detects when users install suggested tools
- Creator outreach — GitHub issues for undescribed skills (max 1/repo/month)
- Slack notifications — signup, upgrade, conversion, daily digest, cron completion
- `/dispatch status` command
- Proactive recommendations — grouped by type (Plugins/Skills/MCPs) at task shift
- Session digest — Stop hook shows what Dispatch did each session
- `/xfa-refactor start/end` — Refactor Mode for XF Audit
- TypeScript, TSX, and Dart scanner support — XFBA/XSIA cover React Native and Flutter
- XFTC — token control (MCP overhead, peak hours, context fill, model coaching)
- XF-MEM — session memory (auto-snapshot, semantic search, 90% failsafe)
- `/dispatch pause/resume` — disable hooks mid-session without uninstalling
- `/dispatch stack` — show detected project stack
- `/dispatch why` — explain last block decision
- `/dispatch ignore [tool]` — permanent per-tool exclusion
- `/dispatch feedback good/bad` — explicit recommendation signal
- `/xfa pause/resume` — disable XF Audit blocking mid-session
- `/xfa report` — session repair summary
- `/xfa clear` — clear stale violations
- skills.sh distribution (`npx skills add ToolDispatch/Dispatch`)
- CC marketplace submission
- Weekly new-tool digest email for Pro users
- Aggregate insights API (category trends, CC gap analysis)