A single nervous system for your projects and your AI coding team.
Mission Control lets you message your AI coding team from your phone. You text a Discord bot — @mc fix the auth bug in my-app, branch off main — and Claude does the work on your laptop, makes the change on a new branch, and DMs you back with what it did. You watch its reasoning stream in live, like a chat reply that keeps writing itself.
Everything happens locally. The Discord bot is just the messenger; your code never leaves your machine. Runs on Mac, Linux, or Windows (via WSL2). $0/month. No cloud lock-in.
- A small background program (mcd) runs on your laptop. On Mac it's a launchd agent; on Linux/WSL2 it's a systemd user unit. You install it once and forget it. mcd watches your Discord bot for DMs. When you message it, it parses what you want — what to work on, in which project, off which branch.
- It runs Claude on the project locally, using your Claude Code subscription via a long-lived, auto-refreshing OAuth credential. After claude login, run mc auth bootstrap once to copy credentials to a daemon-readable file at ~/.mc/credentials.json; the daemon refreshes tokens automatically against https://claude.ai/v1/oauth/token. The agent has full repo access — file reads, edits, shell, git.
- Claude's reasoning streams back to Discord. The bot reacts 👀 on your message so you know it saw it; the first text delta posts a streaming anchor that subsequent deltas roll into via edits. Tool-use markers are discrete messages, and a final summary closes the run with the diff stat and the new branch name.
- A web dashboard at http://localhost:7777 mirrors the same stream live. Useful for long-running tasks. An optional Cloudflare Tunnel exposes it at a public HTTPS URL so the iPhone Discord client can deep-link in.
That's the whole loop. Markdown files in ~/.mc/memory/ give the agent project context across runs (a hybrid keyword + vector search picks the relevant chunks for each prompt). SQLite tracks task history.
From your iPhone Discord, DM @mc work on the auth bug in mission, branch off main. The daemon receives the gateway event, forks a branch, spawns an agent against ~/code/mission, and streams the model's reasoning back as a rolling-edit reply (text deltas) plus discrete messages for each tool call and a final summary with diff stat + branch name. The web dashboard mirrors the same events via SSE.
Build status: Slices 1–32a merged on main, plus the config-in-DB cut (Slice A foundation + Slice B "the cut" — the daemon now reads creds and operational settings exclusively from SQLite via resolveConfig, with a probeAndRegister lifecycle and atomic batch credential writes). 950 offline tests across 85 files, all green on macOS + linux-x64 + linux-arm64 in CI. See CHANGELOG.md for per-slice detail.
Everything lives in one Node.js process on your machine. The dashboard isn't a separately deployed web app — it's a Hono HTTP+SSE server bound to 127.0.0.1:7777, served by the same daemon that runs Discord, the agent runner, and SQLite. If the daemon is off, the dashboard is off.
flowchart TB
iphone[iPhone Discord]
browser[Web browser]
cf[Cloudflare Tunnel<br/>transport only · no hosting]
subgraph host [Mac · Linux · Windows-via-WSL2 · single Node process]
direction TB
config[Config · Logger · EventBus<br/>cross-cutting]
subgraph svc [Lifecycle-managed services]
direction LR
ds[DataStore<br/>SQLite + WAL]
mem[MemoryProvider<br/>markdown → index]
rec[Reconcile<br/>orphan recovery]
dash[DashboardServer<br/>127.0.0.1:7777]
disc[DiscordAdapter<br/>conditional]
end
subgraph col [Collaborators]
direction LR
orch[TaskOrchestrator]
queue[MessageQueue<br/>per-lane FIFO]
agent[AgentRunner<br/>Claude Agent SDK]
embed[Embedder<br/>AI Gateway]
end
config --- svc
config --- col
svc --- col
end
md[(~/.mc/memory/<br/>repo/.mc/memory/<br/>markdown · git)]
db[(~/.mc/state.db<br/>derived index)]
iphone --> cf
browser --> cf
cf --> dash
browser -. local .-> dash
mem --> md
mem --> db
ds --> db
classDef ext fill:#fff,stroke:#888,stroke-dasharray:3 3
classDef store fill:#f0f7ff,stroke:#2563eb
class iphone,browser,cf ext
class md,db store
The dashboard listens on 127.0.0.1 only — even on the local network, no other device can reach it directly. The only path in from outside is the Cloudflare Tunnel, which terminates TLS at Cloudflare and forwards through an outbound-only cloudflared connection back to localhost:7777. No inbound ports open on your machine.
Truth flows one way. Markdown files are the source of truth for knowledge. SQLite is a derived index that can be rebuilt at any time via MemoryProvider.reindex(). Operational state (tasks, runs, messages, sessions) lives only in SQLite — that's not knowledge, it's process bookkeeping.
What actually happens when you DM the bot:
sequenceDiagram
autonumber
participant U as You (iPhone)
participant D as DiscordAdapter
participant O as TaskOrchestrator
participant Q as MessageQueue<br/>(per-lane FIFO)
participant G as Git (simple-git)
participant R as AgentRunner
participant SDK as Claude Agent SDK
participant B as EventBus
participant Dash as DashboardServer
U->>D: DM "@mc work on auth bug in mission, branch off main"
D->>D: parseDiscordCommand · ownerCheck · seenMessages dedupe
D-->>U: 👀 react on user's message (lightweight ack)
D->>O: create(CreateTaskCommand)
O->>B: task.{id}: status=queued
O->>Q: enqueue runTask on lane (channelId)
Q->>O: dequeue → runTask
O->>B: task.{id}: status=running
O->>G: forkBranch(repo, base="main")
O->>R: run(AgentRunInput)
R->>SDK: query(prompt, options)
loop streaming
SDK-->>R: RawSdkMessage
R-->>O: AgentEvent (translate)
O->>B: task.{id}: kind=agent
B-->>D: subscriber → first text delta posts the lazy anchor;<br/>subsequent deltas roll into edits; tool calls post fresh lines
B-->>Dash: subscriber → SSE frame
Dash-->>U: (web) live event stream
end
SDK-->>R: result(success)
R-->>O: AgentEvent: completed
O->>G: diffStat(base..HEAD)
O->>B: task.{id}: status=posting_summary
D-->>U: final summary message + diff stat
O->>B: task.{id}: status=completed
Two things worth noting:
- Per-lane FIFO. MessageQueue keys by queueLaneId (the Discord channel id for v1). Two rapid DMs to the same channel run sequentially — the second's first event arrives only after the first's completed settles. Different channels run in parallel.
- The orchestrator never imports discord.js or better-sqlite3. It receives CreateTaskCommand from any channel adapter, persists rows through repo interfaces, and publishes events on the bus. Discord and the dashboard subscribe; they don't dictate. That inversion is what lets a future Telegram or Slack adapter plug in without changing the orchestrator.
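The per-lane ordering above can be sketched as a promise chain keyed by lane. This is a minimal illustration, not the real MessageQueue in @mc/core/queue — the class name and shape here are hypothetical:

```typescript
type Job = () => Promise<void>;

class LaneQueue {
  // Each lane's value is the tail of its promise chain.
  private tails = new Map<string, Promise<void>>();

  enqueue(laneId: string, job: Job): Promise<void> {
    const tail = this.tails.get(laneId) ?? Promise.resolve();
    // Run the job only after the lane's predecessor settles,
    // whether it resolved or rejected.
    const next = tail.then(job, job);
    // Swallow this job's failure in the stored tail so one bad job
    // doesn't poison the lane for later messages.
    this.tails.set(laneId, next.catch(() => {}));
    return next;
  }
}
```

Two enqueues on lane "a" run strictly in order while a job on lane "b" proceeds in parallel — the same behavior the DM flow relies on.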
You drop markdown files into ~/.mc/memory/ (global) or <repo>/.mc/memory/ (per-project). A chokidar watcher reindexes on every change. The agent system prompt gets the top-K hits from a hybrid FTS + vec search — keyword and semantic, fused via Reciprocal Rank Fusion.
flowchart LR
A[markdown file<br/>add / change / unlink] --> B[chokidar<br/>awaitWriteFinish 250ms]
B --> C[serialize gate<br/>per scope+relPath FIFO]
C --> D[parseFrontmatter<br/>gray-matter]
D --> E[chunk on H2/H3<br/>1200-token cap<br/>100-token overlap]
E --> F{content_hash<br/>cache hit?}
F -- yes --> G[skip embed]
F -- no --> H[embed batch<br/>OpenAI via AI Gateway]
H -- ok --> I[upsert tx]
H -- fail --> J[upsert tx + DLQ]
G --> I
I --> K[(memory_index<br/>memory_fts<br/>memory_vec + sidecar)]
J --> L[(memory_embed_queue<br/>retry next reindex)]
J --> K2[(memory_index<br/>memory_fts only)]
classDef sink fill:#f0f7ff,stroke:#2563eb
class K,K2,L sink
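The chunk-on-heading step in the pipeline above can be sketched as a pure function. This is an illustrative simplification: it splits on H2/H3 only, and omits the 1200-token cap and 100-token overlap the real chunker applies:

```typescript
interface Chunk {
  heading: string;
  body: string;
}

// Split markdown into one chunk per H2/H3 section; content before the
// first heading lands in a "(preamble)" chunk.
function chunkOnHeadings(markdown: string): Chunk[] {
  const chunks: Chunk[] = [];
  let heading = "(preamble)";
  let body: string[] = [];
  const flush = () => {
    const text = body.join("\n").trim();
    if (text.length > 0) chunks.push({ heading, body: text });
    body = [];
  };
  for (const line of markdown.split("\n")) {
    if (/^#{2,3}\s/.test(line)) {
      flush(); // close the previous section
      heading = line.replace(/^#{2,3}\s+/, "");
    } else {
      body.push(line);
    }
  }
  flush();
  return chunks;
}
```

Each resulting chunk is what gets content-hashed: an unchanged section hashes to the same value and skips the embed call entirely.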
Six invariants the reindex pipeline holds:
- Embed-outside-tx. The network call to AI Gateway happens before any SQLite write opens.
- All-or-nothing tx. Within one (scope, relPath) reindex: DELETE memory_vec → DELETE memory_vec_meta → DELETE memory_index → INSERT memory_index → INSERT memory_vec → INSERT memory_vec_meta runs as one transaction. A crash mid-flight rolls everything back.
- Enqueue-in-tx. When the embedder fails, the memory_index row still inserts (FTS keeps working), and the index_id enqueues to memory_embed_queue in the same transaction.
- Retry timing. Failed embeds retry on the next reindex of that file (no periodic worker yet — that's deferred).
- No-embedder-no-enqueue. When the daemon runs with no AI_GATEWAY_API_KEY, vec is disabled entirely and no DLQ rows accumulate.
- Unlink-cleans-vec. When a markdown file is deleted, deleteVecForPath runs before deleteForPath so vec rows clean up while the index_id is still resolvable.
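The all-or-nothing property can be illustrated with a toy snapshot-rollback transaction over an in-memory store (an analogy for SQLite's behavior, not the actual datastore code — the table names are borrowed from the invariant above):

```typescript
type Rows = Map<string, string[]>;

// Snapshot the store, run the mutation, and restore everything if it
// throws — either every write lands or none do.
function withTransaction(store: Rows, fn: (s: Rows) => void): void {
  const snapshot = new Map([...store].map(([t, rows]) => [t, [...rows]]));
  try {
    fn(store);
  } catch (err) {
    store.clear();
    for (const [t, rows] of snapshot) store.set(t, rows);
    throw err;
  }
}
```

A crash between the index insert and the vec insert leaves the old rows fully intact, which is exactly what the invariant promises.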
flowchart LR
Q[search query] --> F[searchFts<br/>BM25, top-K]
Q --> V[embed query]
V --> N[searchVec<br/>cosine KNN, top-K]
F --> R[rrfFuse<br/>k=60]
N --> R
R --> O[ordered hits<br/>origin: fts / vec / fused]
classDef src fill:#fff7e6,stroke:#d97706
class F,N src
rrfFuse(ftsHits, vecHits, { k: 60, limit: 10 }) scores each hit as Σ 1/(k + rank) across whichever lists it appears in. Tiebreaks are deterministic: FTS-only > vec-only > lower FTS rank > lower vec rank > lexicographically ascending relPath. The returned rank is the negated score, so the "lower is better" contract callers expect from BM25 holds across the fused list.
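The scoring core can be sketched as follows — a simplified stand-in for the real rrfFuse (the Hit shape is assumed, and the tiebreak chain here is an approximation of the one described above):

```typescript
interface Hit {
  relPath: string;
  rank: number; // 0 = best
}

function rrfFuse(fts: Hit[], vec: Hit[], opts = { k: 60, limit: 10 }) {
  const score = new Map<string, number>();
  const ftsRank = new Map<string, number>();
  const vecRank = new Map<string, number>();
  for (const h of fts) {
    ftsRank.set(h.relPath, h.rank);
    score.set(h.relPath, (score.get(h.relPath) ?? 0) + 1 / (opts.k + h.rank));
  }
  for (const h of vec) {
    vecRank.set(h.relPath, h.rank);
    score.set(h.relPath, (score.get(h.relPath) ?? 0) + 1 / (opts.k + h.rank));
  }
  const MISS = 1e9; // sentinel rank for "not in this list"
  const keys = [...score.keys()].sort((a, b) => {
    const d = score.get(b)! - score.get(a)!; // higher fused score wins
    if (d !== 0) return d;
    const aFtsOnly = ftsRank.has(a) && !vecRank.has(a);
    const bFtsOnly = ftsRank.has(b) && !vecRank.has(b);
    if (aFtsOnly !== bFtsOnly) return aFtsOnly ? -1 : 1; // FTS-only first
    const fr = (ftsRank.get(a) ?? MISS) - (ftsRank.get(b) ?? MISS);
    if (fr !== 0) return fr; // lower FTS rank
    const vr = (vecRank.get(a) ?? MISS) - (vecRank.get(b) ?? MISS);
    if (vr !== 0) return vr; // lower vec rank
    return a < b ? -1 : 1; // lex-asc relPath
  });
  // Negate so "lower is better" holds for callers, matching BM25.
  return keys
    .slice(0, opts.limit)
    .map((relPath) => ({ relPath, rank: -score.get(relPath)! }));
}
```

A hit appearing in both lists (two reciprocal-rank contributions) beats a same-rank hit appearing in only one, which is the point of the fusion.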
If AI_GATEWAY_API_KEY is missing, searchVec is skipped — search degrades to pure FTS. If embedding fails for a chunk, the chunk is still searchable via FTS until a later reindex retries the embed.
The dashboard has a collapsible bottom panel that streams every event the daemon publishes — agent SDK events, task status transitions, and raw Discord I/O — onto one filterable timeline. Click ▾ Debug to expand. Filter checkboxes hide/show categories in real time without reconnecting; selection persists in localStorage (mc.debug.v1).
Two sources feed the panel:
- Bus events — every OrchestratorEvent published on the in-process EventBus. Categories: status, agent.started, agent.tool_use, agent.tool_result, agent.assistant_text_delta, agent.assistant_thinking, agent.usage, agent.completed, agent.failed, agent.cancelled. The chatty deltas/thinking/usage start hidden by default.
- Discord I/O — raw inbound DMs and outbound sends/edits, surfaced via the optional DiscordAdapter.onActivity hook (wired by composeMcd to push into the dashboard's debug buffer). discord.inbound fires pre-filter so the panel sees DMs the adapter ignores (non-owner, redelivery dedupe, parse failures); discord.outbound.edit fires after the rate batcher coalesces, so what the panel logs is what Discord actually saw.

Backed by a 500-event global ring buffer (@mc/dashboard/debug-event-buffer.ts) and exposed at:

- GET /api/debug/events/recent?limit=N — initial bootstrap, latest-first.
- GET /api/debug/events — SSE live tail with Last-Event-ID replay and the same 15s heartbeat as the per-task stream.
The bus stays typed EventBus<OrchestratorEvent> — Discord events ride a sidecar recordDiscord() path on the buffer rather than widening the bus, so no other consumer (orchestrator, channel adapters, projection layer) needs to learn a new event shape.
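The hierarchical-topic, prefix-matching pub/sub pattern can be sketched like this (an assumed shape for illustration — the real EventBus in @mc/core/events has a richer API):

```typescript
type Handler<E> = (topic: string, event: E) => void;

class EventBus<E> {
  private subs: { prefix: string; handler: Handler<E> }[] = [];

  // A subscription to "task" receives "task", "task.1", "task.1.event", …
  subscribe(prefix: string, handler: Handler<E>): () => void {
    const sub = { prefix, handler };
    this.subs.push(sub);
    return () => {
      this.subs = this.subs.filter((s) => s !== sub);
    };
  }

  publish(topic: string, event: E): void {
    for (const s of this.subs) {
      if (
        s.prefix === "" ||
        topic === s.prefix ||
        topic.startsWith(s.prefix + ".")
      ) {
        // Synchronous dispatch: handlers run in the publisher's async
        // context, which is what lets trace propagation work for free.
        s.handler(topic, event);
      }
    }
  }
}
```

Synchronous publish is a deliberate choice here: because handlers run inline, AsyncLocalStorage-based trace context flows from publisher to subscriber with no extra plumbing, as noted in the observability section below.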
pnpm workspace, strict TypeScript (exactOptionalPropertyTypes, noUncheckedIndexedAccess, verbatimModuleSyntax, NodeNext), Biome for lint/format, vitest with pool: 'forks' for native-binding compatibility.
mission/
├── pnpm-workspace.yaml
├── tsconfig.base.json
├── biome.json
├── vitest.workspace.ts
├── .env.example # copy to .env.local for smoke probes
├── CHANGELOG.md # per-slice ship log
└── packages/
├── core/ # SQLite, schema/migrations, repos,
│ embeddings, parser, agent runner,
│ orchestrator, queue, events, projects,
│ memory chunker/frontmatter, git ops,
│ logger, config loader, testing helpers
├── memory/ # MemoryProvider interface + contract test
├── memory-markdown/ # Markdown-on-disk provider + chokidar watcher
├── discord/ # DM adapter + gateway client + rate batcher
├── dashboard/ # Hono HTTP+SSE server (read API, write delegates)
├── cli/ # `mc` binary (commander) — per-platform daemon installers
│ (launchd-installer.ts on macOS, systemd-installer.ts on Linux/WSL2)
│ dispatched from daemon.ts via `getPlatform()`
└── daemon/ # `mcd` binary — composition root, lifecycle,
single-instance lock, launchd + systemd unit generators, tunnel,
cross-platform helpers (platform.ts: getPlatform/isWSL2/findExecutable/
linuxDistroFamily)
Cross-package imports are restricted by intent. The orchestrator never imports discord.js. The dashboard never imports better-sqlite3. Coordination flows through the typed EventBus and constructor-injected interfaces.
graph TB
classDef iface fill:#f5f3ff,stroke:#7c3aed,stroke-dasharray:4 2
classDef impl fill:#ecfdf5,stroke:#059669
classDef bin fill:#fef3c7,stroke:#d97706,stroke-width:2
daemon[/"@mc/daemon"<br/>mcd · composition root/]
cli[/"@mc/cli"<br/>mc/]
core["@mc/core<br/>datastore · agent · orchestrator<br/>queue · events · embeddings<br/>parser · projects · git · config<br/>logger · testing"]
memIf(["@mc/memory<br/>interface + contract"])
memMd["@mc/memory-markdown<br/>MarkdownGitProvider"]
discord["@mc/discord<br/>DiscordAdapter<br/>(client-js subexport)"]
dash["@mc/dashboard<br/>Hono server"]
daemon --> core
daemon --> memIf
daemon --> memMd
daemon --> discord
daemon --> dash
cli --> core
cli --> daemon
cli --> memIf
cli --> memMd
dash --> core
dash --> memIf
discord --> core
memMd --> memIf
memMd --> core
class daemon,cli bin
class memIf iface
class core,memMd,discord,dash impl
| Package | Owns |
|---|---|
| @mc/core/datastore | The only place that opens SQLite. WAL mode, FTS5, sqlite-vec (capability-gated migrations), prepared statements, repos. Includes IntegrationCredentialsRepo for per-integration secrets (integration_credentials table; values stored as BLOB, never returned through list()/summary()); ConversationsRepo (open/get/close per (transport, scope_kind, scope_id), with conversations_active partial unique index enforcing one-open-per-scope), MessagesRepo (scope-routed inserts with per-kind truncation + content-hash dedup, atomic counter bump), AgentEventsRepo (cross-agent activity log keyed by trace_id/span_id/parent_span_id), PendingJobsRepo (TOCTOU-safe single-statement claim for the sleep-time-compute queue), and SeenMessagesRepo (transport-keyed dedup, replaces discord_seen_messages). |
| @mc/core/projects | ProjectRegistry — slug generation, path resolution, bulk scan walker. |
| @mc/core/agent | AgentRunner, SdkStreamAdapter (the only file that knows the Claude Agent SDK message shape), record-replay harness. cli-allowlist.ts exports makeCliBashAllowlist({ command, allow, deny? }) (Slice J3) — a generic factory the chat lane uses to build per-CLI Bash gates with shared deny-on-shell-metachar logic; composeBashGates(first, ...rest) OR-composes multiple gates (allow when any allows, first deny's message otherwise). Used by both pbrain/bash-allowlist.ts and neon/neon-allowlist.ts. |
| @mc/core/orchestrator | TaskOrchestrator + a pure transition() state machine (queued → running → posting_summary → completed\|failed\|cancelled). reconcileOnStart() for crash recovery. When CreateTaskCommand.conversationScope is supplied, the run wraps in withTraceSpan and persists assistant / tool_use / tool_result rows back to messages keyed on the scope (with <private> strip on tool_result content), threads resumeSessionId from conversations.last_session_id for cross-run continuity, and emits run.started / tool.<name> / tool_result.<ok\|error> / compact.triggered / run.<terminal> rows into agent_events via the observability EventWriter façade. |
| @mc/core/queue | MessageQueue — per-queueLaneId FIFO so two rapid messages on the same lane don't race the same Claude session. |
| @mc/core/events | EventBus — typed pub/sub, hierarchical topics (task.<id>.event, memory.changed), prefix matching. |
| @mc/core/parser | parseDiscordCommand — @mc work on <goal> in <project> [branch off <ref>] grammar, table-driven. parseRoutedMessage — same grammar minus the in <project> requirement, used when a channel_routes row supplies the project (Slice 32a). |
| @mc/core/embeddings | EmbeddingService interface + OpenAIGatewayEmbedder, DeterministicEmbeddings (test), LRU makeCachedEmbedder (10k cap, ~60MB ceiling). |
| @mc/core/config | resolveConfig(store, registry) — reads secrets and operational settings from SQLite (integration_credentials + settings), validates, returns deep-frozen ResolvedConfig with discriminated unions on discord, memory.vec, and skills (Slice J2: env-driven MC_SKILLS_ROOT, default ~/.claude/skills/, validated for existence). pbrainCli: boolean and neonCli: boolean are sibling top-level fields set by resolvePbrainCli() and resolveNeonCli() respectively (both walk $PATH with X_OK checks via a private isOnPath helper). The Neon gate trusts the CLI's own auth cache (neonctl auth → ~/.config/neonctl/, inherited via $HOME by the launchd-spawned daemon) — no parallel NEON_API_KEY env var required. Skills availability, pbrain CLI presence, and Neon CLI presence are independent capabilities. Invalid setting values surface via the StatusRegistry as system errors and fall back to defaults. |
| @mc/core/git | simple-git ops — branch fork before query, diff stat at completion. |
| @mc/core/logger | pino makeLogger — structured JSON, child loggers per module, redaction for secrets. |
| @mc/core/messaging | MessagingAdapter interface (start({ onInbound, onCommand }), stop, send, optional react) — every transport adapter implements it; Discord is the first concrete impl as of Slice H4a. InboundDispatcher — two-stage accept(msg) (sync-fast: single-statement INSERT OR IGNORE dedup, conversation upsert, fresh trace span, inbound.accepted/inbound.deduped event) → process(msg, traceId) (re-enters the trace and calls the injected onProcess handler — makeProcessRouter is the production handler as of Slice I1b). resolveRouteForScope(channelId, cleanText, deps) returns a discriminated RouteResolution (explicit-task / chat-with-project / chat-no-project / rejected); makeProcessRouter({ orchestrator, chatBridge, resolveProject*, sendReply, adapterId }) dispatches by kind: explicit work on … in <slug> → orchestrator.create(), free-form text in routed/default-DM channels → chat-bridge with the route's project, free-form text in unrouted DMs → chat-bridge with no project, unknown explicit slug → sendReply with the unknown-project hint. makeChatBridge runs the chat lane against the same AgentRunner the orchestrator uses but with a brainstorming system prompt (read-only Read/Glob/Grep filesystem tools sandboxed via additionalDirectories: [project.rootPath], optional mc-memory MCP server for the memory_search tool); inserts a runs row with NULL task_id, captures started.sessionId into conversations.last_session_id, strips the <<READY_TO_PROMOTE>> sentinel and emits chat.ready_to_promote, and on a thrown stream error emits chat.failed plus a user-visible reply (Slice K2: when the agent failure carries subtype: 'auth_required' from the runner's auth-error tagging, the bridge sends CHAT_AUTH_REQUIRED_REPLY with the exact recovery commands instead of the generic CHAT_ERROR_REPLY). makeRunCoordinator() enforces single-run-per-conversation across both lanes. makeDoCommandHandler implements /do promotion (refuses on no-active-conversation, null lastSessionId, in-flight run, or unknown explicit slug; otherwise builds a CreateTaskCommand whose orchestrator path resumes the SDK session). makeDispatcherHandlers(dispatcher, { onError }) is the shared handler shape compose + tests both wire. command(cmd) switches on cmd.name: /new//clear close the active conversation and enqueue a summarize_conversation job (refused with REPLY_RESET_REFUSED when isRunInFlight says a run is mid-flight); /do calls the wired-in promotion handler and does NOT close the conversation. stripPrivate(text) removes <private>...</private> regions with depth-tracked nesting; persistScopedMessage is the shared assistant/user-row writer used by both lanes (single source of truth for the redaction policy). trim(messages, budget) — pure turn-group-atomic trimmer (drops whole atoms oldest-first, never orphans tool_use/tool_result pairs); ships locked-in for the future non-Claude provider slice. |
| @mc/core/observability | withRootSpan(traceId, fn) / withSpan(fn) / withTraceSpan(fn) — AsyncLocalStorage-backed trace propagation. withTraceSpan opens a child span when a parent context exists, otherwise a fresh root — used by the orchestrator so dispatcher-rooted runs join the inbound trace and CLI/scheduler runs auto-root. EventBus.publish is synchronous, so handlers see the publisher's trace context with zero per-listen plumbing. EventWriter (info / warn / error / denied / compacted) — façade over AgentEventsRepo that auto-pulls traceId/spanId/parentSpanId from the current context; throws if called outside any trace as a deliberate boundary. |
| @mc/core/testing | makeOrchestratorIntegration, silentLogger, captureLogs — real DataStore + real bus + faked SDK. |
| @mc/memory | MemoryProvider interface + memoryProviderContract shared spec runnable against any provider impl. |
| @mc/memory-markdown | MarkdownGitProvider — markdown reader + chokidar watcher + reindex pipeline + RRF fuse. FTS works even on embed failure. |
| @mc/discord | DiscordAdapter implements MessagingAdapter from @mc/core/messaging (Slice H4a) — owner-allowlist gate → dispatcher.accept → 👀 react on accept → fire-and-forget dispatcher.process. Two parallel bus subscriptions (Slice I1b): task.<id>.event drives the task-lane lazy-anchor streaming + final "done"/"failed" summary; chat.<conversationId>.event drives the chat-lane lazy-anchor streaming with no terminal sentinel (chat just stops streaming when completed/failed arrives). Allow-listed channel types: DM, GuildText, PublicThread, PrivateThread, AnnouncementThread (threads opened in H4a; voice / forum / announcement-parent stay rejected). Slash commands /new, /clear, and /do registered globally on start (/do from Slice I1a, real promotion logic from I1b) — interactions ack ephemerally within Discord's 3 s deadline; /new and /clear close the active conversation and enqueue a summarize_conversation job, /do promotes the active conversation to a task that resumes the SDK session. DiscordRateBatcher (5 edits / 5s, 1/s edit cap per message, 800ms coalesce); transient edit failures route through onEditError to eventWriter.warn (Slice I1b — no longer silent). DiscordGatewayMessage and DiscordGatewayCommand are the raw gateway-shape contracts ({ messageId, authorId, channelId, channelKind, content } / { name, channelId, channelKind, authorId, args, reply }); the adapter maps channelKind to ConversationScope.kind. Optional onActivity hook surfaces raw I/O (DiscordActivityEvent, including discord.command) for the dashboard's debug panel. Production gateway client at sub-export @mc/discord/client-js. |
| @mc/dashboard | DashboardServer (Hono) — read API + write API + SSE event stream with 15s heartbeats + in-memory ring buffer (last 200 events per task) for Last-Event-ID replay. Also serves /api/debug/events{,/recent} against a global DebugEventBuffer (500-event ring) backing the collapsible debug panel. Pairing-token auth on /api/* (cookie or Authorization: Bearer); /__pair handshake exchanges a query-param token for a session cookie; /__pair/auto silently pairs loopback hosts. /api/integrations exposes credential CRUD (values never returned, with audit_log rows on every mutation). /api/integrations/:kind/status returns live StatusRegistry state (ready/unverified/disabled/error); /api/integrations/:kind/batch writes multiple (key, value) rows in a single SQLite transaction; /api/test-connection/:kind probes user-provided credentials before saving (rate-limited 5/min/kind, 1 in-flight, distinguishes 200 ok:false upstream-rejected from 502 unreachable). /api/settings is a parallel CRUD surface for daemon-wide operational knobs (anthropic.defaultModel, memory.embeddingModel, dashboard.heartbeatMs, logger.level). /api/observability/* (Slice H6) exposes the three-layer-memory operational surface: conversations (filterable list + per-conversation detail with messages + manual POST /:id/close that marks closed_at and enqueues a summarize_conversation job — each row enriched server-side with projectId/projectSlug and a derived readyToPromote flag from Slice I2), agent-events (cross-trace activity feed with traceId/agentId/action/scope/since/until filters, plus a lane=chat\|task\|system segmented filter for the post-I1 chat-vs-task split, plus /trace/:traceId for span-tree replay), agent-events/rollup?groupBy=agent\|scope\|day (token cost rollup with one prepared statement per groupBy so SQLite can use the existing single-column indexes), and POST /vacuum (manual sweep, returns VacuumStats). The React app (Vite + Tailwind v4) ships an /onboarding wizard for first-run setup — auto-redirected from / when no anthropic/apiKey row exists; Settings page with two tabs (Integrations + Settings) for ongoing edits; URL-state-backed faceted filtering. New /conversations and reborn /activity pages render the observability surface (manual close, span-tree replay, token rollup, manual vacuum). The /conversations list shows a project chip per row (the I1 project_id snapshot) and a "ready to promote" badge when the chat-bridge has emitted <<READY_TO_PROMOTE>> more recently than the latest /do; /activity colors and segments events by lane. A top-of-page SetupBanner stack surfaces three persistent alerts: setup-incomplete (deep-links to /onboarding), daemon-unreachable, and system integration errors (e.g. DB tighten failure). /api/tasks accepts source / branch / since / until / q / model filters (200-row default cap); /api/runs is a global runs feed with keyset pagination; /api/memory/search exposes the same FTS+vec RRF-fused search the agent uses. |
| @mc/cli | mc binary — project, tasks, runs, run, daemon, tunnel, integrate, routes, memory subcommands. Talks to the daemon via the dashboard's HTTP API; never opens the DB directly (except mc integrate, mc routes, and mc memory, which work with the daemon stopped). mc tasks list and mc runs list accept --source / --branch / --since / --until / --q / --model filters; mc tasks <prompt> from inside a registered repo binds to that project via resolveByPath(cwd). mc memory search <query> [--scope global\|project:<id>] runs the same FTS+vec RRF-fused search the agent gets in its system prompt; mc memory reindex [--full] rebuilds the index. Slice J1/J2: mc project add <path> runs the pbrain project-onboard skill as a one-shot Claude SDK call after the project is registered (--no-pbrain to skip); mc project pbrain-backfill [--slug X \| --all] runs the same skill retroactively. Skills are sourced from ~/.claude/skills/ (override via MC_SKILLS_ROOT); the onboarder runs only when both skills are available and the pbrain CLI is on PATH. |
| @mc/daemon | mcd binary — composition root: opens DataStore, builds collaborators, wires services into runLifecycle, acquires single-instance lock, generates the platform service unit (launchd plist on macOS, systemd user unit on Linux/WSL2), installs Cloudflare Tunnel. Includes the sleep-worker DaemonService (Slice H5a) — drains pending_jobs on a 5 s tick, hydrates a fresh TraceContext from pending_jobs.trace_id per job (cross-process trace continuation), dispatches by kind to a registered JobHandlerRegistry, and reconciles orphan claims every 60 s. Single-source-of-truth for orphan reclaim (vacuum no longer touches pending_jobs). The summarize_conversation handler (Slice H5b) reads a closed conversation's messages via messages.fetchByConversation (capped at 200 turns), calls a Haiku-class model via AgentRunner (single-turn, no tools, cwd: '', model: 'claude-haiku-4-5-20251001'), writes conversations.summary, and emits summary.computed with token totals. |
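As one concrete example from the table above, the depth-tracked <private> stripping in @mc/core/messaging can be sketched like this (an illustrative reimplementation — the real stripPrivate may handle edge cases differently):

```typescript
// Remove <private>...</private> regions, tracking nesting depth so an
// inner <private> doesn't prematurely re-enable output.
function stripPrivate(text: string): string {
  let out = "";
  let depth = 0;
  let i = 0;
  while (i < text.length) {
    if (text.startsWith("<private>", i)) {
      depth++;
      i += "<private>".length;
    } else if (text.startsWith("</private>", i)) {
      if (depth > 0) depth--;
      i += "</private>".length;
    } else {
      if (depth === 0) out += text[i];
      i++;
    }
  }
  return out;
}
```

Counting depth rather than scanning for the first closing tag is what keeps nested private regions fully redacted.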
Mission Control supports three first-class platforms: Mac (native), Linux (native), Windows (via WSL2). Linux runs on x64 and arm64.
All platforms need:
- Node 22+ and pnpm 10+ (pinned in .nvmrc and package.json#engines)
- (Optional) cloudflared if you want the mobile Discord path
The platform-specific bits below cover (1) cloudflared install and (2) native-module build deps for the rare better-sqlite3 / sqlite-vec source build.
brew install cloudflare/cloudflare/cloudflared

better-sqlite3 and sqlite-vec ship Mac prebuilds; no Xcode CLT install needed for the daemon itself.
# Debian / Ubuntu
sudo apt install build-essential python3 pkg-config
curl -fsSL https://pkg.cloudflare.com/install.sh | bash && sudo apt install cloudflared
# Fedora / RHEL / CentOS / Rocky
sudo dnf install gcc-c++ make python3 pkgconfig
# cloudflared: see https://pkg.cloudflare.com/index.html for the dnf repo
# Arch / Manjaro
sudo pacman -S base-devel python pkgconf
# cloudflared: paru -S cloudflared (AUR)

The build-tool packages are a fallback for when a prebuild-install miss forces a node-gyp rebuild (unusual Node version, musl libc, fresh source build). On Node 22 + glibc with prebuilds available, install completes without invoking gcc. Same prereqs apply inside WSL2.
WSL2 is treated as a peer platform: from Mission Control's perspective it is Linux, and every Linux improvement applies inside WSL2 unchanged.
# In PowerShell (admin), one-time install:
wsl --install -d Ubuntu-24.04
wsl --update

Then enable systemd inside the distribution (one-time edit, then restart WSL):
# Inside the WSL2 shell:
sudo tee /etc/wsl.conf > /dev/null <<'EOF'
[boot]
systemd=true
EOF
# From PowerShell: wsl --shutdown (then reopen the terminal)

After that, follow the Linux instructions above inside WSL2. Two WSL2-specific guardrails:
- Keep ~/.mc/ inside the WSL2 filesystem. Use ~/.mc (i.e. /home/<user>/.mc), not /mnt/c/Users/<user>/.mc. Chokidar's inotify watcher misses events on the 9P-mounted Windows drive — silent corruption of the memory index. mc daemon install warns at install time if MC_DATA_DIR resolves under /mnt/.
- Auto-start at Windows boot. With boot.systemd=true in /etc/wsl.conf and mc daemon install having registered the user unit + linger, the daemon comes back up when WSL2 starts, which itself starts on Windows boot once the distribution has been launched once.
pnpm install
pnpm build
pnpm test   # 1003 offline tests across 88 files (≈10s on M-series, similar on Linux)

Secrets and operational settings live in SQLite, in <MC_DATA_DIR>/state.db (default ~/.mc/state.db, override via the MC_DATA_DIR env var). The daemon reads them once at boot via resolveConfig(store, registry) and exposes the resolved config to compose. There is no longer a .env or config.toml file the daemon consults — that path was retired in Slice B.
The recommended setup path is the dashboard onboarding wizard at /onboarding. The dashboard auto-redirects there from / whenever no anthropic/apiKey row exists. The wizard walks five steps: Welcome → Anthropic (required) → Discord (optional, batched bot-token + owner-snowflake write) → AI Gateway (optional) → Done. Each credential step has a Test connection button that probes the upstream service before saving via POST /api/test-connection/:kind (rate-limited to 5/min/kind, 1 in-flight). Discord's Connect uses POST /api/integrations/:kind/batch for atomic two-row writes so the daemon respawns once, not twice. No file editing required.
For headless/CI/scripted setup, the same rows can be written via:
- CLI: mc integrate set <kind>.<key> <value> (talks to the dashboard's HTTP API)
- Direct SQL: sqlite3 ~/.mc/state.db "INSERT INTO integration_credentials (kind, key, value, updated_at) VALUES ('anthropic', 'apiKey', 'sk-...', strftime('%s','now')*1000)" — the config-watcher service notices the change within ~1.5s and triggers a daemon respawn.
Recognized credentials and settings:
| Where it lives | Key | Required for | Effect when missing |
|---|---|---|---|
| integration_credentials | anthropic.apiKey | any agent run | daemon starts; POST /api/tasks fails at attempt time |
| integration_credentials | discord.botToken + discord.ownerId | enabling the Discord adapter | adapter is omitted from the service list (both rows required) |
| integration_credentials | aiGateway.apiKey | vec-search memory | memory.vec.enabled = false, FTS still works |
| settings | anthropic.defaultModel | non-default Claude model | falls back to claude-sonnet-4-6 |
| settings | memory.embeddingModel | non-default embedding model | falls back to openai/text-embedding-3-small |
| settings | dashboard.heartbeatMs | SSE keepalive cadence | falls back to 15000; invalid values surface as a system registry error |
| settings | logger.level | log verbosity | falls back to info; invalid values surface as a system registry error |
`mcd` propagates `anthropic.apiKey` → `process.env.ANTHROPIC_API_KEY` (fallback only — primary auth is the OAuth refresher) and `aiGateway.apiKey` → `process.env.AI_GATEWAY_API_KEY` **before** `composeMcd` runs, so the Claude Agent SDK (which reads `process.env` at module-import time) sees them. Real shell-exported values (or anything already in the launchd plist's `EnvironmentVariables`) win over DB rows. Only key names are logged at startup — never values.

`CLAUDE_CODE_OAUTH_TOKEN` is set by the daemon's OAuthRefresher from `~/.mc/credentials.json` (not from the DB); users do not configure this directly. The refresher rotates the bearer whenever less than 10 minutes of validity remain, and on the first 401 a query receives.
There is no hot reload — to change any credential or setting, write the row and the daemon respawns within ~1.5s under launchd's KeepAlive / systemd's Restart=always.
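The precedence rule — real exported values win over DB rows — can be sketched as follows (helper name hypothetical; the actual propagation lives in the daemon's boot path):

```typescript
// Sketch of the env-propagation rule described above: DB rows fill
// process.env only where the shell (or launchd plist) didn't already
// export a value.
function propagateEnv(
  env: Record<string, string | undefined>,
  dbRows: Record<string, string>,
): Record<string, string | undefined> {
  for (const [key, value] of Object.entries(dbRows)) {
    if (env[key] === undefined) env[key] = value; // shell-exported values win over DB rows
  }
  return env;
}
```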
After `pnpm build`, the `mc` binary lives at `packages/cli/bin/mc`:

```
# Projects
./packages/cli/bin/mc project add ~/code/mission
./packages/cli/bin/mc project list
./packages/cli/bin/mc project scan ~/code

# Tasks
./packages/cli/bin/mc tasks list
./packages/cli/bin/mc tasks show <task-id>
./packages/cli/bin/mc tasks abort <task-id>

# Integration credentials (Discord/Telegram/WhatsApp tokens)
./packages/cli/bin/mc integrate set discord botToken <your-bot-token>   # values are never logged
./packages/cli/bin/mc integrate list                                    # prints kind/key/updated-at, never values
./packages/cli/bin/mc integrate rm discord botToken                     # removes the row (no .env fallback — Slice B retired it)

# Daemon lifecycle (platform-aware: launchd on macOS, systemd user unit on Linux/WSL2)
./packages/cli/bin/mc daemon install    # symlinks mcd into ~/.mc/bin; writes the unit file (launchd plist or
                                        # systemd .service) + starts it. On Linux/WSL2 also runs
                                        # `loginctl enable-linger` so the daemon survives logout (best-effort;
                                        # prints a sudo hint on systemd <256). Warns if MC_DATA_DIR is on
                                        # /mnt/ under WSL2 (chokidar cross-FS unreliability).
./packages/cli/bin/mc daemon start
./packages/cli/bin/mc daemon status
./packages/cli/bin/mc daemon stop
./packages/cli/bin/mc daemon restart    # `pnpm -r build` + restart-in-place (launchctl kickstart -k on macOS,
                                        # systemctl --user restart on Linux/WSL2) + poll /healthz;
                                        # --no-build / --no-wait / --timeout <ms>
./packages/cli/bin/mc daemon uninstall  # stops the service + removes the unit file (leaves user data intact)

# Cloudflare Tunnel
./packages/cli/bin/mc tunnel install --hostname mc.<your-domain>

# Direct agent run (one-shot, no Discord)
./packages/cli/bin/mc run <projectSlug> "list the top-level files"
```

To run the daemon in the foreground without the service supervisor:

```
pnpm --filter @mc/daemon build
node packages/daemon/bin/mcd
```

Logs land in `~/.mc/logs/daemon-YYYY-MM-DD.log` as line-delimited JSON. The dashboard binds `http://127.0.0.1:7777` (override with `MC_DASHBOARD_PORT`; `0` asks the OS for an ephemeral port). On first boot it generates a pairing token at `<MC_DATA_DIR>/dashboard.token` — see Securing the dashboard before opening it in a browser.
If you'd rather not memorize a port number — and especially if you run MC_DASHBOARD_PORT=0 (ephemeral) or work across multiple worktrees — install Portless and put a stable .localhost name in front of the daemon. (Linux/WSL2 hosts: skip this section — .localhost resolution is browser-side and works fine without a proxy; just open http://localhost:7777.)
```
npm install -g portless
portless alias mc 7777   # one-time: register mc.localhost → 127.0.0.1:7777
portless proxy start     # starts the HTTPS proxy daemon
```

Then open `https://mc.localhost` in any browser (or `https://mc.localhost:1355` if `portless proxy start` couldn't get sudo to bind :443 and fell back to :1355). The alias is per-machine state stored in `~/.portless/`; the repo doesn't ship a config file because Portless's run mode would fight the daemon's existing lifecycle (Discord, SQLite, agent runner all run inside `mcd`, not under Portless's process).
This changes nothing about how the daemon runs: it still binds 127.0.0.1:7777, and the Cloudflare Tunnel still forwards directly to 127.0.0.1:7777 (Portless is local-only — Tunnel never sees it). Use it, skip it, your call.
If `https://mc.localhost` won't load in one browser but `curl https://mc.localhost` works: it's almost always cached state — HSTS pinning from a prior `:1355` attempt, a stale cert error in the browser's site data, or a service worker. Try an incognito/private window first; if that loads, clear site data for `mc.localhost` in the original browser. Don't bother re-checking the server side until you've ruled this out.
To get rid of the :1355 fallback and have the proxy survive reboots, install Portless as a system LaunchDaemon. This is once per Mac, not once per project — every Portless alias on the machine (mc, plus any future ones) is served by the same daemon.
Save this to `/tmp/com.vercel.portless.plist` (adjust the two absolute paths to match where `node` and `portless` live on your machine):
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.vercel.portless</string>
  <key>ProgramArguments</key>
  <array>
    <string>/opt/homebrew/bin/node</string>
    <string>/Users/YOU/.npm-global/bin/portless</string>
    <string>proxy</string><string>start</string><string>--foreground</string>
  </array>
  <key>EnvironmentVariables</key>
  <dict>
    <key>HOME</key><string>/Users/YOU</string>
    <key>PATH</key><string>/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin</string>
  </dict>
  <key>RunAtLoad</key><true/>
  <key>KeepAlive</key><dict><key>SuccessfulExit</key><false/></dict>
  <key>ThrottleInterval</key><integer>30</integer>
  <key>StandardOutPath</key><string>/var/log/portless.out.log</string>
  <key>StandardErrorPath</key><string>/var/log/portless.err.log</string>
</dict>
</plist>
```

Then install it (run from a real Terminal so sudo can prompt):
```
plutil -lint /tmp/com.vercel.portless.plist
sudo install -m 0644 -o root -g wheel /tmp/com.vercel.portless.plist /Library/LaunchDaemons/com.vercel.portless.plist
sudo launchctl bootstrap system /Library/LaunchDaemons/com.vercel.portless.plist
sudo launchctl kickstart -k system/com.vercel.portless
```

Verify (the proxy should be listening on 443 as root):

```
sudo lsof -nP -iTCP:443 -sTCP:LISTEN
curl -sS -o /dev/null -w "%{http_code}\n" https://mc.localhost   # 502 if mcd is off, 200 if it's up
```

To uninstall: `sudo launchctl bootout system /Library/LaunchDaemons/com.vercel.portless.plist` → `sudo rm /Library/LaunchDaemons/com.vercel.portless.plist`.
Notes: KeepAlive: { SuccessfulExit: false } restarts the proxy on crash but not on a clean stop, and ThrottleInterval: 30 prevents respawn loops. Logs go to /var/log/portless.{out,err}.log. The daemon runs as root (required to bind :443 on macOS) but reads from your ~/.portless/ via the HOME env var.
Mission Control authenticates to Anthropic via your Claude Code subscription. Setup is a one-time terminal command:
- Log into Claude Code: `claude login` (opens a browser, completes the OAuth flow, writes credentials to your system keychain / Linux fs / libsecret).
- Bootstrap Mission Control: `mc auth bootstrap`.
The bootstrap command reads from your platform's credential store (macOS keychain via security, Linux fs at ~/.config/Claude/credentials.json or ~/.claude/.credentials.json, Linux libsecret via secret-tool) and writes them to a daemon-readable file at ~/.mc/credentials.json (mode 0600, atomic).
The daemon's OAuthRefresher then refreshes the OAuth token in the background — every 5 minutes it checks expiry; if less than 10 minutes remain, it POSTs to https://claude.ai/v1/oauth/token with grant_type=refresh_token to get a new bearer + new refresh token. The Claude Agent SDK reads CLAUDE_CODE_OAUTH_TOKEN from process.env directly; the refresher writes it on every successful rotation. No manual intervention required after bootstrap.
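The refresh decision reduces to a simple margin check. A hedged sketch — the constants come from the description above, but the request body shape is an assumption, not the verified wire format:

```typescript
// Sketch of the OAuthRefresher's decision logic (names hypothetical; the
// real refresher lives in the daemon). Checked every 5 minutes; refresh
// fires when under 10 minutes of validity remain.
const REFRESH_MARGIN_MS = 10 * 60 * 1000;

function shouldRefresh(expiresAtMs: number, nowMs: number): boolean {
  return expiresAtMs - nowMs < REFRESH_MARGIN_MS;
}

async function refreshBearer(refreshToken: string): Promise<unknown> {
  // grant_type=refresh_token against the endpoint named above; body shape assumed
  const res = await fetch('https://claude.ai/v1/oauth/token', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ grant_type: 'refresh_token', refresh_token: refreshToken }),
  });
  if (!res.ok) throw new Error(`refresh failed: ${res.status}`);
  return res.json(); // new bearer + new refresh token, per the flow above
}
```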
Re-bootstrap when:

- The bot DMs you a "⚠️ Claude Code auth expiring soon" or "❌ Claude Code auth failure" alert (proactive 2h heads-up via the Discord owner DM — fix it before the chat lane stalls)
- The dashboard surfaces a `Refresh failed: re-bootstrap` banner (refresh token revoked or rotated server-side, e.g. after `claude logout`)
- The dashboard's `claudeCode` integration goes `unverified` with a "Token expiring soon" detail
- You ran `claude login` again on a different account
```
mc auth bootstrap             # always overwrites ~/.mc/credentials.json with the freshest keychain values
mc auth bootstrap --dry-run   # preview the new expiry without touching the file
```

The bootstrap command always overwrites by default and prints both the old and new expiry timestamps so you can confirm the rotation. (Earlier behavior — silent skip when the file existed — was a footgun: users running the command after `claude login` were surprised to see "Already bootstrapped" and the daemon kept using the stale token.)
Why a separate file (not state.db): the launchd-spawned daemon cannot access the user's keychain (different "responsible parent" attribute than the user's terminal). The bootstrap command runs from the user's terminal where keychain access works; the daemon only reads the file at ~/.mc/credentials.json and refreshes via plain HTTPS — never touches keychain at runtime.
The daemon's config watcher also polls the mtime of `~/.mc/credentials.json` alongside the SQLite summaries, so re-running bootstrap triggers an `EX_TEMPFAIL=75` exit and a respawn within ~1.5s — no `launchctl kickstart` needed.
The dashboard is auth-gated. On first daemon boot, mcd generates a random 64-char hex token at <MC_DATA_DIR>/dashboard.token (mode 0600) and persists it across daemon restarts. Every /api/* request requires it; /healthz, /, and /app.js stay open so launchd healthchecks and the bootstrap UI keep working.
To pair a browser (one-time per browser profile):
```
# Read the token (don't share it)
cat ~/.mc/dashboard.token
```

Then either:

- Open `http://127.0.0.1:7777/`, paste the token into the pair page, click Pair. Or
- Open `http://127.0.0.1:7777/?token=<paste-here>` directly. The page consumes the token, sets the cookie, and `history.replaceState` scrubs the URL before any screenshot or bookmark could capture it.
The cookie is HttpOnly, SameSite=Strict, Path=/, 1-year max-age. JavaScript on the page can't read it (XSS resistance); the daemon's Origin check on writes is still in place under the cookie path.
For scripts and curl: use Authorization: Bearer <token> instead of the cookie:
```
TOKEN=$(cat ~/.mc/dashboard.token)
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:7777/api/integrations
```

Cloudflare Tunnel users: the tunnel forwards public traffic to `127.0.0.1:7777`. The pairing requirement now travels with it — paste the token from your laptop's `~/.mc/dashboard.token` into the remote browser the first time you connect via the tunnel hostname. The cookie is keyed to that domain; a token rotation requires re-pairing both browsers.
Rotating the dashboard token: delete the file and bounce the daemon. Every paired browser will need to re-pair.
```
rm ~/.mc/dashboard.token
mc daemon restart --no-build
cat ~/.mc/dashboard.token   # new value
```

Per-integration tokens (Discord, Telegram, WhatsApp, Anthropic, AI Gateway) live in the SQLite `integration_credentials` table — the daemon reads them via `resolveConfig` at boot and re-reads on respawn. There is no longer a `.env` fallback (Slice B retired that path).
Three ways to set one:
```
# 1. CLI (no daemon needed; works against the same SQLite file)
mc integrate set discord botToken <token>

# 2. Dashboard Settings page (browser, after pairing — see above)
#    http://127.0.0.1:7777 → Settings tab → paste, save
#    The Slice D onboarding wizard (POST /api/integrations/:kind/batch) writes
#    multi-row updates atomically — Discord botToken + ownerId in one txn.

# 3. Direct SQL (advanced; for headless/CI environments)
sqlite3 ~/.mc/state.db \
  "INSERT INTO integration_credentials (kind, key, value, updated_at)
   VALUES ('discord', 'botToken', '<token>', strftime('%s','now')*1000)"
```

**Auto-restart on rotation.** A config watcher polls `integration_credentials` and `settings` every ~1.5s. When either table's row count or `latestUpdatedAt` changes, `mcd` exits with `EX_TEMPFAIL=75` and launchd / systemd respawn it with the fresh value. No manual `mc daemon restart`.
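The watcher's change test can be sketched in a few lines (field names assumed from the `summary()` description — row count plus latest `updated_at` per table):

```typescript
// Cheap change detection: compare two summary snapshots rather than diffing rows.
type Summary = { count: number; latestUpdatedAt: number };

function summaryChanged(prev: Summary, next: Summary): boolean {
  return prev.count !== next.count || prev.latestUpdatedAt !== next.latestUpdatedAt;
}
```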
If you run mcd directly (no service supervisor), the daemon still exits 75 on a credential change but won't auto-restart — relaunch it manually.
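For that bare case, a minimal respawn-on-75 loop is all a supervisor does. In this sketch a stub function stands in for `node packages/daemon/bin/mcd` so the loop itself is demonstrable; swap in the real command on your machine:

```shell
# Bare-bones supervisor sketch: respawn on exit 75 (EX_TEMPFAIL), stop otherwise.
marker=$(mktemp)
rm -f "$marker"

run_daemon() {
  # stub for `node packages/daemon/bin/mcd`: exits 75 once
  # (simulating a credential rotation), then exits 0
  if [ ! -f "$marker" ]; then
    touch "$marker"
    return 75
  fi
  return 0
}

respawns=0
while :; do
  run_daemon
  code=$?
  if [ "$code" -eq 75 ]; then   # 75 = config changed — respawn with fresh rows
    respawns=$((respawns + 1))
    continue
  fi
  break                         # any other exit: stop for real
done
rm -f "$marker"
echo "respawns=$respawns exit=$code"
```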
**Values are write-only:** `mc integrate list` prints `kind`, `key`, `updated_at` — never the value. The dashboard's `GET /api/integrations` API returns the same shape; values cannot be read back through any HTTP surface or CLI command. To rotate or replace, write a new value over the old one.
| Variable | Default | Read by | Effect |
|---|---|---|---|
| `MC_DATA_DIR` | `~/.mc` | daemon, cli | Root directory for `state.db` (the only persistent config source), `daemon.lock`, `logs/`, `memory/` |
| `MC_DASHBOARD_PORT` | `7777` | daemon (`mcd`) | Dashboard listener port. `0` = ephemeral. Malformed values exit 70 before any resources open |
| `HOME` | (system) | core/projects | Tilde expansion for project paths |
| `MC_RECORD` | (unset) | cli `run` | When set to a path, writes the SDK message stream as JSONL for the replay harness |
| `MC_REPLAY` | (unset) | cli `run` | When set to a path, replays a JSONL fixture instead of calling the live SDK — no network |
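The `MC_RECORD` / `MC_REPLAY` pair implies a very small harness shape — record one JSON message per line, read the file back instead of hitting the network. A sketch under that assumption (the real harness lives in the repo's CLI and may differ):

```typescript
// Record-replay sketch: JSONL fixture = one JSON-encoded SDK message per line.
import { appendFileSync, readFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

function recordMessage(path: string, message: unknown): void {
  appendFileSync(path, JSON.stringify(message) + '\n');
}

function replayMessages(path: string): unknown[] {
  return readFileSync(path, 'utf8')
    .split('\n')
    .filter(Boolean)           // drop the trailing empty line
    .map((line) => JSON.parse(line));
}

// usage: record two stream events, then replay them offline
const fixture = join(tmpdir(), `mc-sketch-${process.pid}-${Date.now()}.jsonl`);
recordMessage(fixture, { type: 'text_delta', text: 'hello' });
recordMessage(fixture, { type: 'result', ok: true });
```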
Secrets (anthropic.apiKey, aiGateway.apiKey, Discord credentials) live in state.db's integration_credentials table. At boot, mcd propagates anthropic.apiKey → process.env.ANTHROPIC_API_KEY (fallback) and aiGateway.apiKey → process.env.AI_GATEWAY_API_KEY BEFORE composeMcd runs, so the Claude Agent SDK and embedding gateway (which read from process.env at module-import time) see them. Shell-exported values win over DB rows.
The Claude Code OAuth bearer (`CLAUDE_CODE_OAUTH_TOKEN`) lives in `~/.mc/credentials.json` (mode 0600, atomic write), NOT in the DB — the daemon's OAuthRefresher reads it on boot, sets the env var, and refreshes it against `https://claude.ai/v1/oauth/token` whenever less than 10 minutes of validity remain. See Authentication below for the bootstrap flow.
| Variable | Effect |
|---|---|
| `MC_INTEGRATION=1` | Includes `*.integration.test.ts` (real chokidar / fs / sqlite-vec) |
| `MC_SMOKE=1` | Master gate for `*.smoke.test.ts`. Per-probe vars: `DISCORD_BOT_TOKEN` (Discord gateway), `CLAUDE_CODE_OAUTH_TOKEN` or `ANTHROPIC_API_KEY` (Anthropic streaming), `AI_GATEWAY_API_KEY` (AI Gateway embed dim) |
The daemon's vitest.config.ts auto-loads <workspace-root>/.env.local (gitignored) before resolving its inclusion globs, so a single file can drive both pnpm dev:daemon and the smoke probes. Shell env wins over the file (override: false). Copy .env.example to .env.local and fill in the keys you need.
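One way those gates can shape the config — an assumed sketch, not the repo's actual `vitest.config.ts` (which also handles the `.env.local` loading described above):

```typescript
// Sketch: gate integration/smoke globs behind env vars by excluding them
// unless the corresponding flag is set. Both suffixes also match **/*.test.ts,
// so exclusion (not inclusion) is the reliable lever.
import { configDefaults, defineConfig } from 'vitest/config';

const exclude = [...configDefaults.exclude]; // keep node_modules etc. excluded
if (process.env.MC_INTEGRATION !== '1') exclude.push('**/*.integration.test.ts');
if (process.env.MC_SMOKE !== '1') exclude.push('**/*.smoke.test.ts');

export default defineConfig({
  test: {
    include: ['**/*.test.ts'],
    exclude,
    pool: 'forks', // native bindings (better-sqlite3, sqlite-vec) misbehave in worker threads
  },
});
```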
Every layer has a deliberate test style. The goal is fast offline runs in `--watch` mode (the full 1003-test offline suite runs in ≈10s) without sacrificing real coverage.
| Layer | Style | Tools |
|---|---|---|
| Pure logic (state machine, parser, batcher, RRF) | vitest unit, table-driven, no I/O | vitest |
| DataStore | `:memory:` SQLite per test, full real schema | better-sqlite3 |
| Markdown / fs | `mkdtemp(tmpdir())` per test, `FakeFileWatcher` for unit tests, real chokidar gated behind `MC_INTEGRATION=1` | chokidar |
| Discord REST | `msw` at the boundary; never real network | msw |
| Claude SDK streaming | Record-replay JSONL fixtures asserting structural equivalence of post-translate events | custom harness |
| Time-dependent (rate batcher, heartbeats, watcher debounce) | `vi.useFakeTimers()` | vitest |
| Provider contract | `memoryProviderContract(make)` runnable against any `MemoryProvider` impl | vitest |
```mermaid
flowchart LR
  A[pnpm test] --> B[*.test.ts<br/>1003 offline tests<br/>~10s, no network]
  A2[MC_INTEGRATION=1<br/>pnpm test] --> C[+ *.integration.test.ts<br/>real chokidar / fs / sqlite-vec]
  A3[MC_SMOKE=1<br/>pnpm test] --> D[+ *.smoke.test.ts<br/>real network probes]
  classDef def fill:#ecfdf5,stroke:#059669
  classDef opt fill:#fef3c7,stroke:#d97706
  class B def
  class C,D opt
```
| File | Verifies | Cost |
|---|---|---|
| `mcd.smoke.test.ts` | Spawns real `mcd`, drives dashboard, SIGTERM, lock release, SQLite persistence | free |
| `discord-gateway.smoke.test.ts` | Real Discord gateway boot reaches READY (catches token validity, MESSAGE_CONTENT intent regressions, SDK drift) | free |
| `anthropic-stream.smoke.test.ts` | SDK `query()` streams an assistant message + successful result (catches API key validity, region/quota, wire-format drift) | <$0.001 |
| `ai-gateway-embed.smoke.test.ts` | AI Gateway returns 1536-dim float vector for `text-embedding-3-small` (catches model deprecation that would silently break the `vec0(1536)` schema) | ~$0.00001 |
cloudflared transport intentionally not probed in v1 — the local demo loop (Discord gateway is outbound from the host machine) doesn't need it.
- Native bindings: vitest runs with `pool: 'forks'` because `better-sqlite3`, `sqlite-vec`, and chokidar's fsevents misbehave under worker threads.
- Helpers live next to the package they help test (`@mc/core/testing`, `@mc/dashboard/testing`, `@mc/memory/contract`). `vitest` is a `peerDependency` on those packages so it never reaches a runtime bundle.
- Fixtures: `packages/<pkg>/__fixtures__/`. SDK replay JSONL at `packages/core/__fixtures__/sdk-streams/`.
- File naming: `*.test.ts` colocated with source (offline). `*.integration.test.ts` (chokidar/fs) gated by `MC_INTEGRATION=1`. `*.smoke.test.ts` (real network) gated by `MC_SMOKE=1`. `*.contract.ts` for shared specs.
The v1 concrete-next-steps list is closed as of 2026-05-03. All five originally-listed items either shipped (#3 `mc memory` CLI, #5 audit log for credential writes) or were resolved as already-wired / overkill (#1 pair-url CLI, #2 DLQ-drain worker, #4 vec → agent system prompt). See the strikethrough rationale below for each, and CHANGELOG.md for the per-slice ship history. Future work moves to the deferred-slices list immediately below.
Resolved v1 todo items (kept for breadcrumb):

- ~~CLI integration for the dashboard pairing URL~~ — skipped. Slice 25's auto-pair already handles loopback (no paste required), and Slice 23 made the token stable across restarts so the tunneled / non-loopback case is a one-time `cat ~/.mc/dashboard.token` → paste. Promoting `?token=<x>` to a CLI happy path would also encourage tokens in URLs (browser history, screenshots, paste buffers — RFC 6750 specifically discourages bearer tokens in URLs); Slice 23 left the query-param form as an opt-in escape hatch on purpose.
- ~~Periodic DLQ-drain worker~~ — skipped. Failed embeds retry on the next reindex of the same file, which in practice keeps `memory_embed_queue` drained naturally — verified empty in the live `~/.mc/state.db` after months of use; no real consumer has hit a stuck embed. Re-engage if a memory file gets edited rarely AND the AI Gateway is down at exactly the moment of that edit.
- ~~`mc memory {search|reindex}` CLI~~ — done. `mc memory search <query> [--scope global|project:<id>] [-k <n>]` and `mc memory reindex [--full]` are registered.
- ~~Wire vec search into the agent system prompt~~ — already wired. `MarkdownGitProvider.search()` does RRF-fused FTS+vec when the embedder is configured (FTS-only fallback when no `AI_GATEWAY_API_KEY`), and the orchestrator calls `memory.search()` (not `searchFts()`) at `memory-injection.ts:47`. The README item was stale — Slice 11.7's `provider.search()` change implicitly closed this.
- ~~Audit log entries for credential writes~~ — done. CLI and dashboard mutations both write `audit_log` rows on every successful set/remove (see PR #51 in CHANGELOG).
After v1 is end-to-end live, deferred slices in priority order:
- Per-project bot identities — Slice 32a shipped channel routes with the single owner bot. The full Slice 32 design adds per-token gateway clients (so `mission` and `picspot` can use different Discord identities), per-`(token, channelId)` rate-batcher state, and drop-at-adapter dedup when two bots are in the same guild.
- Telegram adapter — proves the `ChannelAdapter` interface generalizes.
- Cloud projection (`@mc/projection-convex` or `@mc/projection-supabase` as a passive `EventBus` subscriber).
- Memory consolidation — daily summarization of `notes/` into `decisions/`.
- Subagent fan-out — `AgentRunner` accepts subagent definitions; `TaskOrchestrator` tracks parent/child runs.
- Token accounting aggregations — daily/weekly rollups over `runs.input_tokens|output_tokens|cost_usd`.
- Web-based task launcher (React UI only) — `POST /api/tasks` already exists end-to-end (delegates to the orchestrator, never opens the DB directly). The remaining work is purely the React form — a "new task" button + dialog that calls the existing endpoint.
For the per-slice ship history, see CHANGELOG.md.
Lessons from prior agent systems baked into the workspace:
| Anti-pattern | Guardrail |
|---|---|
| Monolithic router | `@mc/discord` may NOT import `@mc/core/orchestrator`, `@mc/core/datastore`, or `@mc/core/agent` directly. Coordination flows through `EventBus` and constructor-injected repos |
| No data access layer | Dashboard/CLI cannot import `better-sqlite3`. Reads go through `DataStore` repos; writes go through the orchestrator |
| Scattered config | `process.env.X` is read only inside `@mc/core/config` (test files may read `MC_RECORD`/`MC_REPLAY`/`MC_SMOKE`/`MC_INTEGRATION`) |
| Filesystem IPC | No `/tmp/*.json` for cross-process state. In-process `EventBus` + `MessageQueue` only |
| Tight Convex coupling | No Convex anywhere. `MemoryProvider` and `ChannelAdapter` are interfaces; impls live in their own packages |
| Decision | Choice | Rationale |
|---|---|---|
| Agent runtime | Local daemon on Mac / Linux / Windows-via-WSL2, Node + TypeScript | Direct repo access, no 10-min cap, free compute |
| v1 vertical | Discord-first slice (DM → agent → rolling-edit + dashboard) | Proves the mobile loop earliest |
| Operational store | `better-sqlite3` + WAL + `sqlite-vec` + FTS5 | Zero ops, mature, faster than hosted vector at this scale |
| Memory truth | Markdown in `~/.mc/memory/` and `<repo>/.mc/memory/` | Git-auditable, PR-reviewable, cat/grep/vim-able |
| Memory index | Derived from markdown into SQLite via chokidar reindex | Single-direction flow; SQLite never writes back |
| Realtime surface | Hono HTTP+SSE inside the daemon, bound to `127.0.0.1` | Dashboard is part of the daemon, not externally hosted |
| Mobile access | Cloudflare Tunnel proxies inbound HTTPS → `localhost:7777` | The only path to the dashboard from outside the host machine |
| Channel v1 | Discord DM-only (gateway client inside daemon). No thread creation | Discord forbids `threads.create()` on DM channels |
| Project model | Manual register CLI + `mc project scan` bulk helper | Stable metadata, explicit opt-in |
| LLM auth | CAS keeps OAuth (`CLAUDE_CODE_OAUTH_TOKEN`); non-CAS LLM calls go through Vercel AI Gateway | Subscription-tied auth for the agent runner; gateway gives provider flexibility for embeddings |
| Cloud projection | Deferred to v2 | Add only if "laptop asleep" mobile UX hurts |
- `packages/core/src/datastore/migrations/0001_init.sql` — full v1 schema with v2's `channel_routes` reserved
- `packages/core/src/datastore/migrations/0002_memory_vec.sql` — capability-gated vec0 migration
- `packages/core/src/agent/sdk-stream-adapter.ts` — the only place that knows the Claude Agent SDK message shape
- `packages/core/src/orchestrator/state-machine.ts` — pure `transition()` function, no I/O
- `packages/core/src/orchestrator/orchestrator.ts` — the orchestrator integration seam
- `packages/core/src/embeddings/openai-gateway.ts` — production embedder with six guardrails
- `packages/memory-markdown/src/provider.ts` — reindex pipeline, embed cache, dead-letter queue
- `packages/memory-markdown/src/rrf.ts` — Reciprocal Rank Fusion with deterministic tiebreaks
- `packages/discord/src/rate-batcher.ts` — 800ms coalesce + 1/s edit cap per message
- `packages/dashboard/src/server.ts` — binds 127.0.0.1, SSE with 15s heartbeat, mounts `requireAuth` middleware on `/api/*`
- `packages/dashboard/src/auth.ts` — `ensureDashboardToken` (file generation/read at `<dataDir>/dashboard.token` mode 0o600), `requireAuth` middleware (constant-time compare, hoisted `expectedBuffer`), cookie + Bearer paths
- `packages/dashboard/src/routes-pair.ts` — `/__pair` handshake (sets `mc_session` cookie: HttpOnly, SameSite=Strict, 1y max-age) + `/api/__authstatus` probe
- `packages/dashboard/src/routes-integrations.ts` — credential CRUD (zod kind allow-list, Origin-header CSRF check on writes, never echoes values)
- `packages/dashboard/src/routes-integrations-batch.ts` — atomic multi-row credential write inside a single SQLite transaction (used by the onboarding wizard's Discord step so `botToken` + `ownerId` commit together)
- `packages/dashboard/src/routes-integrations-status.ts` — read-only `GET /api/integrations/:kind/status` exposing live `StatusRegistry` state (`ready`/`unverified`/`disabled`/`error` with sanitized detail)
- `packages/dashboard/src/origin.ts` — shared `originAllowed` Origin-header CSRF check for state-changing routes
- `packages/dashboard/src/task-event-buffer.ts` — 200-event ring buffer for `Last-Event-ID` replay
- `packages/dashboard/src/debug-event-buffer.ts` — global 500-event ring buffer (bus events + Discord I/O) backing the debug panel
- `packages/dashboard/src/routes-debug.ts` — `/api/debug/events{,/recent}` SSE + JSON endpoints
- `packages/core/src/datastore/repos/integration-credentials.ts` — per-integration secrets (BLOB-stored values, `summary()` cheap snapshot for the config watcher, `batchSet()` for atomic multi-row upsert inside the caller's transaction)
- `packages/core/src/datastore/repos/settings.ts` — operational settings (string-typed values for `logger.level`, `dashboard.heartbeatMs`, etc.; same `summary()` shape as `integration_credentials` so the watcher polls both tables uniformly)
- `packages/core/src/config/resolve.ts` — `resolveConfig(store, registry)` reads creds + settings from SQLite and returns a deep-frozen `ResolvedConfig`. Invalid values (e.g. non-numeric `dashboard.heartbeatMs`) flow into the `StatusRegistry` as `system` errors and the field falls back to its default
- `packages/core/src/probe-and-register.ts` — wraps an upstream probe with a 10s timeout and writes the outcome to the registry (`unverified → ready` on success, `unverified → error` with sanitized detail on failure)
- `packages/core/src/probes.ts` — concrete probes (`probeDiscordReady`, `probeAnthropic1Token`, `probeAiGatewayDimension`) using structural client interfaces so neither `discord.js` nor `@anthropic-ai/sdk` enter `@mc/core`'s transitive deps
- `packages/daemon/src/services.ts` — `makeConfigWatcherService` polls both `integration_credentials.summary()` AND `settings.summary()` every 1.5s and triggers `EX_TEMPFAIL=75` on change so launchd/systemd respawn picks up either kind of rotation
- `packages/daemon/src/compose.ts` — composition root; takes an already-open `store` and `statusRegistry` from `mcd-main` so config can be resolved and env propagated before any module that reads `process.env` at import time runs
- `packages/daemon/src/start-daemon.ts` — lifecycle + signal handler exit semantics
- `packages/daemon/src/launchd.ts` — plist with `SessionCreate=true` for keychain unlock at boot
- `packages/daemon/src/systemd.ts` — INI user unit + `systemctl --user` / `loginctl enable-linger` builders
- `packages/daemon/src/platform.ts` — `getPlatform()`, `isWSL2()`, `findExecutable()`, `linuxDistroFamily()`
- `packages/cli/src/commands/installer-helpers.ts` — shared lock-status / healthz-poll / programPath logic both installers reuse
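The fusion step named in `rrf.ts` can be sketched in a dozen lines — `k = 60` is the conventional RRF constant, and the real implementation may differ in shape; only the deterministic id tiebreak is taken from the description above:

```typescript
// Reciprocal Rank Fusion sketch: each ranking contributes 1/(k + rank) per id;
// ties break on id so results are deterministic across runs.
function rrfFuse(ftsRanking: string[], vecRanking: string[], k = 60): string[] {
  const score = new Map<string, number>();
  for (const ranking of [ftsRanking, vecRanking]) {
    ranking.forEach((id, index) => {
      score.set(id, (score.get(id) ?? 0) + 1 / (k + index + 1));
    });
  }
  return [...score.entries()]
    .sort(([idA, sA], [idB, sB]) => sB - sA || (idA < idB ? -1 : 1))
    .map(([id]) => id);
}
```

An id that appears in both rankings outscores any single-list id at nearby ranks, which is exactly why RRF needs no score normalization between FTS and vector distances.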
MIT — see LICENSE.