opencode-ai-coding-system is a production AI coding framework built on top of OpenCode CLI.
It turns raw AI chat sessions into a disciplined engineering workflow with planning, execution gates, review loops, and cross-session memory.
If you have used AI coding tools before, you already know the pain this solves: each session forgets context, plans drift, and code can ship without proper review. This system adds structure so work stays traceable, testable, and shippable.
AI coding is powerful, but default usage has three recurring failures:
- **Stateless sessions.** Every new session starts cold unless context is manually reconstructed.
- **Plan drift.** Work often diverges from the intended design over multiple sessions.
- **Weak quality gates.** Code is frequently produced quickly, then committed without systematic review.
This framework fixes those problems with:
- Mandatory planning artifacts in `.agents/features/{feature}/`
- Session handoff state in `.agents/context/next-command.md`
- A pipeline state machine that tracks where the feature is
- Automated review/fix loops before commit and PR
The system has a clean split between static framework code and dynamic work artifacts.
- **Framework layer** (`.opencode/`): commands, hooks, agent definitions, skills, and TypeScript pipeline logic.
- **Working layer** (`.agents/`): generated plan/task/report/review files and the cross-session handoff file.
This separation matters because framework behavior stays stable while feature work changes continuously.
The full lifecycle is:
/prime → /mvp → /prd → /pillars → /decompose → /planning → /execute → /code-loop → /commit → /pr
In practice, you run one command per session (except /commit followed by /pr as the final pair).
That keeps each context window focused and prevents accidental scope bleed.
Typical feature rhythm:
- Session 1: `/prime` → `/planning {feature}` → end
- Session 2: `/prime` → `/execute plan.md` (task 1) → end
- Session 3: `/prime` → `/execute plan.md` (task 2) → end
- Session N: `/prime` → `/code-loop {feature}` → end
- Session N+1: `/prime` → `/commit` → `/pr` → end
```mermaid
flowchart LR
    A["/prime"] --> B["/mvp"]
    B --> C["/prd"]
    C --> D["/pillars"]
    D --> E["/decompose"]
    E --> F["/planning"]
    F --> G["/execute<br/>task 1..N"]
    G --> H["/code-loop"]
    H --> I["/commit"]
    I --> J["/pr"]
    style A fill:#4A90D9,color:#fff
    style F fill:#7B68EE,color:#fff
    style G fill:#7B68EE,color:#fff
    style H fill:#E8943A,color:#fff
    style J fill:#27AE60,color:#fff
```
Session continuity is maintained by `.agents/context/next-command.md`. Every command updates that handoff file when it completes, and `/prime` reads it to tell you exactly what to do next.
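The exact schema of the handoff file is internal to the pipeline, but a hypothetical snapshot (field names illustrative, not the real format) conveys the idea:

```
# next-command.md (hypothetical example)
feature: user-auth
status: awaiting-execution
next: /execute .agents/features/user-auth/plan.md
```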
The TypeScript state machine tracks the pipeline using these states:
| State | Meaning |
|---|---|
| `awaiting-execution` | Plan exists and is ready for `/execute` |
| `executing-tasks` | Task briefs are being implemented |
| `executing-series` | Multi-phase execution is active |
| `awaiting-review` | All tasks done, ready for `/code-loop` |
| `awaiting-fixes` | Review found issues that must be fixed |
| `awaiting-re-review` | Fixes were applied, review must run again |
| `ready-to-commit` | Review is clean, safe to commit |
| `ready-for-pr` | Commit created, ready to open PR |
| `pr-open` | Terminal success state |
| `blocked` | Manual intervention required |
These states are not cosmetic. They gate valid next actions and reduce guesswork in long feature cycles.
```mermaid
stateDiagram-v2
    state "awaiting-execution" as ae
    state "executing-tasks" as et
    state "executing-series" as es
    state "awaiting-review" as ar
    state "awaiting-fixes" as af
    state "awaiting-re-review" as arr
    state "ready-to-commit" as rtc
    state "ready-for-pr" as rfp
    state "pr-open" as po
    state "blocked" as bl
    [*] --> ae : /planning done
    ae --> et : /execute starts
    ae --> es : multi-phase /execute
    et --> ar : all briefs done
    es --> ar : all phases done
    ar --> af : review finds issues
    ar --> rtc : review clean
    af --> arr : fixes applied
    arr --> af : still has issues
    arr --> rtc : review clean
    rtc --> rfp : /commit done
    rfp --> po : /pr done
    po --> [*]
    ae --> bl : manual intervention
    et --> bl : manual intervention
```
**/prime**

Run this first in every session. It loads project context and answers three questions immediately: what stack this repo uses, what state the current feature is in, and what command should run next. It checks dirty git state, reads memory and handoff files, detects language/framework/test setup, and surfaces pending work from both the handoff file and the artifact scan.

Model: `glm-5:cloud`

```shell
opencode run "/prime"
```

**/mvp**

Interactive discovery for the product's big idea. Runs a Socratic conversation to extract, pressure-test, and articulate the product vision. Produces `mvp.md` as the compass for everything downstream.
Model: claude-opus-4-5
**/prd**

Transforms the MVP direction into a full Product Requirements Document: architecture, tech stack, API contracts, data models, and implementation phases.
Model: claude-opus-4-5
**/pillars**

Extracts infrastructure pillars from the PRD and orders them with dependency gates. Tells you what must be built first and what criteria must pass before moving on.
Model: claude-opus-4-5
**/decompose**

Per-pillar deep research that produces planning-ready spec files. Run once per pillar before feature-level planning starts.
Model: claude-opus-4-5
**/planning**

Mandatory planning command before any implementation. It runs a structured 7-phase process:
- Intent classification — what kind of work is this?
- Discovery interview — Socratic questions to surface scope and constraints
- Codebase research — explore/librarian agents search for patterns and integration points
- Design reasoning — synthesis, dependency analysis, risk assessment, approach decision
- Task decomposition — break work into atomic task briefs with wave/dependency mapping
- Gap analysis — Metis agent reviews for hidden assumptions and failure modes
- Plan preview — user approval gate before writing artifacts
Outputs:

- `.agents/features/{feature}/plan.md`: overview, decisions, task index
- `.agents/features/{feature}/task-{N}.md`: one brief per task, each 700+ lines, self-contained
This is a hard gate. /execute requires a valid planning artifact and rejects ad-hoc prompts.
**/execute**

Implements one task brief per session from the planning artifacts. Each run detects which task brief is next (based on which `task-{N}.done.md` files already exist), implements exactly that task, writes a report, and advances the pipeline state. It will not skip ahead or implement multiple tasks in one session.

Output: `.agents/features/{feature}/report.md`
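The next-brief detection can be sketched as a pure function over the feature directory's filenames. This is illustrative; the actual logic lives in `pipeline/artifacts.ts`:

```typescript
// A task N is "done" when task-{N}.done.md exists. The next pending brief
// is the lowest-numbered task-{N}.md without a .done.md counterpart.
function nextPendingTask(files: string[]): string | null {
  const done = new Set(
    files
      .map((f) => /^task-(\d+)\.done\.md$/.exec(f)?.[1])
      .filter((n): n is string => n !== undefined)
  );
  const pending = files
    .map((f) => /^task-(\d+)\.md$/.exec(f)?.[1])
    .filter((n): n is string => n !== undefined && !done.has(n))
    .map(Number)
    .sort((a, b) => a - b);
  return pending.length > 0 ? `task-${pending[0]}.md` : null;
}
```

A `null` result means every brief is done and the pipeline can move to review.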
**/code-loop**

Automated review → fix → re-review cycle. Runs until the code is clean, meaning no Critical or Major findings remain. Minor issues can be accepted explicitly.

Each iteration:

- Runs code review and writes `review-{N}.md`
- Applies fixes in severity order (Critical first)
- Re-runs review
- Continues until clean or an escape condition is hit
Model: gpt-5.3-codex
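The loop's control flow can be modeled in a short sketch. The severity labels match the documented scheme; the function names and iteration budget are illustrative assumptions, not the command's actual code:

```typescript
type Severity = "Critical" | "Major" | "Minor";
interface Finding { severity: Severity; note: string }

const severityOrder: Severity[] = ["Critical", "Major", "Minor"];

function codeLoop(
  review: () => Finding[],
  fix: (f: Finding) => void,
  maxIterations = 5 // hypothetical escape condition
): "clean" | "escaped" {
  for (let i = 0; i < maxIterations; i++) {
    const findings = review();
    // "Clean" means no Critical or Major findings remain; Minors may be accepted.
    if (!findings.some((f) => f.severity !== "Minor")) return "clean";
    // Apply fixes in severity order, Critical first.
    for (const sev of severityOrder) {
      findings.filter((f) => f.severity === sev).forEach(fix);
    }
  }
  return "escaped";
}
```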
**/commit**

Creates a conventional commit (`type(scope): description`) and performs an artifact sweep, renaming `report.md` to `report.done.md` so `/prime` correctly detects the committed state next session.

Scope stages only files relevant to the current feature. `git add -A` is forbidden.
Model: glm-5:cloud
**/pr**

Creates a feature branch if needed, pushes commits, and opens a GitHub PR with a description generated from commit history and plan artifacts.
Model: gpt-5.3-codex
**/code-review**

Technical review only. Finds bugs, security issues, and quality problems. Classifies findings as Critical (blocks commit), Major (fix before merge), or Minor (consider fixing). Reports findings only; it does not apply fixes.
Model: gpt-5.3-codex
**/code-review-fix**

Applies fixes from a review file in severity order. Use it when you want an explicit fix pass outside of the full /code-loop automation.
**/final-review**

Human approval gate before commit. Use it when you want a final checkpoint before shipping.
**/system-review**

Post-implementation meta-review. Compares the plan against the actual implementation, finds divergence, and suggests process improvements for future sessions.
Model: gpt-5.3-codex
**/council**

Multi-perspective architecture discussion for difficult technical decisions. Standalone: invoke it at any point when tradeoffs are unclear and you want structured analysis before committing to an approach.
Agents are registered in TypeScript with explicit model assignments and permission levels.
| Agent | Model | Role | Permissions |
|---|---|---|---|
| Sisyphus | claude-sonnet-4-6 | Main orchestrator, routing and workflow control | Full |
| Oracle | openai/gpt-5.2 | Read-only architecture consultant for hard decisions | Read-only |
| Metis | claude-sonnet-4-6 | Pre-planning gap analyzer, finds hidden assumptions | Read-only |
| Momus | openai/gpt-5.2 | Plan quality reviewer, rejects vague plans | Read-only |
| Hephaestus | openai/gpt-5.3-codex | Deep autonomous worker for logic-heavy tasks | Full |
| Sisyphus-Junior | openai/gpt-5.3-codex | Category-dispatched executor for `task()` calls | Full (no delegation) |
| Atlas | ollama/glm-5:cloud | Todo and progress orchestration across sessions | Full (no delegation) |
| Explore | ollama/glm-5:cloud | Internal codebase grep and pattern discovery | Read-only |
| Librarian | ollama/glm-5:cloud | External documentation and OSS example search | Read-only |
| Multimodal-Looker | ollama/glm-5:cloud | PDF, image, and diagram analysis | Vision-only |
Each agent is optimized for a specific job. Routing work to the right agent reduces token waste and improves output consistency. Explore and Librarian are cheap background agents: fire them in parallel for research. Oracle and Momus are expensive consultants: use them for decisions, not implementation. Hephaestus is the heavy implementation worker: use it when Oracle has diagnosed an issue and actual code changes need to be made, or when a task is too complex for category dispatch alone.
```mermaid
graph TD
    subgraph Orchestrators["🧠 Orchestrators"]
        S[Sisyphus\nclaude-sonnet-4-6\nFull permissions]
    end
    subgraph Consultants["🔍 Consultants — Read-only"]
        O[Oracle\ngpt-5.2\nArchitecture]
        M[Metis\nclaude-sonnet-4-6\nGap analysis]
        MO[Momus\ngpt-5.2\nPlan review]
    end
    subgraph Workers["⚙️ Workers — Full permissions"]
        H[Hephaestus\ngpt-5.3-codex\nDeep tasks]
        SJ[Sisyphus-Junior\ngpt-5.3-codex\nCategory dispatch]
        AT[Atlas\nglm-5:cloud\nTodo tracking]
    end
    subgraph Research["🔎 Research — Read-only"]
        EX[Explore\nglm-5:cloud\nInternal grep]
        LB[Librarian\nglm-5:cloud\nExternal docs]
        ML[Multimodal-Looker\nglm-5:cloud\nPDF / images]
    end
    S -->|consults| O
    S -->|pre-planning| M
    S -->|plan review| MO
    S -->|category dispatch| SJ
    S -->|hard tasks| H
    S -->|research| EX
    S -->|external docs| LB
```
Hooks run automatically and enforce discipline without user micromanagement.
- `todo-continuation`: preserves todos across context compaction so work does not disappear mid-session
- `atlas`: boulder state tracking for task orchestration across sessions
- `session-recovery`: detects errors and provides recovery guidance
- `compaction-todo-preserver`: saves todo state before context windows compact
- `background-notification`: routes background task completion events
- `agent-usage-reminder`: reminds to use Explore/Librarian before direct grep (saves tokens and gives better results)
- `command-model-router`: routes slash commands to their configured models automatically
- `rules-injector`: injects `.opencode/rules` into every session (commit hygiene, anti-patterns, state management rules)
- `comment-checker`: flags unnecessary AI-generated comments added to code
- `directory-agents-injector`: injects AGENTS.md context when navigating directories
- `directory-readme-injector`: injects directory README.md for local module context
Reserved for upcoming transforms.
- `category-skill-reminder`: ensures task dispatch includes relevant skill loading
`pipeline-hook` fires at session start, reads `next-command.md`, and emits a system reminder with the current pipeline state and the next suggested command.
When opencode.json is present at the project root with an Archon remote MCP entry,
all commands that call rag_search_knowledge_base, manage_task, or other Archon tools
connect automatically. No manual setup is required beyond having the file in place.
The Archon endpoint is http://159.195.45.47:8051/mcp. All commands degrade gracefully
if Archon is unavailable.
Skills are markdown knowledge modules injected into agents via `load_skills=["skill-name"]` during task dispatch. They provide command-specific expertise and constrain agent behavior to domain-appropriate patterns.
Available skills mirror the command set:
```
agent-browser   code-loop   code-review            code-review-fix
commit          council     decompose              execute
mvp             pillars     planning-methodology   pr
prd             prime       system-review          validation
```
```
opencode-ai-coding-system/
├── .opencode/                  # Framework (static, version-controlled)
│   ├── commands/               # 15 slash command specs (*.md)
│   ├── agents/                 # Agent registry, resolver, permissions (TypeScript)
│   ├── hooks/                  # 14 hooks organized by tier (TypeScript)
│   ├── skills/                 # 16 loadable knowledge modules (*.md)
│   ├── pipeline/               # State machine, handoff, artifact logic (TypeScript)
│   ├── config/                 # Zod schemas and category loader
│   ├── tests/                  # 512 tests (Vitest)
│   ├── rules                   # Global rules injected every session
│   └── oh-my-opencode.jsonc    # Agent and category model assignments
│
├── .agents/                    # Working artifacts (dynamic, per-feature)
│   ├── context/
│   │   └── next-command.md     # Cross-session pipeline handoff
│   └── features/{name}/
│       ├── plan.md             # Feature plan, decisions, task index
│       ├── task-{N}.md         # Task briefs (one per /execute session)
│       ├── report.md           # Execution report
│       ├── review.md           # Code review findings
│       └── *.done.md           # Completed artifact (renamed suffix)
│
├── AGENTS.md                   # System instructions loaded by OpenCode
└── README.md
```
Artifacts in .agents/features/{feature}/ use a .done.md suffix to track completion state. This is machine-detectable, not just cosmetic — /prime and the pipeline logic rely on it to infer current position accurately.
```
plan.md     → plan.done.md      (all tasks completed)
task-1.md   → task-1.done.md    (task implemented)
report.md   → report.done.md    (committed)
review.md   → review.done.md    (findings addressed)
```

If you see a file without `.done.md`, that stage is still in progress.
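The suffix convention makes stage inference a simple filename check. A hedged sketch (the stage labels here are illustrative; the real inference lives in the pipeline's artifact logic):

```typescript
// Infer a feature's rough position from which artifacts exist
// and which have been renamed with the .done.md suffix.
function stageOf(files: string[]): string {
  const has = (name: string) => files.includes(name);
  if (has("review.md")) return "review in progress";
  if (has("report.done.md")) return "committed";
  if (has("report.md")) return "awaiting commit";
  if (has("plan.done.md")) return "all tasks complete";
  if (has("plan.md")) return "executing tasks";
  return "not planned";
}
```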
```mermaid
flowchart LR
    P[plan.md] -->|all tasks done| PD[plan.done.md]
    T["task-{N}.md"] -->|task implemented| TD["task-{N}.done.md"]
    R[report.md] -->|committed| RD[report.done.md]
    RV[review.md] -->|findings addressed| RVD[review.done.md]
    style PD fill:#27AE60,color:#fff
    style TD fill:#27AE60,color:#fff
    style RD fill:#27AE60,color:#fff
    style RVD fill:#27AE60,color:#fff
```
Model mapping is defined in `.opencode/oh-my-opencode.jsonc`.

```
Agents:
  sisyphus          -> anthropic/claude-sonnet-4-6
  oracle            -> openai/gpt-5.2
  momus             -> openai/gpt-5.2
  metis             -> anthropic/claude-sonnet-4-6
  hephaestus        -> openai/gpt-5.3-codex
  sisyphus-junior   -> openai/gpt-5.3-codex
  librarian         -> ollama/glm-5:cloud
  explore           -> ollama/glm-5:cloud
  atlas             -> ollama/glm-5:cloud
  multimodal-looker -> ollama/glm-5:cloud

Categories (task() dispatch):
  visual-engineering -> ollama/glm-5:cloud
  quick              -> ollama/glm-5:cloud
  writing            -> ollama/glm-5:cloud
  all other          -> openai/gpt-5.3-codex
```
Core implementation in .opencode/, compiled with tsc, tested with Vitest.
| File | Purpose |
|---|---|
| `pipeline/state-machine.ts` | 10 states, valid transitions enforced |
| `pipeline/handoff.ts` | Read/write `next-command.md` |
| `pipeline/artifacts.ts` | Discover task briefs, resolve next pending |
| `pipeline/commands.ts` | Map state to suggested next command |
| `agents/registry.ts` | Source of truth for all agent definitions |
| `agents/resolve-agent.ts` | Agent resolution with fallback chains |
| `config/category-schema.ts` | Zod validation for category dispatch config |
Status: 512 tests passing, TypeScript clean.
This is the part most documentation skips — the actual wiring that makes the system run. Understanding it is what separates using the system from understanding it.
Everything runs inside OpenCode CLI (`opencode run`). OpenCode handles model API calls, session management, tool execution, and slash command parsing. This framework is not a replacement for OpenCode; it is a configuration layer on top of it.
When you run opencode in this project directory, OpenCode automatically:
- Reads `AGENTS.md` at the project root and injects it as system context for every session
- Loads `.opencode/` as the framework configuration directory
- Discovers slash commands from `.opencode/commands/*.md`
- Registers hooks from `.opencode/hooks/`
- Loads `oh-my-opencode.jsonc` for agent and category model overrides
You do not wire any of this up manually. OpenCode's convention-based discovery handles it as long as the files exist in the right places.
AGENTS.md at the project root is the primary system prompt. It defines:
- The agent identity (Sisyphus — the main orchestrator)
- Behavioral rules: when to plan, when to delegate, when to ask
- The intent routing map: what different user requests should trigger
- The delegation system: how `task()` calls work with categories and skills
- Every hard rule the agent must follow (never commit without asking, never suppress type errors, etc.)
Every session starts with this injected as context. It is what gives the AI its personality, discipline, and workflow knowledge. Without it, OpenCode would just be a generic AI chat tool.
Each file in `.opencode/commands/` is a slash command. The frontmatter at the top specifies which model it runs on:
```markdown
---
description: Prime agent with project context and auto-detect tech stack
model: ollama/glm-5:cloud
---
# Prime: Load Project Context + Stack Detection
...
```

When a user runs `/prime`, the `command-model-router` hook intercepts the message, reads the frontmatter `model:` field, and overrides the session model to `glm-5:cloud` for that command. This is how model tiering works automatically — cheap models for retrieval commands, expensive models for planning commands — without the user having to specify models manually.
The body of each command file is the actual instruction set the model follows when that command runs. It is not code — it is structured natural language that the model interprets as a workflow specification.
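The frontmatter extraction step can be sketched in a few lines. This is illustrative; the hook's actual code may differ:

```typescript
// Extract the "model:" field from a command file's YAML frontmatter.
// Returns null when there is no frontmatter or no model field.
function extractModel(commandFile: string): string | null {
  const m = /^---\n([\s\S]*?)\n---/.exec(commandFile);
  if (!m) return null;
  const line = m[1].split("\n").find((l) => l.startsWith("model:"));
  return line ? line.slice("model:".length).trim() : null;
}
```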
`oh-my-opencode.jsonc` in the project root is OpenCode's user configuration file. This framework uses it for two things:
**1. Agent model overrides**
These override the default model each named agent uses. The agent names here correspond directly to the agent definitions in .opencode/agents/registry.ts. When Sisyphus delegates work to Oracle via task(subagent_type="oracle"), OpenCode looks up oracle in this config and uses openai/gpt-5.2.
**2. Category model assignments**
```jsonc
{
  "categories": {
    "quick": { "model": "openai/gpt-5.3-codex", "provider": "openai" },
    "deep": { "model": "openai/gpt-5.3-codex", "provider": "openai" },
    "ultrabrain": { "model": "openai/gpt-5.3-codex", "provider": "openai" }
    // ...
  }
}
```

Categories are the other delegation path. Instead of routing by agent name (`subagent_type="oracle"`), you route by domain (`category="quick"`). The category system maps task domains to models. When Sisyphus runs `task(category="quick", ...)`, OpenCode looks up `quick` in this config and routes to `gpt-5.3-codex`.
The category config is loaded and validated by .opencode/config/load-categories.ts using Zod schemas. Default category definitions live in code; oh-my-opencode.jsonc merges on top as user overrides.
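The defaults-plus-overrides merge described above can be sketched without the Zod layer. The default values below are assumptions for illustration, not the framework's real defaults:

```typescript
interface CategoryConfig { model: string; provider?: string }

// Hypothetical in-code defaults; the real ones live in .opencode/config/.
const defaults: Record<string, CategoryConfig> = {
  quick: { model: "ollama/glm-5:cloud" },
  deep: { model: "openai/gpt-5.3-codex" },
};

// oh-my-opencode.jsonc entries win over in-code defaults.
function mergeCategories(
  overrides: Record<string, CategoryConfig>
): Record<string, CategoryConfig> {
  return { ...defaults, ...overrides };
}
```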
Hooks are TypeScript modules that OpenCode calls at specific lifecycle events. They hook into tool.execute.before, tool.execute.after, chat.message, and session events. The framework hooks enforce discipline automatically without user intervention:
rules-injector — When the agent reads any file (read, write, edit tools), this hook walks up the directory tree looking for AGENTS.md, .opencode/rules, and similar files. It appends their content to the file read output as <injected-context>. This means the agent always has project rules in context when it touches files, even if it did not explicitly load them.
command-model-router — Intercepts chat.message events. When it detects a slash command (/prime, /planning, etc.), it reads the corresponding commands/*.md frontmatter, extracts the model: field, and overrides the session model. This is the mechanism behind automatic model tiering.
todo-continuation — OpenCode compacts context windows when they get long. This hook fires before compaction, serializes the current todo list, and restores it after. Without this, in-progress task lists would disappear when context compresses.
category-skill-reminder — When the agent uses direct tools (edit, write, bash) instead of delegating via task(), this hook fires a system reminder: "You are doing delegatable work directly. Use task(category=..., load_skills=...) instead." This pushes the agent toward the cheaper, more appropriate delegation path.
agent-usage-reminder — Similar to category-skill-reminder. When orchestrator agents use grep or search tools directly, this hook reminds them to use the explore or librarian subagents instead. Token-efficient and higher quality.
pipeline-hook — At session start, reads .agents/context/next-command.md and emits a system reminder with the current pipeline state. This is what makes /prime able to tell you "you're at executing-tasks, task 2/4, run /execute next."
All hooks are registered through the hook registry in .opencode/hooks/index.ts and execute in priority tier order (Continuation → Session → Tool-Guard → Transform → Skill).
When Sisyphus calls `task(category="deep", load_skills=["execute"], ...)`:

1. OpenCode receives the `task` tool call
2. It looks up `"deep"` in the `oh-my-opencode.jsonc` categories and finds `gpt-5.3-codex` on `openai`
3. It spawns a new subagent session with that model
4. It loads the skills listed in `load_skills`: reads `execute/SKILL.md` and prepends it to the subagent's context
5. The subagent runs with the prompt, model, and skills injected
6. Results are returned to Sisyphus
The subagent is ephemeral — it exists only for that task. Skills are the knowledge it carries in. The category determines which model it runs on. This is the whole delegation system in one call.
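The resolution steps above reduce to a small function: category picks the model, skills become prepended context. A toy model with illustrative names (the real dispatch is inside OpenCode itself):

```typescript
interface DispatchConfig {
  categories: Record<string, { model: string }>;
  skills: Record<string, string>; // skill name → SKILL.md content
}

function buildSubagentContext(
  cfg: DispatchConfig,
  category: string,
  loadSkills: string[],
  prompt: string
): { model: string; context: string } {
  // "All other" categories fall back to gpt-5.3-codex per the model mapping.
  const model = cfg.categories[category]?.model ?? "openai/gpt-5.3-codex";
  const skillText = loadSkills.map((s) => cfg.skills[s] ?? "").join("\n");
  return { model, context: `${skillText}\n${prompt}` };
}
```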
```mermaid
sequenceDiagram
    actor User
    participant OC as OpenCode CLI
    participant AGENTS as AGENTS.md
    participant CMR as command-model-router<br/>(hook)
    participant CMD as /planning command<br/>(claude-opus-4-5)
    participant EX as explore subagent<br/>(glm-5:cloud)
    participant MT as metis subagent<br/>(claude-sonnet-4-6)
    participant FS as .agents/features/
    participant PH as pipeline-hook
    User->>OC: opencode run "/planning user-auth"
    OC->>AGENTS: load → inject as system prompt
    OC->>OC: discover .opencode/commands/, hooks/, oh-my-opencode.jsonc
    OC->>CMR: chat.message fires
    CMR->>CMD: read planning.md frontmatter → override model to claude-opus-4-5
    CMD->>CMD: Phase 1-2: intent classification + discovery interview
    CMD->>EX: task(subagent_type="explore") — search codebase
    EX-->>CMD: findings returned
    CMD->>MT: task(subagent_type="metis") — gap analysis
    MT-->>CMD: gaps + risks returned
    CMD->>CMD: Phase 5-7: decompose → preview → user approves
    CMD->>FS: write plan.md + task-1.md … task-N.md
    CMD->>FS: write next-command.md (status: awaiting-execution)
    Note over User,PH: Next session starts
    User->>OC: opencode run "/prime"
    OC->>PH: session start fires
    PH->>FS: reads next-command.md
    PH-->>User: "Feature: user-auth | Status: awaiting-execution | Next: /execute"
```
This is the complete loop. OpenCode is the runtime. AGENTS.md is the brain. Commands are the workflow specs. Hooks enforce discipline automatically. oh-my-opencode.jsonc routes each piece of work to the right model. The .agents/ directory carries state between sessions.
Interactive mode is recommended for conversational commands like /planning, /mvp, and /prd.
```shell
# Open interactive TUI from project directory
opencode

# One-shot run
opencode run --model anthropic/claude-sonnet-4-6 "your message"

# One-shot with explicit working directory
opencode run --dir "/path/to/project" --model anthropic/claude-sonnet-4-6 "your message"

# Run a slash command directly
opencode run "/prime"
opencode run "/planning user-auth"
opencode run "/execute .agents/features/user-auth/plan.md"
```

Use this exact sequence for a new project adopting the framework.
If you are an LLM agent setting up this system on a new machine, follow every step below in order. Each step is self-contained and tells you exactly what to do. Do not skip steps. Do not assume anything is already installed.
Platform assumptions: Arch Linux (or any Arch-based distro), WSL2 or native Linux. Adapt pacman commands to your package manager if different (apt, brew, etc.). All config paths use ~/.config/opencode/.
**Step 1: Install OpenCode**

```shell
curl -fsSL https://opencode.ai/install | bash
```

Verify: `opencode --version` should print a version number.
**Step 2: Install oh-my-opencode**

Fetch the installation guide and follow every step in it:

```shell
curl -s https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/refs/heads/dev/docs/guide/installation.md
```

This will walk you through:
- Installing the plugin (`bunx oh-my-opencode install`)
- Connecting providers (Anthropic, OpenAI, Google, etc.)
- Configuring model assignments in `~/.config/opencode/oh-my-opencode.json`
After install, verify the plugin is in `~/.config/opencode/opencode.json`:

```json
{ "plugin": ["oh-my-opencode@latest"] }
```

**Step 3: Install opencode-supermemory**

```shell
bunx opencode-supermemory@latest install --no-tui
```

Then add your API key to `~/.config/opencode/supermemory.jsonc` and add this to `~/.config/opencode/oh-my-opencode.json`:

```json
{ "disabled_hooks": ["anthropic-context-window-limit-recovery"] }
```

Verify: `cat ~/.config/opencode/supermemory.jsonc` should show an `sm_...` API key.
**Step 4: Anthropic multi-account rotation**

This allows automatic switching between Claude accounts before rate limits hit.
4a. Add the plugin to `~/.config/opencode/opencode.json`:

```json
{ "plugin": ["oh-my-opencode@latest", "opencode-supermemory", "oc-anthropic-multi-account@latest"] }
```

4b. Disable the built-in Anthropic plugin (add to `~/.bashrc`):

```shell
export OPENCODE_DISABLE_DEFAULT_PLUGINS=true
```

4c. Install Linux-native bun (needed for the account CLI; do not use Windows bun in WSL):
```shell
# Install unzip first if missing
pacman -S --noconfirm unzip   # or: apt install unzip

# Install Linux bun
curl -fsSL https://bun.sh/install | bash

# Binary is at ~/.bun/bin/bun
```

Add to `~/.bashrc`:

```shell
export BUN_BINARY="$HOME/.bun/bin/bun"
export PATH="$HOME/.bun/bin:$PATH"
```

4d. Clone the account management CLI:
```shell
git clone https://github.com/gaboe/oc-anthropic-multi-account.git ~/oc-anthropic-multi-account
cd ~/oc-anthropic-multi-account
~/.bun/bin/bun install
~/.bun/bin/bun add @effect/platform@latest @effect/cluster@latest @effect/printer@latest @effect/printer-ansi@latest
```

4e. Add each Anthropic account via OAuth (one per Max subscription):
```shell
cd ~/oc-anthropic-multi-account
~/.bun/bin/bun src/cli.ts add primary     # first account
~/.bun/bin/bun src/cli.ts add fallback1   # additional accounts
```

Check status:

```shell
~/.bun/bin/bun src/cli.ts usage
```

Tokens are stored at `~/.config/opencode/anthropic-multi-account-accounts.json`.
**Step 5: Google Antigravity auth**

⚠️ Warning: this uses Google's internal IDE endpoint via OAuth. Multiple users have reported account bans. You accept all risk by proceeding.

This gives access to Claude Opus + Gemini 3 models via Google's internal quota, with multi-account rotation.

5a. Add the plugin to `~/.config/opencode/opencode.json`:
```json
{
  "plugin": ["oh-my-opencode@latest", "opencode-supermemory", "oc-anthropic-multi-account@latest", "opencode-antigravity-auth@latest"],
  "provider": {
    "google": {
      "models": {
        "antigravity-claude-opus-4-6-thinking": {
          "name": "Claude Opus 4.6 Thinking (Antigravity)",
          "limit": { "context": 200000, "output": 64000 },
          "modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
          "variants": {
            "low": { "thinkingConfig": { "thinkingBudget": 8192 } },
            "max": { "thinkingConfig": { "thinkingBudget": 32768 } }
          }
        },
        "antigravity-claude-sonnet-4-6": {
          "name": "Claude Sonnet 4.6 (Antigravity)",
          "limit": { "context": 200000, "output": 64000 },
          "modalities": { "input": ["text", "image", "pdf"], "output": ["text"] }
        },
        "antigravity-gemini-3-pro": {
          "name": "Gemini 3 Pro (Antigravity)",
          "limit": { "context": 1048576, "output": 65535 },
          "modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
          "variants": { "low": { "thinkingLevel": "low" }, "high": { "thinkingLevel": "high" } }
        },
        "antigravity-gemini-3-flash": {
          "name": "Gemini 3 Flash (Antigravity)",
          "limit": { "context": 1048576, "output": 65536 },
          "modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
          "variants": { "minimal": { "thinkingLevel": "minimal" }, "low": { "thinkingLevel": "low" }, "medium": { "thinkingLevel": "medium" }, "high": { "thinkingLevel": "high" } }
        },
        "gemini-3-flash-preview": {
          "name": "Gemini 3 Flash Preview (Gemini CLI)",
          "limit": { "context": 1048576, "output": 65536 },
          "modalities": { "input": ["text", "image", "pdf"], "output": ["text"] }
        },
        "gemini-3-pro-preview": {
          "name": "Gemini 3 Pro Preview (Gemini CLI)",
          "limit": { "context": 1048576, "output": 65535 },
          "modalities": { "input": ["text", "image", "pdf"], "output": ["text"] }
        },
        "gemini-3.1-pro-preview": {
          "name": "Gemini 3.1 Pro Preview (Gemini CLI)",
          "limit": { "context": 1048576, "output": 65535 },
          "modalities": { "input": ["text", "image", "pdf"], "output": ["text"] }
        }
      }
    }
  }
}
```

5b. Disable built-in Google auth in `~/.config/opencode/oh-my-opencode.json`:
```json
{ "google_auth": false }
```

5c. Add each Google account (restart opencode first so the plugin installs, then):
```shell
opencode auth login   # choose "OAuth with Google (Antigravity)"
opencode auth login   # repeat for each additional Google account
```

**Step 6: Install OpenChamber (web UI)**

OpenChamber provides a browser-based UI with diff viewer, GitHub integration, branchable chat timeline, and mobile access.
6a. Install prerequisites:

```shell
# Node.js 20+ required
node --version   # must be 20+

# Install pnpm if missing
npm install -g pnpm
pnpm setup
source ~/.bashrc
```
6b. Install OpenChamber:

```shell
pnpm add -g @openchamber/web
```

If you get a "No global bin directory" error:

```shell
pnpm setup && source ~/.bashrc && pnpm add -g @openchamber/web
```
6c. Rebuild the native terminal module:

```shell
# Find and rebuild node-pty
find ~/.local/share/pnpm -name "node-pty" -type d | head -1 | xargs -I{} sh -c 'cd {} && npm rebuild'
```

6d. Add startup helpers to `~/.bashrc`:
```shell
# Linux-native bun (required — do not rely on Windows bun in PATH for WSL)
export BUN_BINARY="$HOME/.bun/bin/bun"
export PATH="$HOME/.bun/bin:$PATH"

# OpenChamber aliases
alias oc-ui='nohup bash -c "export BUN_BINARY=\$HOME/.bun/bin/bun; export OPENCODE_HOST=http://localhost:4096; export OPENCODE_SKIP_START=true; openchamber --port 3001" > /tmp/openchamber.log 2>&1 & echo "OpenChamber on http://localhost:3001 (PID: \$!)"'
alias oc-ui-stop='pkill -f "openchamber" && echo "stopped"'
alias oc-ui-log='tail -f /tmp/openchamber.log'
```

6e. Start OpenChamber:
```shell
source ~/.bashrc
oc-ui
```

Then open http://localhost:3001 in your browser (on WSL, use the Windows browser; localhost is bridged automatically).

Note on daemon mode: the `--daemon` flag does not work in WSL due to IPC process detachment limitations. Use the `nohup` alias above instead.
**Step 7: Clone the system and initialize memory**

```shell
git clone https://github.com/ryanjosebrosas/autonomous-coding-system-final.git
cd autonomous-coding-system-final
opencode   # TUI with full system loaded
# or: oc-ui, then open http://localhost:3001
```

Inside opencode, run:

```
/supermemory-init
```

This indexes the codebase into persistent memory so every future session starts with context.
After completing all steps, verify:

```shell
opencode --version                                             # opencode installed
openchamber --version                                          # openchamber installed
cat ~/.config/opencode/opencode.json | grep plugin             # plugins listed
cat ~/.config/opencode/oh-my-opencode.json | grep google_auth  # google_auth: false
cat ~/.config/opencode/supermemory.jsonc                       # API key present
echo $OPENCODE_DISABLE_DEFAULT_PLUGINS                         # true
echo $BUN_BINARY                                               # ~/.bun/bin/bun
curl -s http://localhost:3001/health                           # {"status":"ok",...} after oc-ui
```

| File | Purpose |
|---|---|
| `~/.config/opencode/opencode.json` | Plugin list, providers, model definitions, MCP servers |
| `~/.config/opencode/oh-my-opencode.json` | Agent model assignments, category routing, tmux, hooks |
| `~/.config/opencode/supermemory.jsonc` | Supermemory API key |
| `~/.config/opencode/anthropic-multi-account-accounts.json` | Anthropic OAuth tokens (auto-created by CLI) |
| `~/.config/opencode/antigravity-accounts.json` | Google OAuth tokens (auto-created on login) |
| `~/.bashrc` | `OPENCODE_DISABLE_DEFAULT_PLUGINS`, `BUN_BINARY`, `oc-ui` alias |
| `~/oc-anthropic-multi-account/` | Anthropic account management CLI (cloned repo) |
**Step 1: Start every session with `/prime`**

```shell
opencode run "/prime"
```

**Step 2: Define product direction (once per project)**

```shell
opencode run "/mvp"
opencode run "/prd"
opencode run "/pillars"
```

**Step 3: Research each pillar**

```shell
opencode run "/decompose <pillar-name>"
```

Repeat for each pillar in dependency order.

**Step 4: Plan a feature**

```shell
opencode run "/planning user-auth"
```

Go through the interactive interview. Approve the plan preview when it looks right.

**Step 5: Execute task briefs (one per session)**

```shell
opencode run "/execute .agents/features/user-auth/plan.md"
```

Run this across multiple sessions until all task briefs are complete. The system picks the next undone brief automatically.

**Step 6: Run the review loop**

```shell
opencode run "/code-loop user-auth"
```

**Step 7: Commit and open the PR**

```shell
opencode run "/commit"
opencode run "/pr"
```

Assume the feature name is `user-auth`.
**Session 1**

1. Run `/prime`
2. Run `/planning user-auth`
3. The system runs the discovery interview, researches the codebase, reasons through the design, and generates the plan
4. Output: `plan.md` plus `task-1.md` through `task-N.md`
5. Handoff written: next command is `/execute`

**Sessions 2 through N (one per task brief)**

1. Run `/prime`; the handoff surfaces the next task
2. Run `/execute .agents/features/user-auth/plan.md`
3. The system detects the next undone brief and implements exactly that task
4. `task-{N}.md` is renamed to `task-{N}.done.md`
5. Handoff updated

**Review session**

1. Run `/prime`
2. Run `/code-loop user-auth`
3. The loop reviews, fixes, and re-reviews until clean
4. Pipeline state advances to `ready-to-commit`

**Ship session**

1. Run `/prime`
2. Run `/commit`
3. Run `/pr`
4. Pipeline reaches `pr-open`
Each session has a deterministic next action. The handoff file carries continuity so you never need to remember where you left off.
The fastest way to use this framework well is to treat it like a CI pipeline for AI sessions:
- Planning artifacts are the source of truth for what gets built
- The handoff file is the session bridge — do not edit it manually
- The state machine is the traffic controller — it tells you what is valid next
- The review loop is the safety net — nothing ships until it is clean
Follow that model and AI work stops feeling like ad hoc chat. It becomes reproducible engineering.
```
Always first:   /prime

Project setup:  /mvp -> /prd -> /pillars -> /decompose

Per feature:
  Plan:     /planning {feature}
  Execute:  /execute .agents/features/{feature}/plan.md   (repeat per task)
  Review:   /code-loop {feature}
  Ship:     /commit then /pr

Quality tools:
  /code-review
  /code-review-fix {review.md}
  /final-review
  /system-review
  /council {topic}
```
```jsonc
{
  "agents": {
    "sisyphus": { "model": "anthropic/claude-sonnet-4-6" },
    "oracle": { "model": "openai/gpt-5.2" },
    "librarian": { "model": "ollama/glm-5:cloud" }
    // ...
  }
}
```