Use your OpenCode Go subscription with the Claude Code CLI.
```
Claude Code CLI ─── Anthropic format ──▶ opencode-go-proxy :3456 ─── OpenAI format ──▶ OpenCode Go
                ◀── Anthropic format ───                          ◀── OpenAI format ────
```
Claude Code thinks it's talking to Anthropic. Your requests silently route through OpenCode Go's affordable open models instead.
## Run with Docker

1. Clone the repository:

   ```bash
   git clone https://github.com/ahsanjamee/opencode-go-claude-proxy.git
   cd opencode-go-claude-proxy
   ```

2. Build the image:

   ```bash
   docker build -t opencode-proxy .
   ```

3. Run the proxy, passing your API key:

   ```bash
   docker run -d --name proxy -p 3456:3456 -e OPENCODE_API_KEY=sk-YOUR-KEY opencode-proxy
   ```
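Once the container is up, a quick check against the `/health` endpoint (documented under API endpoints below) confirms the proxy is listening, assuming the default port mapping above:

```bash
# Verify the container is running and the proxy answers on the mapped port.
docker ps --filter name=proxy
curl http://localhost:3456/health
```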
## Run from source

1. Clone the repository and install dependencies:

   ```bash
   git clone https://github.com/ahsanjamee/opencode-go-claude-proxy.git
   cd opencode-go-claude-proxy
   npm install
   ```

2. Create an `.env` file from the example:

   ```bash
   cp .env.example .env
   ```

3. Edit `.env` and add your `OPENCODE_API_KEY`.

4. Start the proxy (for passing CLI flags through npm, see the note below):

   ```bash
   # For development (hot-reload):
   npm run dev

   # For production:
   npm run build
   npm start
   ```
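If you prefer flags over `.env`, npm forwards them after a double dash (the same rule noted in Troubleshooting):

```bash
# Everything after "--" goes to the proxy itself, not to npm.
npm start -- --api-key sk-YOUR-KEY --port 3456
```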
## Configure Claude Code

Create or edit the Claude Code settings file (usually at `~/.claude/settings.json`) to point it at the proxy:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:3456",
    "ANTHROPIC_AUTH_TOKEN": "sk-test",
    "ANTHROPIC_API_KEY": "",
    "ANTHROPIC_MODEL": "deepseek-v4-pro"
  },
  "theme": "dark-ansi",
  "effortLevel": "high"
}
```

Tip: You can change `ANTHROPIC_MODEL` to any model listed in the Supported Models table (like `minimax-m2.7` or `deepseek-v4-pro`). The proxy automatically routes your requests to the correct backend.
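With the proxy running and the settings in place, launch Claude Code as usual. If your Claude Code build sends experimental beta headers that the upstream rejects (the 400 errors covered in Troubleshooting), disable them at launch:

```bash
# Disable experimental beta headers before starting Claude Code.
CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1 claude
```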
## Configuration

| Flag | Environment variable | Default | Description |
|---|---|---|---|
| `--api-key <key>` | `OPENCODE_API_KEY` | — | **Required.** Your OpenCode Go API key |
| `--port <number>` | `PORT` | `3456` | Port to listen on |
| `--base-url <url>` | `PROXY_BASE_URL` | `https://opencode.ai/zen/go/v1` | Upstream base URL |
| `--timeout <ms>` | `PROXY_TIMEOUT_MS` | `60000` | Upstream idle timeout (resets on each chunk for streams) |
| `--config <path>` | `PROXY_CONFIG_PATH` | — | Path to `config.json` |
| `--help` | — | — | Show help |
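Every flag has an environment-variable twin, which is the natural fit for Docker. For example, to run on a different port with a longer timeout (port and timeout values here are illustrative):

```bash
# Same proxy, configured entirely through environment variables.
docker run -d --name proxy -p 8080:8080 \
  -e OPENCODE_API_KEY=sk-YOUR-KEY \
  -e PORT=8080 \
  -e PROXY_TIMEOUT_MS=300000 \
  opencode-proxy
```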
## Supported Models

| Model | Backend |
|---|---|
| `kimi-k2.6`, `kimi-k2.5` | OpenAI compat |
| `glm-5.1`, `glm-5` | OpenAI compat (thinking) |
| `deepseek-v4-pro`, `deepseek-v4-flash` | OpenAI compat (thinking) |
| `qwen3.6-plus`, `qwen3.5-plus` | Alibaba compat |
| `mimo-v2-pro`, `mimo-v2-omni` | OpenAI compat |
| `minimax-m2.7`, `minimax-m2.5` | Anthropic native (long context) |
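To see exactly which models your running proxy exposes, you can query the models endpoint (documented under API endpoints below):

```bash
# List the models the proxy will accept.
curl http://localhost:3456/v1/models
```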
## API endpoints

| Method | Path | Description |
|---|---|---|
| `POST` | `/v1/messages` | Main chat endpoint |
| `POST` | `/v1/messages/count_tokens` | Token count estimate |
| `GET` | `/v1/models` | Available model list |
| `GET` | `/health` | Health check |
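A minimal end-to-end check with curl, as a sketch: the body follows the standard Anthropic Messages shape, the model name is one from the table above, and the `x-api-key` value is the same placeholder used in the settings example (whether the proxy validates the client token is an assumption here; it authenticates upstream with your `OPENCODE_API_KEY`):

```bash
curl http://localhost:3456/v1/messages \
  -H "content-type: application/json" \
  -H "x-api-key: sk-test" \
  -d '{
    "model": "deepseek-v4-pro",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```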
## Troubleshooting

**Error: `OPENCODE_API_KEY is required`**
→ The key must be passed as `--api-key sk-...` or via the `OPENCODE_API_KEY` env var.
→ When using `npm run start`, you must use `-- --api-key sk-...` (double dash before the flags).

**Tools/skills not working in Claude Code**
→ Make sure you're on v1.1.0+. Earlier versions emitted the wrong SSE delta type, which broke all tools.

**Reasoning text showing up in Claude Code output**
→ Make sure you're on v1.1.0+. Reasoning is now properly emitted as hidden thinking blocks.

**400 errors from beta headers**
→ Set `CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1` before running `claude`.

**Stream times out / `AbortError` during long agent runs**
→ The proxy uses an idle timeout for streaming: the timer resets every time a chunk arrives from the model. If no data arrives for `PROXY_TIMEOUT_MS` (default 60 s), the connection is aborted.
→ For long agent tasks (e.g. Kimi k2.6 reasoning for minutes), increase the timeout in your `.env`:

```bash
PROXY_TIMEOUT_MS=300000  # 5 minutes
```

→ The proxy also sends SSE keepalive comments every 3 s to keep the Claude Code side alive, but this does not affect the upstream timeout.
## Format translation

Request translation (Anthropic → OpenAI):

| Anthropic | OpenAI |
|---|---|
| `system` string/block array | `messages[0]` with `role: "system"` |
| Text content blocks | `content` string |
| `thinking` blocks | `reasoning_content` |
| `tool_use` blocks | `tool_calls` |
| `tool_result` blocks | messages with `role: "tool"` |
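To make the request mapping concrete, here is a sketch of how an incoming Anthropic-format body corresponds to the OpenAI-format body the proxy forwards (field values are illustrative, not the proxy's byte-exact output). An Anthropic request like:

```json
{
  "model": "deepseek-v4-pro",
  "system": "You are a terse assistant.",
  "messages": [
    { "role": "user", "content": [{ "type": "text", "text": "hi" }] }
  ]
}
```

becomes, per the table above, roughly:

```json
{
  "model": "deepseek-v4-pro",
  "messages": [
    { "role": "system", "content": "You are a terse assistant." },
    { "role": "user", "content": "hi" }
  ]
}
```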
Response translation (OpenAI → Anthropic):

| OpenAI | Anthropic |
|---|---|
| `finish_reason: "stop"` | `stop_reason: "end_turn"` |
| `finish_reason: "tool_calls"` | `stop_reason: "tool_use"` |
| `finish_reason: "length"` | `stop_reason: "max_tokens"` |
| `delta.content` text | `text_delta` in `content_block_delta` |
| `delta.reasoning_content` | `thinking_delta` in `content_block_delta` |
| `delta.tool_calls[].function.arguments` | `input_json_delta` in `content_block_delta` |
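On the stream itself, this means each upstream OpenAI chunk is re-emitted as the corresponding Anthropic SSE event. A sketch for a plain text delta (the framing follows the two standard wire formats; index values are illustrative):

```text
# Upstream (OpenAI format):
data: {"choices":[{"index":0,"delta":{"content":"Hel"}}]}

# Downstream (Anthropic format):
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Hel"}}
```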
## Model mapping

Claude Code internally uses `claude-*` models (like `claude-haiku-*` for background tasks), but OpenCode Go doesn't natively support these IDs. The proxy therefore maps each Claude model to its closest OpenCode equivalent:

- `claude-haiku-*` → `qwen3.5-plus`
- `claude-sonnet-*` → `qwen3.6-plus`
- `claude-opus-*` → `qwen3.6-plus`

If Claude Code requests an entirely unknown model, the proxy safely falls back to `qwen3.6-plus` to prevent 401/400 crashes.

You can override these mappings by editing `~/.opencode-proxy/config.json`:
```json
{
  "modelAliases": {
    "claude-haiku-*": "kimi-k2.5",
    "claude-sonnet-*": "deepseek-v4-pro"
  },
  "global": {
    "defaultModel": "kimi-k2.5"
  }
}
```

## Disclaimer

This project is an unofficial community tool and is not affiliated with, endorsed by, or associated with Anthropic or OpenCode Go. Use it at your own risk.

Created by ahsanjamee.

## License

MIT