Summary
When Pilot is configured with the OpenCode backend, GitHub issue pickup works up to branch creation and OpenCode server startup, but execution fails immediately on the first send to OpenCode.
Environment
- Pilot: 2.100.1
- OpenCode: 1.4.6
- OS: Linux amd64
- Backend: opencode
- Working OpenCode CLI model: openai/gpt-5.4-mini
Reproduction
- Configure Pilot with executor.type: opencode.
- Configure the OpenCode server, for example:
  server_url: http://127.0.0.1:44096
  server_command: opencode serve --hostname 127.0.0.1 --port 44096
  model: openai/gpt-5.4-mini
  provider: openai
- Enable GitHub polling for a labeled issue.
- Start Pilot with pilot start --replace.
- Let Pilot pick up a GitHub issue.
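For reference, the reproduction settings above can be sketched as one config file. The key nesting and section names here are assumptions inferred from the values listed, not a confirmed Pilot config schema:

```yaml
# Hypothetical consolidated Pilot config; nesting is an assumption,
# only the key/value pairs come from the reproduction steps above.
executor:
  type: opencode
opencode:
  server_url: http://127.0.0.1:44096
  server_command: opencode serve --hostname 127.0.0.1 --port 44096
  model: openai/gpt-5.4-mini
  provider: openai
```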
Expected
Pilot should create a branch, send the task prompt to OpenCode, and start backend execution.
Actual
Pilot reaches all setup stages, including OpenCode server startup, then fails immediately on the first backend send.
Observed sequence:
- issue picked up
- branch selected/created
- Starting OpenCode server
- OpenCode server ready
- immediate failure on send
Representative log lines:
OpenCode server ready
Backend execution failed ... error="failed to send message: message failed ..."
In an earlier run, the nested validation payload included Invalid input: expected object, received string for the model field path.
Additional Notes
- opencode run --model openai/gpt-5.4-mini hello works on the same machine, so the provider/model itself is usable from the OpenCode CLI.
- The failure seems specific to Pilot -> OpenCode server integration, not OpenCode server startup.
- We also confirmed Pilot reaches OpenCode server ready, so this does not look like a connection-refused/startup issue.
Why this looks like a Pilot/OpenCode integration issue
Direct OpenCode CLI usage works, but Pilot fails when sending the task payload to the OpenCode server endpoint. That suggests request shape/protocol drift between Pilot and OpenCode server APIs.
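If the drift is the string-vs-object model field, one possible client-side shim is to split the flat "provider/model" string into the object shape before sending. This is a hedged sketch, not a confirmed fix; the output field names are assumptions:

```python
# Hypothetical shim: convert Pilot's flat "provider/model" string into
# the object shape the server's validator seems to expect.

def model_string_to_object(model: str) -> dict:
    provider, _, model_id = model.partition("/")
    return {"providerID": provider, "modelID": model_id}

print(model_string_to_object("openai/gpt-5.4-mini"))
# → {'providerID': 'openai', 'modelID': 'gpt-5.4-mini'}
```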
If useful, I can provide a fuller redacted log excerpt, but the key failure is reproducible at the first send step after server readiness.