Context
The original CompanionAgent example communicates with OpenAI via raw fetch() calls to https://api.openai.com/v1/chat/completions. There are no provider implementations for Anthropic or Google Gemini.
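For illustration, the raw approach likely resembles the sketch below. The helper name `buildChatRequest` and the exact payload shape are assumptions, not taken from the example's actual code.

```typescript
// Hypothetical sketch of the hand-rolled fetch() request construction.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(apiKey: string, model: string, messages: ChatMessage[]) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Usage: const { url, init } = buildChatRequest(key, "gpt-4o", msgs);
//        const res = await fetch(url, init);
```

Everything after this point (status checks, JSON parsing, retries) must then be written by hand, which is exactly the burden described below.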
Problem
- Manual HTTP handling. Error parsing, response normalization, header management, and retry logic are all hand-written. Native SDKs handle this automatically.
- No Anthropic or Gemini support. Users who want Claude or Gemini must write their own integration from scratch, including format conversion (Anthropic's top-level system parameter, Gemini's Content parts format).
- Inconsistent error handling. Raw fetch errors are generic: there is no distinction between auth failures, rate limits, network errors, and timeouts, so MCP clients receive opaque error messages.
- No token usage reporting. The raw fetch approach does not extract or normalize token usage from provider responses.
Expected behavior
- Built-in providers for OpenAI, Anthropic, and Google Gemini using their official native SDKs.
- Normalized response format (text, stopReason, usage) regardless of which provider generated it.
- Typed error classification (AUTH, UPSTREAM, TRANSPORT, TIMEOUT) for all providers.
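A minimal sketch of the normalized shapes the bullets describe might look like the following. The field names inside `usage` and the status-to-kind mapping are assumptions; only `text`, `stopReason`, `usage`, and the four error kinds come from the expected behavior above.

```typescript
type ErrorKind = "AUTH" | "UPSTREAM" | "TRANSPORT" | "TIMEOUT";

interface Usage {
  inputTokens: number;
  outputTokens: number;
}

// One result shape regardless of which provider produced it.
interface CompletionResult {
  text: string;
  stopReason: string;
  usage: Usage;
}

class ProviderError extends Error {
  constructor(public kind: ErrorKind, message: string) {
    super(message);
    this.name = "ProviderError";
  }
}

// Illustrative mapping from an HTTP status to an error kind;
// network-level failures (no status at all) would map to TRANSPORT.
function classifyStatus(status: number): ErrorKind {
  if (status === 401 || status === 403) return "AUTH";
  if (status === 408) return "TIMEOUT";
  return "UPSTREAM";
}
```

With shapes like these, an MCP client can branch on `error.kind` (retry on TIMEOUT, surface AUTH immediately) instead of parsing provider-specific error strings.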