Context
Currently the only built-in agent example (`CompanionAgent`) is hardwired to a single OpenAI endpoint via a raw `fetch()` call. If a user wants to use Anthropic or Gemini, they have to write an entirely separate agent from scratch. There is no shared abstraction.
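For reference, the hardwired pattern looks roughly like this (a simplified, illustrative sketch; the actual `CompanionAgent` code may differ):

```ts
// Simplified sketch of the hardwired pattern described above.
class CompanionAgent {
  async chat(prompt: string): Promise<string> {
    // Provider, endpoint, and model are all baked in; there is no way
    // to substitute Anthropic or Gemini without rewriting the agent.
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-4o",
        messages: [{ role: "user", content: prompt }],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }
}
```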
Problem
- One agent = one provider. There is no interface or registry that lets an agent talk to multiple LLM backends (see the sketch after this list). A user who needs both GPT-4o and Claude in the same server must duplicate the entire agent implementation.
- No provider switching at runtime. Once a session is created, the provider is baked in. An MCP client cannot say "use Anthropic for this session" — the choice is hardcoded at server startup.
- No provider discovery. `agents_discover` returns agent IDs and capabilities, but nothing about which LLM providers or models are available behind the agent. Clients have no way to know what they can route to.
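One possible shape for the missing abstraction is sketched below. `LLMProvider`, `ProviderRegistry`, and the method names are hypothetical; nothing here exists in the codebase today:

```ts
// Hypothetical provider interface: every backend (OpenAI, Anthropic,
// Gemini, ...) would implement this instead of living inside an agent.
interface LLMProvider {
  readonly id: string;       // e.g. "openai", "anthropic"
  readonly models: string[]; // models this provider can serve
  chat(model: string, prompt: string): Promise<string>;
}

// Hypothetical registry that agents delegate to instead of calling
// fetch() directly, and that discovery could enumerate.
class ProviderRegistry {
  private providers = new Map<string, LLMProvider>();

  register(provider: LLMProvider): void {
    this.providers.set(provider.id, provider);
  }

  get(id: string): LLMProvider {
    const provider = this.providers.get(id);
    if (!provider) throw new Error(`Unknown provider: ${id}`);
    return provider;
  }

  // The kind of payload agents_discover could expose per agent.
  list(): Array<{ id: string; models: string[] }> {
    return [...this.providers.values()].map((p) => ({
      id: p.id,
      models: p.models,
    }));
  }
}
```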
Expected behavior
- A single agent should be able to delegate to any registered AI provider.
- MCP clients should be able to select a provider per session and discover available providers and models (see the sketch below).
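A minimal sketch of what per-session selection could look like, building on the hypothetical registry above (all names are still illustrative, not a proposed final API):

```ts
// Hypothetical session that routes through the registry rather than a
// hardcoded provider, so the MCP client picks the backend per session.
class AgentSession {
  constructor(
    private registry: ProviderRegistry,
    private providerId: string, // chosen by the client at session creation
    private model: string,
  ) {}

  // Lets a client say "use Anthropic for this session" at runtime.
  setProvider(providerId: string, model: string): void {
    this.registry.get(providerId); // validate before switching
    this.providerId = providerId;
    this.model = model;
  }

  ask(prompt: string): Promise<string> {
    return this.registry.get(this.providerId).chat(this.model, prompt);
  }
}
```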