
Agent supports only one AI provider — no way to combine OpenAI + Anthropic + Gemini #2

@morozow

Description

Context

Currently the only built-in agent example (CompanionAgent) is hardwired to a single OpenAI endpoint via raw fetch(). If a user wants to use Anthropic or Gemini, they have to write an entirely separate agent from scratch. There is no shared abstraction.

Problem

  1. One agent = one provider. There is no interface or registry that lets an agent talk to multiple LLM backends. A user who needs both GPT-4o and Claude in the same server must duplicate the entire agent implementation.
  2. No provider switching at runtime. Once a session is created, the provider is baked in. An MCP client cannot say "use Anthropic for this session" — the choice is hardcoded at server startup.
  3. No provider discovery. agents_discover returns agent IDs and capabilities, but nothing about which LLM providers or models are available behind the agent. Clients have no way to know what they can route to.
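The missing abstraction could be as small as a provider interface plus a registry. The sketch below is illustrative only — `AIProvider`, `ProviderRegistry`, and `complete()` are assumed names, not existing APIs in this repo:

```typescript
// Hypothetical sketch: AIProvider, ProviderRegistry, and complete()
// are illustrative names, not existing APIs in this repo.
export interface AIProvider {
  readonly id: string;       // e.g. "openai", "anthropic", "gemini"
  readonly models: string[]; // models this provider can serve
  complete(model: string, prompt: string): Promise<string>;
}

export class ProviderRegistry {
  private providers = new Map<string, AIProvider>();

  register(provider: AIProvider): void {
    this.providers.set(provider.id, provider);
  }

  get(id: string): AIProvider {
    const provider = this.providers.get(id);
    if (!provider) throw new Error(`Unknown provider: ${id}`);
    return provider;
  }

  // What agents_discover could additionally expose:
  // provider ids and the models behind each one.
  list(): Array<{ id: string; models: string[] }> {
    return [...this.providers.values()].map((p) => ({
      id: p.id,
      models: p.models,
    }));
  }
}
```

With something like this, CompanionAgent would depend only on `AIProvider`, and adding Anthropic or Gemini would mean registering one more implementation rather than duplicating the agent.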

Expected behavior

  • A single agent should be able to delegate to any registered AI provider.
  • MCP clients should be able to select a provider per session and discover available providers and models.
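Per-session selection could then be a parameter at session creation, falling back to a server default. Again a sketch under assumed names — `Session`, `SessionOptions`, and `createSession` are hypothetical, shown only to make the expected behavior concrete:

```typescript
// Hypothetical sketch: Session, SessionOptions, and createSession are
// illustrative names showing per-session provider selection, not an
// existing API in this repo.
type CompleteFn = (model: string, prompt: string) => Promise<string>;

interface ProviderEntry {
  models: string[];
  complete: CompleteFn;
}

interface SessionOptions {
  provider?: string; // e.g. "anthropic"; defaults to the server default
  model?: string;    // defaults to the provider's first listed model
}

class Session {
  constructor(
    readonly providerId: string,
    readonly model: string,
    private readonly completeFn: CompleteFn,
  ) {}

  ask(prompt: string): Promise<string> {
    return this.completeFn(this.model, prompt);
  }
}

function createSession(
  providers: Map<string, ProviderEntry>,
  opts: SessionOptions = {},
  defaultProvider = "openai",
): Session {
  const id = opts.provider ?? defaultProvider;
  const entry = providers.get(id);
  if (!entry) throw new Error(`Unknown provider: ${id}`);
  return new Session(id, opts.model ?? entry.models[0], entry.complete);
}
```

The point of resolving the provider at `createSession` time rather than server startup is that an MCP client can say "use Anthropic for this session" without the server being restarted or reconfigured.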

Metadata

Labels

enhancement (New feature or request)
