feat(ai): add SmartRouter for unified multi-provider LLM routing #477
louisrodriguez1990-crypto wants to merge 1 commit into
Conversation
Implements SmartRouter class that automatically detects the correct LLM provider from model name prefix, routes requests, and falls back through a configurable chain when a provider fails. Supports OpenAI, Gemini, Llama, and Cohere with token usage tracking and messages array input. Closes arakoodev#286 Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
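The prefix-based detection described above can be sketched as follows. This is a minimal illustration based on the prefixes listed in the PR description (`detectProvider` matches the test names mentioned later, but the actual implementation may differ):

```typescript
// Sketch of prefix-based provider detection, assuming the prefixes
// listed in the PR description. o1-*/o3-* routing to OpenAI is taken
// from the Codex review comment below.
type Provider = "openai" | "gemini" | "llama" | "cohere";

function detectProvider(model: string): Provider | null {
  if (model.startsWith("gpt-") || /^o[13]-/.test(model)) return "openai";
  if (model.startsWith("gemini-")) return "gemini";
  if (model.startsWith("llama-") || model.startsWith("meta-llama/")) return "llama";
  if (model.startsWith("command-")) return "cohere";
  return null; // unknown model
}
```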
CLA Assistant Lite bot: Thank you for your submission, we really appreciate it. Before we can accept your contribution, we ask that you sign the Arakoo Contributor License Agreement. You can sign the CLA by adding a new comment to this pull request and pasting exactly the following text: "I have read the Arakoo CLA Document and I hereby sign the CLA". You can retrigger this bot by commenting recheck in this Pull Request.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 56502b60c5
```ts
for (const provider of order) {
  if (!this.keys[provider]) continue;
  try {
    return await this.callProvider(provider, options.model, messages, options);
```
Normalize model IDs for fallback providers
When fallback kicks in, chat() always forwards the original options.model to every provider, so cross-provider fallbacks often send invalid model IDs (for example, OpenAI model gpt-4o to Cohere/Llama). In those cases the alternate provider fails even when it is healthy and configured, which breaks the advertised “automatic fallback chain” behavior outside the OpenAI→Gemini path covered by tests.
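One possible fix is sketched below. `DEFAULT_MODEL` and `modelForProvider` are illustrative names, not part of this PR: the idea is to forward the requested model only to the provider it was detected for, and substitute a provider-appropriate default everywhere else in the fallback chain.

```typescript
// Illustrative sketch only: DEFAULT_MODEL and modelForProvider are
// assumed names, and the default model IDs are examples.
const DEFAULT_MODEL: Record<string, string> = {
  openai: "gpt-4o-mini",
  gemini: "gemini-1.5-flash",
  llama: "meta-llama/Llama-3-8b-chat-hf",
  cohere: "command-r",
};

// Keep the requested model only for its own provider; give every
// other provider in the fallback chain its default model instead.
function modelForProvider(
  provider: string,
  requestedModel: string,
  detectedProvider: string | null
): string {
  if (provider === detectedProvider) return requestedModel;
  return DEFAULT_MODEL[provider] ?? requestedModel;
}
```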
```ts
{
  model,
  messages,
  max_tokens: options.max_tokens ?? 256,
```
Send o-series-safe token limit field to OpenAI
Provider detection explicitly routes o1-*/o3-* models to OpenAI, but the request body always uses max_tokens. For o-series chat completions, max_tokens is not supported and requests should use max_completion_tokens, so these models can fail despite being recognized as supported.
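A minimal sketch of the suggested fix follows; `tokenLimitField` is a hypothetical helper, not code from this PR. It picks the request field by model family, since o-series chat completions take max_completion_tokens while other OpenAI chat models take max_tokens.

```typescript
// Hypothetical helper: choose the token-limit request field by model
// family. o1-*/o3-* chat completions expect max_completion_tokens;
// other OpenAI chat models still use max_tokens.
function tokenLimitField(model: string, limit: number): Record<string, number> {
  const isOSeries = /^o\d+-/.test(model);
  return isOSeries ? { max_completion_tokens: limit } : { max_tokens: limit };
}
```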
I have read the Arakoo CLA Document and I hereby sign the CLA
Summary
Implements SmartRouter, a unified LLM routing class that automatically detects the provider from the model name prefix and falls back through a configurable chain on failure. Closes #286

What's included
- Provider detection from model name prefix (gpt-* → OpenAI, gemini-* → Gemini, llama-*/meta-llama/ → Llama, command-* → Cohere)
- Token usage tracking: { prompt_tokens, completion_tokens, total_tokens }
- Configurable fallbackChain constructor option
- prompt string input in addition to the messages array
- import { SmartRouter } from "@arakoodev/edgechains-ai"

Test coverage (Jest, jest.mock("axios"))
- detectProvider for all 4 providers and an unknown model
- isModelSupported / listProviders
- "SmartRouter: all providers failed" error
- fallbackChain order
- messages array input
- error when neither prompt nor messages is provided

Usage
/claim #286