
feat(ai): add SmartRouter for unified multi-provider LLM routing#477

Open
louisrodriguez1990-crypto wants to merge 1 commit into arakoodev:ts from louisrodriguez1990-crypto:feat/smart-router

Conversation

@louisrodriguez1990-crypto

Summary

Implements SmartRouter — a unified LLM routing class that automatically detects the provider from model name prefix and falls back through a configurable chain on failure.

Closes #286

What's included

  • Provider auto-detection via model name prefix (gpt-* → OpenAI, gemini-* → Gemini, llama-*/meta-llama/ → Llama, command-* → Cohere)
  • Automatic fallback chain — if the primary provider fails (rate limit, network error, etc.), the router tries the next provider in sequence
  • All four providers supported: OpenAI, Gemini, Llama, Cohere (Cohere was missing from competing PR feat: add SmartRouter for unified LLM routing #453)
  • Token usage tracking across all providers, normalized to { prompt_tokens, completion_tokens, total_tokens }
  • Skip providers with no API key — never calls a provider that isn't configured
  • Custom fallback chain order via fallbackChain constructor option
  • Messages array support in addition to plain prompt string
  • Exported from the package index: import { SmartRouter } from "@arakoodev/edgechains-ai"
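A minimal sketch of the detection and ordering behavior described above (the function and option names follow this PR's description; the internals are assumed, not taken from the diff):

```typescript
type Provider = "openai" | "gemini" | "llama" | "cohere";

const DEFAULT_CHAIN: Provider[] = ["openai", "gemini", "llama", "cohere"];

// Map a model name prefix to its provider, per the rules listed above.
function detectProvider(model: string): Provider | null {
  if (model.startsWith("gpt-")) return "openai";
  if (model.startsWith("gemini-")) return "gemini";
  if (model.startsWith("llama-") || model.startsWith("meta-llama/")) return "llama";
  if (model.startsWith("command-")) return "cohere";
  return null;
}

// Put the detected provider first; the rest of the chain becomes the fallback order.
function buildOrder(model: string, chain: Provider[] = DEFAULT_CHAIN): Provider[] {
  const primary = detectProvider(model);
  return primary ? [primary, ...chain.filter((p) => p !== primary)] : [...chain];
}
```

With this shape, passing a custom fallbackChain only changes the order of the non-primary providers; the detected provider always goes first.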

Test coverage (Jest, jest.mock("axios"))

  • detectProvider for all 4 providers and unknown model
  • isModelSupported / listProviders
  • Primary routing per provider (OpenAI, Gemini, Llama, Cohere)
  • Fallback: OpenAI fails → Gemini succeeds
  • All providers fail → throws "SmartRouter: all providers failed"
  • Skips providers with no API key configured
  • Respects custom fallbackChain order
  • Accepts messages array
  • Throws when neither prompt nor messages provided

Usage

import { SmartRouter } from "@arakoodev/edgechains-ai";

const router = new SmartRouter({
  openaiApiKey: process.env.OPENAI_API_KEY,
  geminiApiKey: process.env.GEMINI_API_KEY,
});

const result = await router.chat({ model: "gpt-4o", prompt: "Hello!" });
console.log(result.content);   // "Hello! How can I help you?"
console.log(result.provider);  // "openai"
console.log(result.usage);     // { prompt_tokens: 9, completion_tokens: 12, total_tokens: 21 }
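The normalized usage object above has to be built from each provider's raw response shape. As one illustration, a hedged sketch for Gemini (the usageMetadata field names are from the Gemini API; the helper name is assumed, not from this PR):

```typescript
type Usage = { prompt_tokens: number; completion_tokens: number; total_tokens: number };

// Gemini reports usage as usageMetadata.{promptTokenCount, candidatesTokenCount,
// totalTokenCount}; normalize it to the OpenAI-style shape returned in result.usage.
function normalizeGeminiUsage(meta: {
  promptTokenCount?: number;
  candidatesTokenCount?: number;
  totalTokenCount?: number;
}): Usage {
  const prompt = meta.promptTokenCount ?? 0;
  const completion = meta.candidatesTokenCount ?? 0;
  return {
    prompt_tokens: prompt,
    completion_tokens: completion,
    total_tokens: meta.totalTokenCount ?? prompt + completion,
  };
}
```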

/claim #286

Implements SmartRouter class that automatically detects the correct LLM
provider from model name prefix, routes requests, and falls back through
a configurable chain when a provider fails. Supports OpenAI, Gemini,
Llama, and Cohere with token usage tracking and messages array input.

Closes arakoodev#286

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@github-actions

CLA Assistant Lite bot: Thank you for your submission, we really appreciate it. Before we can accept your contribution, we ask that you sign the Arakoo Contributor License Agreement. You can sign the CLA by adding a new comment to this pull request and pasting exactly the following text.


I have read the Arakoo CLA Document and I hereby sign the CLA


You can retrigger this bot by commenting recheck in this Pull Request


@chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 56502b60c5


for (const provider of order) {
  if (!this.keys[provider]) continue;
  try {
    return await this.callProvider(provider, options.model, messages, options);


P1: Normalize model IDs for fallback providers

When fallback kicks in, chat() always forwards the original options.model to every provider, so cross-provider fallbacks often send invalid model IDs (for example, OpenAI model gpt-4o to Cohere/Llama). In those cases the alternate provider fails even when it is healthy and configured, which breaks the advertised “automatic fallback chain” behavior outside the OpenAI→Gemini path covered by tests.

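One possible fix for the flagged issue, sketched under assumed names (the default models below are illustrative, not from the PR): substitute a provider-native default whenever the requested model does not belong to the fallback provider.

```typescript
type Provider = "openai" | "gemini" | "llama" | "cohere";

// Illustrative per-provider defaults used when falling back cross-provider.
const DEFAULT_MODEL: Record<Provider, string> = {
  openai: "gpt-4o-mini",
  gemini: "gemini-1.5-flash",
  llama: "meta-llama/Llama-3-8b-chat-hf",
  cohere: "command-r",
};

// Model-name prefixes each provider can actually serve.
const PREFIXES: Record<Provider, string[]> = {
  openai: ["gpt-", "o1-", "o3-"],
  gemini: ["gemini-"],
  llama: ["llama-", "meta-llama/"],
  cohere: ["command-"],
};

// Forward the requested model only if the provider recognizes it;
// otherwise fall back to that provider's default model.
function modelFor(provider: Provider, requested: string): string {
  const native = PREFIXES[provider].some((p) => requested.startsWith(p));
  return native ? requested : DEFAULT_MODEL[provider];
}
```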

{
  model,
  messages,
  max_tokens: options.max_tokens ?? 256,


P1: Send o-series-safe token limit field to OpenAI

Provider detection explicitly routes o1-*/o3-* models to OpenAI, but the request body always uses max_tokens. For o-series chat completions, max_tokens is not supported and requests should use max_completion_tokens, so these models can fail despite being recognized as supported.

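A sketch of the suggested change (the field names match the OpenAI Chat Completions API; the helper name is hypothetical): pick the token-limit field based on whether the model is o-series before building the request body.

```typescript
// o-series models (o1-*, o3-*, ...) reject `max_tokens` and expect
// `max_completion_tokens`; other chat models still use `max_tokens`.
function tokenLimitField(model: string, limit: number): Record<string, number> {
  return /^o\d/.test(model)
    ? { max_completion_tokens: limit }
    : { max_tokens: limit };
}
```

The returned object can then be spread into the existing request body in place of the hard-coded max_tokens field.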

@louisrodriguez1990-crypto
Author

I have read the Arakoo CLA Document and I hereby sign the CLA



Development

Successfully merging this pull request may close these issues.

BOUNTY: Convert the endpoints to a smart router like litellm does in python
