A lightweight CLI for interacting with OpenRouter models and AI services. Provides chat capabilities, model listing, and code assistance directly from your terminal. This is an independent tool that works with OpenRouter's API, not the official OpenRouter CLI.
- Chat Interface: Engage in conversations with various AI models
- Code Assistance: Generate, explain, fix, review, or compare code
- Model Discovery: List and search available AI models
- Multiple Provider Support: Works with OpenRouter and OpenAI APIs
- Streaming Responses: Real-time output for faster feedback
- File Output: Save AI responses directly to files
- Configurable Models: Use your preferred AI model for tasks
Install globally using npm:
```bash
npm install -g routerx
```

You'll need an API key from OpenRouter or OpenAI. Add it to your `.env` file:

```env
OPENROUTER_API_KEY=your_api_key_here
# or
OPENAI_API_KEY=your_api_key_here
```

Start a chat:

```bash
routerx chat "Hello, how are you?"
```

Options:
- `--model <model>`: Specify the model name (default: `openai/gpt-4o-mini`)
- `--base-url <url>`: Override the API base URL
- `--save <file>`: Save the response to a file
```bash
routerx models
```

Options:

- `--free`: Show only free models
- `--search <keyword>`: Filter models by keyword
```bash
routerx code [mode] [target...]
```

Modes:

- `generate`: Generate code (default)
- `explain`: Explain code
- `fix`: Fix code
- `review`: Review code
- `diff`: Compare two files
Options:
- `--model <model>`: Force a specific model ID
- `--save <path>`: Save output to a file
- `--context <dir>`: Add a folder as context
- `--free`: Restrict model fallback to free models only
- `--prefer <keyword>`: Bias model selection toward models matching the keyword
Chat with default settings:
```bash
routerx chat "What is the weather like today?"
```

Use a specific model:

```bash
routerx chat --model "mistralai/mistral-7b-instruct" "Write a short poem"
```

Get code help:

```bash
routerx code explain mycode.js
```

Generate code:

```bash
routerx code generate "Create a function to calculate factorial in Python"
```

Compare two files:

```bash
routerx code diff file1.js file2.js
```

Save a chat response to a file:

```bash
routerx chat --save response.txt "Explain quantum computing"
```

Search for specific models:

```bash
routerx models --search "gpt-4"
```

List only free models:

```bash
routerx models --free
```

RouterX can be configured using multiple methods:
Create a .env file in your project root or home directory:
```env
OPENROUTER_API_KEY=your_openrouter_api_key
OPENAI_API_KEY=your_openai_api_key
```

RouterX uses `OPENROUTER_API_KEY` if it is available; otherwise it falls back to `OPENAI_API_KEY`.
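The fallback order can be sketched as follows. This is an illustration of the documented behavior, not RouterX's actual source; `resolveApiKey` is a hypothetical helper name:

```javascript
// Sketch of the documented key-resolution order (hypothetical helper,
// not RouterX's real code): prefer OPENROUTER_API_KEY, then OPENAI_API_KEY.
function resolveApiKey(env) {
  const key = env.OPENROUTER_API_KEY || env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("No API key found: set OPENROUTER_API_KEY or OPENAI_API_KEY");
  }
  return key;
}

// When both keys are present, the OpenRouter key wins.
console.log(resolveApiKey({ OPENROUTER_API_KEY: "or-key", OPENAI_API_KEY: "oa-key" })); // "or-key"
console.log(resolveApiKey({ OPENAI_API_KEY: "oa-key" })); // "oa-key"
```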
RouterX supports a JSON configuration file to set default values. Create a config.json file in your home directory or in the current working directory:
```json
{
  "defaultModel": "openai/gpt-4o-mini",
  "defaultBaseUrl": "https://openrouter.ai/api/v1",
  "defaultSavePath": "./outputs",
  "maxRetries": 3,
  "timeout": 30000
}
```

To create your own config file, copy the example:

```bash
cp node_modules/routerx/config.example.json ~/routerx-config.json
```

Settings in the config file override the built-in defaults but are themselves overridden by command-line options.
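That precedence (command-line options over config file over built-in defaults) can be sketched roughly like this. The option names mirror the config keys above, but the merge logic itself is an assumption for illustration, not RouterX's actual implementation:

```javascript
// Illustrative settings merge (assumption, not RouterX's real code):
// command-line options > config file > built-in defaults.
const builtinDefaults = {
  defaultModel: "openai/gpt-4o-mini",
  defaultBaseUrl: "https://openrouter.ai/api/v1",
  maxRetries: 3,
};

function resolveSettings(configFile = {}, cliOptions = {}) {
  // Later spreads win, so CLI options take top priority.
  return { ...builtinDefaults, ...configFile, ...cliOptions };
}

const settings = resolveSettings(
  { defaultModel: "mistralai/mistral-7b-instruct", maxRetries: 5 }, // config file
  { maxRetries: 1 }                                                 // CLI flags
);
// settings.defaultModel comes from the config file;
// settings.maxRetries comes from the CLI flag;
// settings.defaultBaseUrl falls back to the built-in default.
```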
- API Key Issues: Ensure your API keys are properly set in the environment
- Model Not Found: Check that the model ID exists in the available models list
- Rate Limiting: If you hit rate-limit errors, switch to a different model or wait before retrying
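A client-side retry with exponential backoff is one generic way to handle transient rate limits. This is a sketch reusing the `maxRetries` idea from the config section; `callModel` is a hypothetical placeholder, and RouterX's internal retry behavior may differ:

```javascript
// Generic retry-with-backoff sketch for rate-limit (HTTP 429) errors.
// `callModel` is a hypothetical placeholder, not part of RouterX's API.
async function withRetries(callModel, maxRetries = 3, baseDelayMs = 500) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await callModel();
    } catch (err) {
      const isRateLimit = err.status === 429;
      if (!isRateLimit || attempt === maxRetries) throw err; // give up
      // Exponential backoff: 500 ms, 1000 ms, 2000 ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```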
Contributions are welcome! Please read our Contributing Guidelines for details on how to get started.
Please read and follow our Code of Conduct to keep our community approachable and respectful.
See our Changelog for a history of changes and releases.
MIT