A simple terminal-based AI chat application with streaming responses and support for multiple providers, including Ollama and OpenRouter.
## Project Structure

This is a monorepo managed with pnpm workspaces:

- `packages/cli` - Terminal application
- `packages/web` - Next.js web interface
- `packages/shared` - Shared types and storage abstractions
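A workspace of this shape is typically declared in `pnpm-workspace.yaml`; the following is a sketch of the conventional layout, not necessarily the repository's exact file:

```yaml
# pnpm-workspace.yaml - conventional declaration for a packages/* layout
packages:
  - "packages/*"
```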
## Features

- Simple REPL interface
- Streaming responses (tokens are displayed as they arrive)
- Conversation history within a session
- Session management - Create, resume, and navigate between multiple chat sessions
- Support for multiple providers (Ollama and OpenRouter)
- Pluggable storage backends (in-memory or Redis; a sketch follows this list)
- Pluggable output/view layer (stdout, Redis pub/sub, composite)
- Real-time web streaming - View conversations in a web browser via SSE
- Web UI with session navigation - Browse all sessions with sidebar navigation
- JSON-based configuration
- Docker support for Redis and the web app
- Colored terminal output
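The storage abstraction in `packages/shared` might look something like the sketch below. The interface and method names here are illustrative assumptions, not the package's actual API:

```typescript
// Hypothetical shape of the pluggable storage abstraction; names are
// assumptions for illustration, not the actual API in packages/shared.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

interface ConversationStore {
  append(sessionId: string, message: ChatMessage): Promise<void>;
  history(sessionId: string): Promise<ChatMessage[]>;
  clear(sessionId: string): Promise<void>;
}

// In-memory backend, one of the two the feature list mentions. A Redis
// backend would implement the same interface against a Redis client.
class InMemoryStore implements ConversationStore {
  private sessions = new Map<string, ChatMessage[]>();

  async append(sessionId: string, message: ChatMessage): Promise<void> {
    const messages = this.sessions.get(sessionId) ?? [];
    messages.push(message);
    this.sessions.set(sessionId, messages);
  }

  async history(sessionId: string): Promise<ChatMessage[]> {
    return this.sessions.get(sessionId) ?? [];
  }

  async clear(sessionId: string): Promise<void> {
    this.sessions.delete(sessionId);
  }
}
```

With an interface like this, swapping the in-memory backend for Redis becomes a construction-time choice rather than a code change.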
## Prerequisites

- Node.js 18+ (for native fetch support)
- pnpm package manager
## Installation

- Clone the repository or download the source code
- Install dependencies for all packages:

```bash
pnpm install
```

- Build all packages:

```bash
pnpm build
```

Or build specific packages:

```bash
pnpm build:cli
pnpm build:web
```

## Docker Setup

The project includes Docker support for running Redis and the web streaming application.
```bash
# Start all services (Redis + Web app)
docker compose up -d

# Start only Redis
docker compose up -d redis

# Check services are running
docker compose ps

# View logs
docker compose logs -f        # All services
docker compose logs -f redis  # Redis only
docker compose logs -f web    # Web app only

# Stop all services
docker compose down
```

The Redis container will:

- Run on port 6379
- Persist data in a Docker volume named `redis-data`
- Use AOF (Append Only File) persistence for durability

The web container will:

- Run on port 3000
- Connect to Redis for message streaming
- Provide an SSE endpoint at `/api/stream`
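A compose file along the following lines would produce the behavior described above; this is a sketch, and the repository's actual `docker-compose.yml` (service names, build context, environment) may differ:

```yaml
# Sketch of a compose file matching the described behavior; the actual
# docker-compose.yml may differ.
services:
  redis:
    image: redis:7
    command: ["redis-server", "--appendonly", "yes"] # AOF persistence
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data

  web:
    build: ./packages/web # assumed build context
    ports:
      - "3000:3000"
    environment:
      REDIS_HOST: redis # assumed variable for reaching Redis in-network
    depends_on:
      - redis

volumes:
  redis-data:
```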
## Configuration

The application will create a default config file on first run at one of these locations:

- `./config.json` (local directory, checked first)
- `~/.config/ai-chat/config.json` (global config)

You can also copy `config.example.json` to `config.json` and modify it:
```json
{
"ollama": {
"baseUrl": "http://localhost:11434",
"model": "llama2"
},
"openrouter": {
"baseUrl": "https://openrouter.ai/api/v1",
"model": "anthropic/claude-3.5-sonnet",
"apiKey": "sk-or-v1-YOUR_API_KEY_HERE"
},
"defaults": {
"provider": "ollama"
}
}
```

- `ollama.baseUrl`: The URL of your Ollama instance (default: `http://localhost:11434`)
- `ollama.model`: The Ollama model to use (e.g., `llama2`, `mistral`, `codellama`)
- `openrouter.baseUrl`: The OpenRouter API URL (default: `https://openrouter.ai/api/v1`)
- `openrouter.model`: The OpenRouter model to use (e.g., `anthropic/claude-3.5-sonnet`)
- `openrouter.apiKey`: Your OpenRouter API key (required for OpenRouter)
- `defaults.provider`: Default provider to use (`ollama` or `openrouter`)
- `redis.enabled`: Enable Redis for persistent conversation history (default: `false`)
- `redis.host`: Redis server hostname (default: `localhost`)
- `redis.port`: Redis server port (default: `6379`)
- `redis.password`: Redis password (optional)
- `redis.username`: Redis username for ACL (optional, Redis 6+)
- `redis.database`: Redis database number, 0-15 (default: `0`)
- `redis.sessionName`: Unique name for the conversation session (default: `default-session`)
  - If set to `default-session` and no `REDIS_SESSION_NAME` env var is set, a unique session ID is auto-generated (sketched below)
  - To use named sessions, either set a custom value in config or use the `REDIS_SESSION_NAME` env var
- `redis.ttl`: Time-to-live in seconds for conversation data (optional; no expiration if not set)
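The `redis.sessionName` fallback behavior can be sketched as follows; the function name and exact precedence are assumptions for illustration, not the actual implementation:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical sketch of the session-name resolution rules described above.
function resolveSessionName(configValue: string): string {
  // The REDIS_SESSION_NAME env var takes precedence over the config file.
  const fromEnv = process.env.REDIS_SESSION_NAME;
  if (fromEnv) return fromEnv;

  // A custom config value names the session directly.
  if (configValue !== "default-session") return configValue;

  // Otherwise a unique session ID is auto-generated.
  return `session-${randomUUID()}`;
}
```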
All Redis settings can be overridden using environment variables: `REDIS_ENABLED`, `REDIS_HOST`, `REDIS_PORT`, `REDIS_PASSWORD`, `REDIS_USERNAME`, `REDIS_DATABASE`, `REDIS_SESSION_NAME`, and `REDIS_TTL`. See `.env.example` for a complete list with descriptions.
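For illustration, a `.env` exercising these overrides might look like the following; the values are examples, not defaults:

```bash
# Example values only - see .env.example for the authoritative list
REDIS_ENABLED=true
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_USERNAME=chat-user
REDIS_PASSWORD=secret
REDIS_DATABASE=0
REDIS_SESSION_NAME=my-session
REDIS_TTL=86400   # expire conversation data after one day
```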
## Development

Run the CLI in development mode with hot-reload:

```bash
pnpm dev:cli
```

Run the web app:

```bash
pnpm dev:web
```

## Usage

After building, run the CLI:

```bash
cd packages/cli
pnpm start
```

Or link the CLI globally:

```bash
cd packages/cli
pnpm link --global
ai-chat
```

```bash
# Use a specific provider
ai-chat --provider ollama
ai-chat --provider openrouter

# Use a specific model (overrides config)
ai-chat --provider ollama --model codellama
ai-chat --provider openrouter --model anthropic/claude-3-opus

# Resume or create a named session
ai-chat --session my-project
ai-chat --session frontend-work

# Combine options
ai-chat -p ollama -m llama2 -s research

# Short options
ai-chat -p ollama -m llama2 -s my-session
```

The `--session` (or `-s`) flag allows you to create, resume, and manage multiple chat sessions. See the Session Management guide for detailed information.
Once in the chat:

- Type your message and press Enter to send
- `/quit` or `/exit` - Exit the application
- `/clear` - Clear conversation history
- `/help` - Show available commands
Make sure Ollama is running locally:

```bash
ollama serve
```

Then start the chat:

```bash
ai-chat --provider ollama
```

Make sure you have set your API key in the config file, then:

```bash
ai-chat --provider openrouter
```

To persist conversation history in Redis:

- Start Redis using Docker:

```bash
docker compose up -d
```

- Enable Redis in your `config.json`:

```json
{
"redis": {
"enabled": true,
"host": "localhost",
"port": 6379,
"sessionName": "my-conversation"
}
}
```

- Run the CLI:

```bash
pnpm dev:cli
```

Your conversation history will now persist in Redis. Stop and restart the application to see your history restored.

Alternatively, use environment variables without modifying config:

```bash
REDIS_ENABLED=true REDIS_SESSION_NAME=my-session pnpm dev:cli
```

## Web Streaming

Stream your CLI conversation to a web browser in real time using Server-Sent Events (SSE).

For setup instructions and configuration details, see the Web Streaming guide.
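As a rough sketch, a browser client could consume the `/api/stream` endpoint with the standard `EventSource` API. The payload handling below is an assumption; see the Web Streaming guide for the actual event format:

```typescript
// Hypothetical SSE consumer; the event payload shape is an assumption.
const source = new EventSource("http://localhost:3000/api/stream");

source.onmessage = (event: MessageEvent) => {
  // Each SSE message is assumed to carry one JSON-encoded chat token/event.
  const payload = JSON.parse(event.data);
  console.log(payload);
};

source.onerror = () => {
  // EventSource reconnects automatically; close it to stop streaming.
  source.close();
};
```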
## License

ISC