Omni-CLI is a Dockerized SSH workspace that bundles multiple AI CLIs in one place. Connect once and use Gemini, Codex, Copilot, Claude, and Crush without installing anything on your laptop. Your tools and configs live in a single remote workspace, so you do not need to log in on every device.
Think of it as an all-in-one AI CLI cockpit you can reach from anywhere over SSH.
- 🔑 SSH-First Workflow: Connect remotely and launch AI CLIs immediately.
- 🤖 Preinstalled Agents: Gemini, Codex, Copilot, Claude, and Crush are ready out of the box.
- 💾 Persistent Workspace: Projects and auth/config live in Docker volumes, not on each device.
- 🧭 Unified Menu: The `omni-cli` dashboard navigates nested folders, shows breadcrumbs, and launches tools.
- 🔒 Isolated Runtime: Everything runs in Docker, keeping your host clean.
- 👤 Smart UID/GID Mapping: Avoids permission issues on mounted volumes.
- 🧩 Crush CLI: Preinstalled via `npm install -g @charmland/crush`.
- 🗝️ API Key Status: Quick status panel for configured API keys.
- 🔌 OpenAI-Style API: Local HTTP server that proxies chat completions to Codex or Gemini.
- Docker Engine installed on your machine.
- Docker Compose (Optional, but recommended for easier management).
Get up and running in seconds.
Create a `docker-compose.yml` file (or clone the repo) and start the service:

```bash
# Start the container in the background
docker-compose up -d
```

Or pull the prebuilt image and run it directly:

```bash
docker pull ghcr.io/mabelisle/omni-cli:main
docker run -d \
  --name omni-cli \
  -p 2222:22 \
  -v $(pwd)/omni-data:/data \
  -v $(pwd)/omni-config:/config \
  -e PUID=$(id -u) \
  -e PGID=$(id -g) \
  ghcr.io/mabelisle/omni-cli:main
```

Access the environment via SSH (password: `changeme`):

```bash
ssh omni@localhost -p 2222
```

🎉 You are now inside Omni-CLI. The AI menu launches automatically and tools are ready to use.
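For convenience, you can add an entry to your `~/.ssh/config` so the workspace is one short command away (values match the quick-start mapping above; adjust if you changed the port):

```
Host omni
    HostName localhost
    Port 2222
    User omni
```

After that, `ssh omni` drops you straight into the menu.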
You can customize the environment by setting environment variables in your `docker-compose.yml` or `docker run` command.

| Variable | Default | Description |
|---|---|---|
| `PUID` | `1000` | User ID. Set this to your host user's UID (run `id -u`) to ensure you have write access to mounted volumes. |
| `PGID` | `1000` | Group ID. Set this to your host user's GID (run `id -g`). |
| `USER_PASS` | `changeme` | SSH Password. The password for the `omni` user (only effective if set during build via `--build-arg`). |
| `TZ` | `UTC` | Timezone. Sets the container timezone (e.g., `America/New_York`). |
| `CODEX_PASSTHROUGH_PORT` | `8000` | API Server Port. Port exposed by the Codex/Gemini passthrough server. |
| `CODEX_TIMEOUT_SECONDS` | `300` | API Timeout. Max runtime for Codex/Gemini requests. |
| Volume | Internal Path | Description |
|---|---|---|
| `data` | `/data` | Workspace Storage. Maps to your local project directory. |
| `config` | `/config` | Tool Configs. Persists npm caches, auth tokens, and CLI settings. |
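Since everything worth keeping lives in these two volumes, backups are just archives of the bind-mounted directories. A minimal sketch, assuming the `./omni-data` and `./omni-config` host paths used throughout this README:

```bash
# Stop the container so files are quiescent, archive both volumes, restart
docker stop omni-cli
tar czf omni-backup-$(date +%F).tar.gz omni-data omni-config
docker start omni-cli
```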
Create a `docker-compose.yml` file:

```yaml
version: '3.8'
services:
  omni-cli:
    build: .
    container_name: omni-cli
    environment:
      - PUID=1000 # Change to $(id -u)
      - PGID=1000 # Change to $(id -g)
      - TZ=UTC
    volumes:
      - ./omni-data:/data
      - ./omni-config:/config
    ports:
      - "2222:22"
    restart: unless-stopped
```

Alternatively, run the prebuilt image directly:

```bash
docker run -d \
  --name omni-cli \
  -p 2222:22 \
  -v $(pwd)/omni-data:/data \
  -v $(pwd)/omni-config:/config \
  -e PUID=$(id -u) \
  -e PGID=$(id -g) \
  ghcr.io/mabelisle/omni-cli:main
```

Or build the image from source and run it:

```bash
docker build -t omni-cli .
docker run -d \
  --name omni-cli \
  -p 2222:22 \
  -v $(pwd)/omni-data:/data \
  -v $(pwd)/omni-config:/config \
  -e PUID=$(id -u) \
  -e PGID=$(id -g) \
  omni-cli
```

The `entrypoint.sh` script is the brain of the container initialization:
- Permission Fix: It checks the `PUID` and `PGID` env vars and modifies the internal `omni` user to match them.
- Key Gen: Generates SSH host keys if they are missing.
- Privilege Drop: It runs as `root` to perform setup, then executes the final command (or starts the SSH daemon) as the unprivileged `omni` user, as sketched below.
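The shipped `entrypoint.sh` handles this for you, but conceptually the startup flow looks something like the following sketch (assuming `usermod`/`groupmod` and a privilege-drop helper such as `gosu` exist in the image; the real script may differ):

```bash
#!/bin/sh
# Illustrative sketch of the init steps above: not the shipped entrypoint.sh.

# 1. Permission fix: align the omni user/group with the host's UID/GID
groupmod -o -g "${PGID:-1000}" omni
usermod  -o -u "${PUID:-1000}" omni

# 2. Key gen: create any missing SSH host keys
ssh-keygen -A

# 3. Privilege drop: run the final command as the unprivileged user
exec gosu omni "$@"
```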
When you log in, `omni-cli.sh` is sourced. It provides an ASCII-art menu to:
- Navigate folders and subfolders in `/data` with breadcrumb paths.
- Create/Delete folders.
- Launch context-aware AI sessions within those folders (Gemini, Codex, Copilot, Claude, Crush).
- View API key status.
Important: Omni-CLI provides the environment and tools, but you must provide the access.
Each AI CLI (Gemini, Codex, Copilot, Claude, Crush) is pre-installed software that requires its own authentication. When you launch a tool for the first time, you will typically be prompted to log in or provide an API key.
There are plenty of ways to get free AI access using these tools! Since you have all of them at your fingertips:
- Start with your preferred agent.
- If you hit a rate limit or a free tier cap, simply switch to the next one in the menu.
- Cycle through Gemini, Codex, Copilot, Claude, and Crush to maximize your productivity without needing a paid subscription for every single service.
Omni-CLI starts a lightweight Node.js HTTP server (`api.js`) inside the container that proxies chat completions to your installed CLI tools (Codex or Gemini). It exposes fully OpenAI-compatible endpoints, so you can use the same API structure as OpenAI's official API while the work is done by the CLIs in the container.
The API server acts as a translation layer:
- Receives OpenAI-formatted requests (`POST /v1/chat/completions`)
- Translates the request into CLI commands
- Executes the appropriate CLI tool (`codex` or `gemini`)
- Returns an OpenAI-compatible response
Translation Flow:
OpenAI Request → api.js → CLI execution → Response parsing → OpenAI Response
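You can reproduce the middle of this pipeline by hand from an SSH session. A rough sketch, assuming the CLIs' non-interactive modes (`codex exec` and `gemini -p`; exact flags vary by CLI version):

```bash
# Roughly what api.js does per request: feed the prompt to a CLI
codex exec "Write a haiku about SSH."   # for Codex-routed models
gemini -p "Write a haiku about SSH."    # for Gemini-routed models
```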
Key Features:
- Model Aliasing: `codex-default` and `gemini-default` aliases
- OpenRouter Integration: Automatically discovers and verifies OpenRouter models when `OPENROUTER_API_KEY` is set
- Smart Routing: Automatically routes to Codex or Gemini based on the model name pattern (see the sketch after this list)
- Temporary Workspaces: Runs each request in isolated temporary directories
- Timeout Protection: Configurable timeout (default 300s) prevents hanging requests
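The routing rule referenced in the list above is plain pattern matching on the model ID. A hypothetical sketch (the real `api.js` logic may differ):

```bash
# Pick a CLI backend from the requested model name
route_model() {
  case "$1" in
    gemini-*) echo "gemini" ;;  # e.g. gemini-2.5-pro, gemini-default
    *)        echo "codex"  ;;  # e.g. gpt-4o, codex-default
  esac
}

route_model "gemini-2.5-pro"  # prints: gemini
route_model "codex-default"   # prints: codex
```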
| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | Health check |
| `/v1/models` | GET | Model catalog (list of available models) |
| `/v1/chat/completions` | POST | Chat completions (OpenAI-compatible) |
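Once the API port is mapped (next section), the health-check endpoint gives a quick liveness test:

```bash
curl http://localhost:8000/
```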
Map the default port `8000` (or the value of `CODEX_PASSTHROUGH_PORT`) when you run the container:

```bash
docker run -d \
  --name omni-cli \
  -p 2222:22 \
  -p 8000:8000 \
  -v $(pwd)/omni-data:/data \
  -v $(pwd)/omni-config:/config \
  -e PUID=$(id -u) \
  -e PGID=$(id -g) \
  ghcr.io/mabelisle/omni-cli:main
```

Docker Compose:

```yaml
ports:
  - "2222:22"
  - "8000:8000"
```

Send a basic chat completion request:

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "codex-default",
    "messages": [
      { "role": "user", "content": "Write a haiku about SSH." }
    ]
  }'
```

Response:
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "codex-default",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Secure shell flows,\nEncrypted tunnel connects,\nRemote access blooms."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}
```

Multi-turn conversations work the same way; send the full message history with each request:

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.5-pro",
    "messages": [
      { "role": "user", "content": "What is Docker?" },
      { "role": "assistant", "content": "Docker is a platform for developing, shipping, and running applications in containers." },
      { "role": "user", "content": "What is the difference between Docker and Kubernetes?" }
    ]
  }'
```

List the available models:

```bash
curl http://localhost:8000/v1/models
```

Response:
```json
{
  "object": "list",
  "data": [
    {
      "id": "gpt-4o",
      "object": "model",
      "created": 1234567890,
      "owned_by": "codex"
    },
    {
      "id": "gemini-2.5-pro",
      "object": "model",
      "created": 1234567890,
      "owned_by": "gemini"
    }
  ]
}
```

The API comes with pre-configured model IDs:
Codex Models:
- `gpt-5.2-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`
- `gpt-5.2`, `gpt-3.5-turbo`, `gpt-4`, `gpt-4o`
- `codex-default` (alias for the best available Codex model)

Gemini Models:
- `gemini-3-pro-preview`, `gemini-3-flash-preview`
- `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`
- `gemini-2.0-flash-exp`, `gemini-2.0-flash`
- `gemini-1.5-pro-latest`, `gemini-1.5-flash-latest`
- `gemini-default` (alias for the best available Gemini model)
When you set `OPENROUTER_API_KEY` (or `OR_API_KEY`), the API will:
- Fetch the OpenRouter model catalog every 24 hours
- Verify each model by making a test request
- Add working models to the catalog
- Include OpenRouter models in the `/v1/models` response
Environment Variables:
- `OPENROUTER_API_KEY` / `OR_API_KEY`: Your OpenRouter API key
- `OPENROUTER_MODELS_URL`: Custom models endpoint (default: `https://openrouter.ai/api/v1/models`)
- `OPENROUTER_REFRESH_MS`: Refresh interval in milliseconds (default: 24 hours)
- `OPENROUTER_VERIFY_TIMEOUT_MS`: Verification timeout (default: 20000ms)
- `OPENROUTER_VERIFY_CONCURRENCY`: Concurrent verifications (default: 3)
- `OPENROUTER_VERIFY_PROMPT`: Test prompt (default: "Reply with OK.")
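For example, to enable OpenRouter discovery with an hourly catalog refresh and a larger verification pool (illustrative values, reusing the run command from the quick start):

```bash
# Refresh the OpenRouter catalog every hour (3600000 ms), verify 5 models at a time
docker run -d \
  --name omni-cli \
  -p 2222:22 \
  -p 8000:8000 \
  -e OPENROUTER_API_KEY="sk-or-..." \
  -e OPENROUTER_REFRESH_MS=3600000 \
  -e OPENROUTER_VERIFY_CONCURRENCY=5 \
  ghcr.io/mabelisle/omni-cli:main
```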
The API server automatically uses the CLI tools' authentication. You don't need to set separate API keys for the passthrough server.
Supported Provider Environment Variables:
| Provider | Environment Variables |
|---|---|
| Anthropic | `ANTHROPIC_API_KEY` |
| OpenAI | `OPENAI_API_KEY`, `OPENAI_API_BASE` |
| Google Gemini | `GEMINI_API_KEY`, `GOOGLE_API_KEY` |
| Google VertexAI | `VERTEXAI_PROJECT`, `VERTEXAI_LOCATION` |
| OpenRouter | `OPENROUTER_API_KEY`, `OR_API_KEY` |
| Groq | `GROQ_API_KEY` |
| Cerebras | `CEREBRAS_API_KEY` |
| Huggingface | `HF_TOKEN`, `HUGGINGFACE_API_KEY` |
| AWS Bedrock | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, `AWS_PROFILE`, `AWS_BEARER_TOKEN_BEDROCK` |
| Azure OpenAI | `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_API_ENDPOINT`, `AZURE_OPENAI_API_VERSION`, `AZURE_API_KEY`, `AZURE_API_BASE`, `AZURE_API_VERSION` |
Setting Auth in Docker:

```bash
docker run -d \
  --name omni-cli \
  -p 2222:22 \
  -p 8000:8000 \
  -v $(pwd)/omni-data:/data \
  -v $(pwd)/omni-config:/config \
  -e PUID=$(id -u) \
  -e PGID=$(id -g) \
  -e OPENAI_API_KEY="sk-..." \
  -e GEMINI_API_KEY="AIza..." \
  -e OPENROUTER_API_KEY="sk-or-..." \
  ghcr.io/mabelisle/omni-cli:main
```

The API uses temporary workspaces for each request. You can specify a custom base directory:

```bash
node api.js -C /path/to/workspace
```

Adjust the maximum execution time for Codex/Gemini requests:
```bash
# CODEX_TIMEOUT_SECONDS=600 allows requests to run for up to 10 minutes
docker run -d \
  --name omni-cli \
  -e CODEX_TIMEOUT_SECONDS=600 \
  ghcr.io/mabelisle/omni-cli:main
```

Change the API server port:
```bash
docker run -d \
  --name omni-cli \
  -e CODEX_PASSTHROUGH_PORT=9000 \
  -p 9000:9000 \
  ghcr.io/mabelisle/omni-cli:main
```

Since the API is fully compatible with OpenAI's API structure, you can use it with any OpenAI-compatible client:
Python with the `openai` library (this example uses the legacy `openai<1.0` interface; on `openai>=1.0`, create an `OpenAI` client with `base_url` instead):

```python
import openai

# Point to your Omni-CLI API (legacy openai<1.0 style)
openai.api_base = "http://localhost:8000/v1"
openai.api_key = "unused"  # dummy value; the passthrough server relies on the CLIs' own auth

# Make a request
response = openai.ChatCompletion.create(
    model="codex-default",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```

curl with a custom base URL:

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gemini-default","messages":[{"role":"user","content":"Hi"}]}'
```

The API server logs are available at `/tmp/omni-codex-server.log` inside the container:
```bash
docker exec -it omni-cli cat /tmp/omni-codex-server.log
```

For real-time monitoring:

```bash
docker logs -f omni-cli
```

The API returns standard HTTP status codes:

- `200`: Success
- `400`: Invalid request (bad JSON, missing messages)
- `404`: Endpoint not found
- `500`: CLI execution failed
- `504`: Request timeout
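To see which of these you got without reading the body, print just the status code (assuming the default port mapping):

```bash
# Prints e.g. 200 when the server is healthy
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/v1/models
```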
Error responses include a `detail` field with the error message:

```json
{
  "detail": "Codex execution failed: API key not found"
}
```

Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project stands on the shoulders of giants. A huge thank you to the teams behind these amazing tools:
- Gemini: google-gemini/gemini-cli
- Codex: openai/codex
- Copilot: github/copilot-cli
- Claude: anthropics/claude-code
- Crush: @charmland/crush
Generated with ❤️ by Gemini