Vibe-code from anywhere. Orchestrate multiple AI agents on your projects.
Keep developing your project remotely with official AI CLIs (Claude, Copilot, Codex, Gemini) via Telegram or Slack – one Docker container per project, zero context switching. Run multiple specialised agents in the same workspace and let them collaborate automatically through built-in multi-agent orchestration.
✅ Works with Telegram | ✅ Works with Slack | ✅ Tested on Synology NAS

Tested AI backends: ✅ Anthropic Claude CLI · ✅ GitHub Copilot CLI · ✅ OpenAI Codex CLI · ✅ Google Gemini CLI
Not yet fully field-tested:
⚠️ Direct API (OpenAI · Anthropic · Ollama) – implemented and unit-tested, but not validated end-to-end in production. Feedback and bug reports are very welcome!
Multi-agent orchestration in action – a single `docs` command triggers GateDocs (Gemini), which automatically delegates sub-tasks to GateCode (Codex) and GateSec (Copilot). Each agent works independently and reports back – no human routing needed.
- **Repo-aware** – clones your project on startup; the AI runs in that directory
- **Full CLI pass-through** – `/init`, `/plan`, `/fix`, and any prompt are forwarded directly to the AI
- **One container per project** – fully isolated, all config via env vars
- **Develop from anywhere** – phone, tablet, any device with Telegram or Slack
- **Pluggable AI backends** – Claude, Copilot, Codex, Gemini, OpenAI, Anthropic, Ollama
- **Agent delegation** – agents route sub-tasks to teammates automatically via the `[DELEGATE]` protocol
- **Broadcast** – `<!here>` sends to all agents simultaneously; each responds independently
- **Secure orchestration** – blocked dangerous commands, flood limits, one-hop maximum
- **Multi-platform** – Telegram or Slack (Socket Mode); choose via `PLATFORM=telegram|slack`
- **Streaming responses** – live message updates as the AI types
- **Thinking indicator** – "🤔 Thought for Xs" after every response; the final answer is posted as a new message
- **Conversation history** – per-chat SQLite store, injected as context for multi-turn sessions
- **Request cancellation** – stop an in-progress AI call with `gate cancel` (or the Slack Cancel button)
- **Secure** – non-root container, allowlist by chat/user ID, confirmation for destructive shell commands
Create a `docker-compose.yml`:

```yaml
services:
  bot:
    image: ghcr.io/agigante80/agentgate:latest
    restart: unless-stopped
    environment:
      - PLATFORM=telegram
      - TG_BOT_TOKEN=your-telegram-bot-token
      - TG_CHAT_ID=your-telegram-chat-id
      - GITHUB_REPO_TOKEN=github_pat_...
      - COPILOT_GITHUB_TOKEN=github_pat_...
      - GITHUB_REPO=owner/repo
      - AI_CLI=copilot
    volumes:
      - ./repo:/repo
      - ./data:/data
```

Then start it:

```shell
docker compose up -d
```

The bot sends a 🟢 Ready message to your chat when it's up.
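If no Ready message arrives, the container logs are the first place to look – startup validation fails fast on missing or invalid env vars:

```shell
docker compose ps            # the bot container should be "running"
docker compose logs -f bot   # startup validation errors appear here
```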
```shell
cp .env.example .env
# fill in TG_BOT_TOKEN, TG_CHAT_ID, GITHUB_REPO_TOKEN, GITHUB_REPO, AI_CLI
cp docker-compose.yml.example docker-compose.yml
docker compose up -d
```

| Branch/event | GHCR | Docker Hub |
|---|---|---|
| Push to `develop` | `ghcr.io/agigante80/agentgate:develop` | `agigante80/agentgate:develop` |
| Push to `main` | `ghcr.io/agigante80/agentgate:latest` | `agigante80/agentgate:latest` |
| Version release | `ghcr.io/agigante80/agentgate:X.Y.Z` | `agigante80/agentgate:X.Y.Z` |
```shell
# Docker Hub (stable)
docker pull agigante80/agentgate:latest
docker pull agigante80/agentgate:develop        # latest dev build

# GHCR mirror
docker pull ghcr.io/agigante80/agentgate:latest
docker pull ghcr.io/agigante80/agentgate:develop   # latest dev build
```

Every message you send is forwarded to the AI – including `/commands`. This means you can use your AI CLI's native commands directly from your phone or Slack:
| What you type | What happens |
|---|---|
| `explain the auth module` | sent to the AI as a prompt |
| `/init` | forwarded to Copilot CLI as `/init` |
| `/plan add OAuth login` | forwarded to Copilot CLI as `/plan add OAuth login` |
| `/fix the login bug` | forwarded to Copilot CLI as `/fix the login bug` |
| `@copilot review this PR` | forwarded verbatim |
Slack note: Slack intercepts messages starting with `/` as native slash commands. Prefix with a space (` /init`) to send them to the AI instead.
AgentGate utility commands use a configurable prefix (`BOT_CMD_PREFIX`, default `gate`) so they never collide with your AI CLI's own commands:
| Command | Description |
|---|---|
| `/gate run <cmd>` | Run a shell command in the repo |
| `/gate sync` | `git pull` |
| `/gate git` | `git status` + last 3 commits |
| `/gate status` | Show active AI requests |
| `/gate cancel` | Cancel the current in-progress AI request |
| `/gate clear` | Clear conversation history |
| `/gate restart` | Restart the AI backend session |
| `/gate info` | Repo, branch, AI backend, uptime |
| `/gate help` | Full command reference + version |

Destructive shell commands (`push`, `merge`, `rm`, `force`) require inline confirmation.
- Validate env vars – fail fast with a clear error
- Clone `GITHUB_REPO` → `/repo` (skipped if already present)
- Auto-install deps: `package.json` → `npm install`, `pyproject.toml` → `pip install`, `go.mod` → `go mod download`
- Initialize the conversation history DB (`/data/history.db`)
- Start the AI backend session
- Start the Telegram/Slack bot → send 🟢 Ready message
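The dependency auto-install step above amounts to a file-presence check on the cloned repo. The sketch below is illustrative only – AgentGate's actual logic may differ in order or detail:

```shell
# Pick an install command based on which manifest file exists in the repo dir.
detect_installer() {
  if [ -f "$1/package.json" ]; then
    echo "npm install"
  elif [ -f "$1/pyproject.toml" ]; then
    echo "pip install"
  elif [ -f "$1/go.mod" ]; then
    echo "go mod download"
  else
    echo "none"
  fi
}

d=$(mktemp -d)
touch "$d/package.json"
detect_installer "$d"   # prints: npm install
```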
Copy `.env.example` – it documents every variable with examples.
| Variable | Default | Description |
|---|---|---|
| `PLATFORM` | `telegram` | `telegram` \| `slack` – selects the messaging platform |
| Variable | Description |
|---|---|
| `TG_BOT_TOKEN` | Bot token from @BotFather |
| `TG_CHAT_ID` | Your Telegram chat/group ID – the bot ignores all others |
| Variable | Description |
|---|---|
| `SLACK_BOT_TOKEN` | Bot OAuth token (`xoxb-…`) from your Slack App |
| `SLACK_APP_TOKEN` | App-level token (`xapp-…`) for Socket Mode |
| Variable | Description |
|---|---|
| `GITHUB_REPO_TOKEN` | PAT with `repo` scope – used for git clone/push |
| `GITHUB_REPO` | `owner/repo` format |
| Variable | Default | Description |
|---|---|---|
| `AI_CLI` | `copilot` | `copilot` \| `codex` \| `claude` \| `api` \| `gemini` |
| `COPILOT_GITHUB_TOKEN` | – | Fine-grained PAT with the Copilot Requests permission (required for the `copilot` backend) |
| `GEMINI_API_KEY` | – | API key for the `gemini` backend (from AI Studio). Required when `AI_CLI=gemini`; no fallback. |
| `AI_MODEL` | – | Model for any backend (e.g. `gpt-4o` for Copilot, `o3` for Codex, `claude-3-5-sonnet-20241022` for API). Codex defaults to `o3` when unset. In `/gate info`, if unset only the backend name is shown (e.g. `copilot` instead of `copilot (claude-sonnet-4.6)`). |
| `COPILOT_MODEL` | – | Per-backend model for `copilot`; falls back to `AI_MODEL` when empty |
| `AI_PROVIDER` | – | For `api`: `openai` \| `anthropic` \| `ollama` \| `openai-compat` |
| `OPENAI_API_KEY` | – | Required when `AI_CLI=codex`, or `AI_CLI=api` + `AI_PROVIDER=openai`. Standard OpenAI env var. |
| `ANTHROPIC_API_KEY` | – | Required when `AI_CLI=api` + `AI_PROVIDER=anthropic`. Optional when `AI_CLI=claude` – omit to use OAuth credentials (`claude login`). |
| `CODEX_MODEL` | – | Per-backend model for `codex`; falls back to `AI_MODEL`, then `o3` |
| `CLAUDE_MODEL` | – | Per-backend model for `claude`; falls back to `AI_MODEL` when empty |
| `AI_BASE_URL` | – | Base URL for Ollama or compatible endpoints |
| `AI_CLI_OPTS` | – | Raw options passed verbatim to the CLI subprocess. Empty (default) = full-auto per backend (Copilot: `--allow-all`; Codex: `--approval-mode full-auto`; Gemini: `--non-interactive`). When set, replaces the defaults entirely – must include full-auto flags if still needed (e.g. `--allow-all --allow-url github.com`). Ignored (with a warning) when `AI_CLI=api`. |
| `COPILOT_SKILLS_DIRS` | – | Colon-separated paths to extra Copilot skills directories (mount via Docker volume, e.g. `/skills`) |
| `SYSTEM_PROMPT_FILE` | – | Path to a markdown file loaded as the AI system message (`AI_CLI=api` only). Must not be inside `REPO_DIR`; mount via a separate Docker volume. |
| Variable | Default | Description |
|---|---|---|
| `BOT_CMD_PREFIX` | `gate` | Prefix for utility commands |
| `MAX_OUTPUT_CHARS` | `3000` | Truncate/summarize output beyond this length |
| `HISTORY_ENABLED` | `true` | Set `false` to disable conversation history storage |
| `HISTORY_TURNS` | `10` | Number of past exchanges injected per AI prompt (stateless backends only); `0` = disable injection, history still stored |
| `STREAM_RESPONSES` | `true` | Set `false` to wait for the full response before sending |
| `STREAM_THROTTLE_SECS` | `1.0` | Seconds between streaming message edits |
| `CONFIRM_DESTRUCTIVE` | `true` | Set `false` to skip confirmation for destructive shell commands |
| `SKIP_CONFIRM_KEYWORDS` | – | Comma-separated keywords that bypass destructive confirmation (e.g. `push,rm`) |
| `SHELL_ALLOWLIST` | – | Comma-separated command prefixes permitted by `gate run` (e.g. `git,ls,cat`). Empty = allow all. Shell metacharacters are always rejected regardless of this setting. |
| `SHELL_READONLY` | `false` | When `true`, restrict `gate run` to a read-only command set (`ls`, `cat`, `head`, `tail`, `grep`, `find`, git read-only subcommands). Mutually exclusive with `SHELL_ALLOWLIST`; when both are set, `SHELL_READONLY` is checked first. |
| `PREFIX_ONLY` | `false` | When `true`, ignore messages that don't start with the bot prefix – useful in multi-agent Slack workspaces |
| `SYSTEM_PROMPT` | – | Optional text prepended to every AI prompt (inline). Use `SYSTEM_PROMPT_FILE` for file-based prompts. |
| `SLACK_DELETE_THINKING` | `true` | Delete the ⏳ placeholder after posting the final AI response (Slack only) |
| `SLACK_THREAD_REPLIES` | `false` | When `true`, post AI responses and bot output as thread replies to the triggering message (Slack only) |
| `AI_TIMEOUT_SECS` | `0` | Hard timeout for any AI backend, in seconds (`0` = no timeout) |
| `CANCEL_TIMEOUT_SECS` | `5` | Seconds to wait for a graceful cancel before forcing backend close |
| `ALLOW_SECRETS` | `false` | When `false` (default), secrets are redacted from outgoing messages and git commit messages. Set `true` to allow secrets (dangerous). |
| `THINKING_SLOW_THRESHOLD_SECS` | `15` | Seconds of silence before the first "Still thinking…" update |
| `THINKING_UPDATE_SECS` | `30` | Seconds between subsequent elapsed-time updates |
| `AI_TIMEOUT_WARN_SECS` | `60` | Seconds before the hard timeout to include a cancellation warning |
| `THINKING_SHOW_ELAPSED` | `true` | When `true`, update the "🤔 Thinking…" placeholder to "🤔 Thought for Xs" after the AI responds; the final response is posted as a new message |
| `IMAGE_TAG` | – | Docker image tag; shown in the ready message. Set by docker-compose. |
| `GIT_SHA` | – | Short commit hash (7 chars). When set alongside a non-`latest` `IMAGE_TAG`, shown as `v{ver}-dev-{sha}` in the ready message. Auto-resolved from git if unset. |
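The `SHELL_ALLOWLIST` semantics described above (prefix match on the command's first word; shell metacharacters always rejected) can be illustrated with a small POSIX sh sketch. This is not AgentGate's actual code, just the documented behaviour made concrete:

```shell
ALLOWLIST="git,ls,cat"

# Return "allowed" only when the first word of the command is in ALLOWLIST
# and the command contains no shell metacharacters.
check_cmd() {
  cmd="$1"
  case "$cmd" in
    *';'*|*'|'*|*'&'*|*'$'*|*'`'*)
      echo "rejected (metacharacter)"; return 0 ;;
  esac
  first_word="${cmd%% *}"
  old_ifs=$IFS; IFS=','
  for prefix in $ALLOWLIST; do
    if [ "$first_word" = "$prefix" ]; then
      IFS=$old_ifs; echo "allowed"; return 0
    fi
  done
  IFS=$old_ifs
  echo "rejected (not in allowlist)"
}

check_cmd "git status"         # prints: allowed
check_cmd "rm -rf /"           # prints: rejected (not in allowlist)
check_cmd "ls; curl evil.sh"   # prints: rejected (metacharacter)
```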
| Variable | Default | Description |
|---|---|---|
| `LOG_LEVEL` | `INFO` | Log verbosity: `DEBUG` \| `INFO` \| `WARNING` \| `ERROR` |
| `LOG_DIR` | – | Directory for rotating log files (empty = stdout only). Logs rotate daily, are kept 14 days, and are gzip-compressed. Mount a host volume to persist across restarts. |
Full logging guide: docs/logging.md
```yaml
# Example: persist logs on host
volumes:
  - ./logs:/data/logs
environment:
  - LOG_DIR=/data/logs
```

| Variable | Default | Description |
|---|---|---|
| `WHISPER_PROVIDER` | `none` | `none` \| `openai` – enables Telegram voice message transcription |
| `WHISPER_API_KEY` | – | Required (no fallback) when `WHISPER_PROVIDER=openai` |
| `WHISPER_MODEL` | `whisper-1` | Whisper model name |
| Variable | Default | Description |
|---|---|---|
| `AUDIT_ENABLED` | `true` | Set `false` to disable audit logging to `/data/audit.db` |
| `AUDIT_BACKEND` | `sqlite` | Audit log backend: `sqlite` (default) or `null` (disabled in-process) |
| `STORAGE_BACKEND` | `sqlite` | Conversation history backend: `sqlite` (default) or `memory` (in-process, non-persistent) |
| Variable | Description |
|---|---|
| `ALLOWED_USERS` | Comma-separated Telegram user IDs (extra allowlist, Telegram only) |
| `SLACK_CHANNEL_ID` | Required for the 🟢 Ready message. Channel where the bot posts its startup notification and listens by default (e.g. `C0123456789`). Without this, the bot starts silently. |
| `SLACK_ALLOWED_USERS` | JSON array of Slack user IDs allowed to use the bot (e.g. `["U111","U222"]`) |
| `TRUSTED_AGENT_BOT_IDS` | Slack bot IDs (or `Name:prefix` pairs) that bypass the normal user filter for agent-to-agent messaging (e.g. `B012,GateCode:dev`) |
| `BRANCH` | Git branch to clone (default: `main`) |
| `REPO_HOST_PATH` | Host directory to bind-mount as `/repo` – persists across rebuilds |
Full step-by-step guide: docs/guides/slack-setup.md

Running multiple AI agents in one Slack workspace? See docs/guides/multi-agent-slack.md

Quick summary:

1. api.slack.com/apps → Create New App → From scratch
2. OAuth & Permissions → Bot Token Scopes: `chat:write`, `channels:history`, `groups:history`, `im:history`, `mpim:history`, `files:read`
3. Socket Mode → Enable → Generate token (`connections:write` scope) → `SLACK_APP_TOKEN` (`xapp-…`)
4. Event Subscriptions → Enable → Subscribe to bot events: `message.channels`, `message.groups`, `message.im`, `message.mpim` → Save
5. OAuth & Permissions → Install to Workspace → copy Bot OAuth Token → `SLACK_BOT_TOKEN` (`xoxb-…`)
6. In Slack: `/invite @YourBotName` in a channel → copy the Channel ID → set it as `SLACK_CHANNEL_ID`
7. Set `PLATFORM=slack` in `.env` and restart

⚠️ `SLACK_CHANNEL_ID` is required for the bot to post its 🟢 Ready message on startup. Without it the bot connects silently and you won't know it's alive.

⚠️ After any scope or event change, reinstall the app (step 5) to get a fresh token.

⚠️ Do not use the `/` prefix in Slack – Slack intercepts `/cmd` as a native slash command. Use `gate cmd` instead (`gate help`, `gate sync`, etc.). If you need to send a message starting with `/`, prepend a space: ` /init`.
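Putting the steps together, a minimal Slack `.env` might look like the following (all values are placeholders):

```
PLATFORM=slack
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...
SLACK_CHANNEL_ID=C0123456789
GITHUB_REPO_TOKEN=github_pat_...
GITHUB_REPO=owner/repo
AI_CLI=copilot
COPILOT_GITHUB_TOKEN=github_pat_...
```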
Each project is its own Docker Compose stack with its own `.env`:

```
projects/
├── vpn-sentinel/
│   ├── docker-compose.yml
│   └── .env   → TG_BOT_TOKEN, GITHUB_REPO=owner/vpn-sentinel
└── my-api/
    ├── docker-compose.yml
    └── .env   → TG_BOT_TOKEN, GITHUB_REPO=owner/my-api
```

Run them side by side – fully isolated, separate Telegram bots.
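Each stack is started independently from its own directory:

```shell
(cd projects/vpn-sentinel && docker compose up -d)
(cd projects/my-api && docker compose up -d)
```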
You can run several AgentGate containers in the same Slack workspace, each with a different AI backend and a unique prefix – for example:

| Agent | Prefix | Backend | Role |
|---|---|---|---|
| GateCode | `dev` | Codex CLI | Code + commits |
| GateSec | `sec` | Copilot CLI | Security review |
| GateDocs | `docs` | Gemini CLI | Documentation |
Users address each agent with its prefix (`dev <message>`, `sec <message>`, `docs <message>`). Agents can also delegate to each other via `TRUSTED_AGENT_BOT_IDS`.
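A per-agent `.env` for such a setup might look like this sketch – all values are illustrative (see docs/guides/multi-agent-slack.md for the full variable set):

```
# GateCode – illustrative values only
PLATFORM=slack
AI_CLI=codex
BOT_CMD_PREFIX=dev
PREFIX_ONLY=true
TRUSTED_AGENT_BOT_IDS=GateSec:sec,GateDocs:docs
```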
Full setup guide, `.env` examples, backend-specific file requirements, and advice on switching backends safely: docs/guides/multi-agent-slack.md
Set `REPO_HOST_PATH` in `.env` to a directory on your machine:

```
REPO_HOST_PATH=/home/me/projects/VPNSentinel
```

Docker bind-mounts it to `/repo`. The bot clones once, reuses forever.
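One common way this is wired is Compose variable interpolation – this excerpt is an assumption about the shipped compose file, so check `docker-compose.yml.example` for the authoritative version:

```yaml
# Hypothetical excerpt: Compose reads REPO_HOST_PATH from .env
# and bind-mounts that host directory at /repo in the container.
services:
  bot:
    volumes:
      - ${REPO_HOST_PATH}:/repo
```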
Requires a fine-grained PAT with the Copilot Requests permission. Classic `ghp_` tokens are not supported.

```
AI_CLI=copilot
COPILOT_GITHUB_TOKEN=github_pat_...
```

```
AI_CLI=codex
OPENAI_API_KEY=sk-...
AI_MODEL=o3
```

This backend is implemented and unit-tested but has not been validated end-to-end in a live production deployment. If you try it, please open an issue with your findings – improvements and fixes are very welcome!

```
AI_CLI=api
AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
AI_MODEL=claude-3-5-sonnet-20241022
```

```
AI_CLI=api
AI_PROVIDER=openai
OPENAI_API_KEY=sk-...
AI_MODEL=gpt-4o
```

```
AI_CLI=api
AI_PROVIDER=ollama
AI_MODEL=llama3.2
AI_BASE_URL=http://host.docker.internal:11434
```

Supports two authentication modes.

API Key mode – billed to your Anthropic API account:

```
AI_CLI=claude
ANTHROPIC_API_KEY=sk-ant-...
AI_MODEL=claude-sonnet-4-6
```

OAuth mode (Pro / Max subscription) – omit `ANTHROPIC_API_KEY` and authenticate via `claude login`:

```
AI_CLI=claude
AI_MODEL=claude-sonnet-4-6
# No ANTHROPIC_API_KEY – uses OAuth credentials from claude login
```

See Claude CLI OAuth Setup below for Docker authentication instructions.

Requires an API key from Google AI Studio.

```
AI_CLI=gemini
GEMINI_API_KEY=AIza...
AI_MODEL=gemini-2.5-pro   # optional – omit to use the CLI default
```

Use this when you have a Claude Pro or Max subscription and want to skip the pay-per-use API key.
1. Start the container normally (without `ANTHROPIC_API_KEY` in your `.env`):

   ```shell
   docker compose up -d
   ```

2. Exec into the running container:

   ```shell
   docker exec -it <container_name> claude
   ```

3. When the CLI prompts you to log in and a browser cannot open, select the login URL displayed in the terminal and copy it (`Ctrl+C` or `Cmd+C` depending on your OS).

4. Paste the URL into a browser on any device where you are logged into your claude.ai account.

5. Click Authorize. The browser displays an authentication code. Copy it and paste it back into the Claude Code terminal session.

6. Exit the Claude CLI (`Ctrl+C` or `/exit`). No container restart needed – use the `/gate restart` bot command from Telegram, then `/gate init` to reinitialize.

Note: OAuth credentials are stored in `/root/.claude/` inside the container. They persist as long as the container's filesystem is intact. A `docker compose down` followed by `up` will lose them – use Option B for persistence across container recreations.
1. On your host machine, run `claude` and complete the OAuth login (browser or clipboard method).

2. Mount your `~/.claude/` directory into the container. Add to `docker-compose.yml`:

   ```yaml
   volumes:
     - ~/.claude:/root/.claude:ro
   ```

3. Start the container without `ANTHROPIC_API_KEY`:

   ```shell
   docker compose up -d
   ```

The Claude CLI inside the container will use the mounted OAuth credentials.
- Ensure `ANTHROPIC_API_KEY` is unset in your `.env` file. If present, the CLI will use it (API billing) instead of OAuth credentials.
- The `/gate run` bot command is not suitable for `claude login` – the login flow is interactive and requires a TTY.
- OAuth tokens expire periodically. If the bot starts returning auth errors, re-run the login flow.
- The bot responds only to `TG_CHAT_ID`
- `ALLOWED_USERS` adds per-user filtering inside the allowed chat
- Destructive shell ops require confirmation
- Non-root user inside the container
- Fine-grained GitHub token scoped to one repo
The `AI_API_KEY` master-fallback and the `CODEX_API_KEY` alias were removed. Each backend now has its own explicit key:
| Old env var | New env var | When |
|---|---|---|
| `AI_API_KEY` (used with `AI_CLI=codex`) | `OPENAI_API_KEY` | Always |
| `AI_API_KEY` (used with `AI_CLI=api` + `AI_PROVIDER=openai`) | `OPENAI_API_KEY` | Always |
| `AI_API_KEY` (used with `AI_CLI=api` + `AI_PROVIDER=anthropic`) | `ANTHROPIC_API_KEY` | Always |
| `CODEX_API_KEY` | `OPENAI_API_KEY` | Always |
| `WHISPER_API_KEY` relying on the `AI_API_KEY` fallback | `WHISPER_API_KEY` (set it explicitly) | If previously omitted |
v1.0.0 behaviour: the old vars are still accepted, but a startup warning is emitted:

```
WARNING: AI_API_KEY is deprecated and will be removed in v1.1.0.
Set OPENAI_API_KEY, ANTHROPIC_API_KEY, or the backend-specific key instead.
```

See GitHub issue #24 (`AI Provider Explicit Validation`) for migration context. Update your `.env` or `docker-compose.yml` before upgrading to v1.1.0.
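A quick grep finds any remaining uses of the removed variables. The temp file below stands in for your real `.env` / `docker-compose.yml` – point the same command at those files:

```shell
# Create a stand-in .env containing a deprecated variable, then scan it.
f=$(mktemp)
printf 'AI_API_KEY=sk-old\nAI_CLI=codex\n' > "$f"
grep -nE 'AI_API_KEY|CODEX_API_KEY' "$f"   # prints: 1:AI_API_KEY=sk-old
```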
MIT
