AgentGate

Vibe-code from anywhere. Orchestrate multiple AI agents on your projects.

Keep developing your project remotely with official AI CLIs (Claude, Copilot, Codex, Gemini) via Telegram or Slack — one Docker container per project, zero context switching. Run multiple specialised agents in the same workspace and let them collaborate automatically through built-in multi-agent orchestration.

✅ Works with Telegram | ✅ Works with Slack | ✅ Tested on Synology NAS

Tested AI backends: ✅ Anthropic Claude CLI · ✅ GitHub Copilot CLI · ✅ OpenAI Codex CLI · ✅ Google Gemini CLI

Not yet fully field-tested: ⚠️ Direct API (OpenAI · Anthropic · Ollama) — implemented and unit-tested, but not validated end-to-end in production. Feedback and bug reports are very welcome!


GateDocs (Gemini), GateCode (Codex) and GateSec (Copilot) introducing themselves to each other in a Slack thread after a single command

Multi-agent orchestration in action — a single docs command triggers GateDocs (Gemini), which automatically delegates sub-tasks to GateCode (Codex) and GateSec (Copilot). Each agent works independently and reports back — no human routing needed.


Features

Remote Development

  • πŸ“ Repo-aware β€” clones your project on startup; AI runs in that directory
  • πŸ–ŠοΈ Full CLI pass-through β€” /init, /plan, /fix and any prompt forwarded directly to the AI
  • 🐳 One container per project β€” fully isolated, all config via env vars
  • πŸ“± Develop from anywhere β€” phone, tablet, any device with Telegram or Slack

Multi-Agent Orchestration (Slack)

  • πŸ€– Pluggable AI backends β€” Claude, Copilot, Codex, Gemini, OpenAI, Anthropic, Ollama
  • πŸ”€ Agent delegation β€” agents route sub-tasks to teammates automatically via [DELEGATE] protocol
  • πŸ“’ Broadcast β€” <!here> sends to all agents simultaneously; each responds independently
  • πŸ›‘οΈ Secure orchestration β€” blocked dangerous commands, flood limits, one-hop maximum

Platform & Session

  • πŸ’¬ Multi-platform β€” Telegram or Slack (Socket Mode); choose via PLATFORM=telegram|slack
  • ⚑ Streaming responses β€” live message updates as the AI types
  • 🧠 Thinking indicator β€” "πŸ€– Thought for Xs" after every response; final answer posted as a new message
  • πŸ’¬ Conversation history β€” per-chat SQLite store, injected as context for multi-turn sessions
  • πŸ›‘ Request cancellation β€” stop an in-progress AI call with gate cancel (or the Slack "❌ Cancel" button)
  • πŸ”’ Secure β€” non-root container, allowlist by chat/user ID, confirmation for destructive shell commands

Quick Start

Minimal Docker Compose (Telegram + Copilot)

Create a docker-compose.yml:

services:
  bot:
    image: ghcr.io/agigante80/agentgate:latest
    restart: unless-stopped
    environment:
      - PLATFORM=telegram
      - TG_BOT_TOKEN=your-telegram-bot-token
      - TG_CHAT_ID=your-telegram-chat-id
      - GITHUB_REPO_TOKEN=github_pat_...
      - COPILOT_GITHUB_TOKEN=github_pat_...
      - GITHUB_REPO=owner/repo
      - AI_CLI=copilot
    volumes:
      - ./repo:/repo
      - ./data:/data
Then start it:

docker compose up -d

The bot sends a 🟢 Ready message to your chat when it's up.
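If the Ready message never arrives, the container logs usually pinpoint the problem, since startup validation fails fast on missing env vars. These are standard Docker Compose commands, run from the directory containing your docker-compose.yml (the service name `bot` matches the example above):

```shell
# Confirm the container is running
docker compose ps

# Follow the bot's logs; env validation errors appear at startup
docker compose logs -f bot
```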

From source

cp .env.example .env
# fill in TG_BOT_TOKEN, TG_CHAT_ID, GITHUB_REPO_TOKEN, GITHUB_REPO, AI_CLI
cp docker-compose.yml.example docker-compose.yml
docker compose up -d

Pre-built image tags

| Branch/event | GHCR | Docker Hub |
|---|---|---|
| Push to `develop` | `ghcr.io/agigante80/agentgate:develop` | `agigante80/agentgate:develop` |
| Push to `main` | `ghcr.io/agigante80/agentgate:latest` | `agigante80/agentgate:latest` |
| Version release | `ghcr.io/agigante80/agentgate:X.Y.Z` | `agigante80/agentgate:X.Y.Z` |

# Docker Hub (stable)
docker pull agigante80/agentgate:latest
docker pull agigante80/agentgate:develop   # latest dev build

# GHCR mirror
docker pull ghcr.io/agigante80/agentgate:latest
docker pull ghcr.io/agigante80/agentgate:develop   # latest dev build

Talking to the AI

Every message you send is forwarded to the AI — including /commands.

This means you can use your AI CLI's native commands directly from your phone or Slack:

| What you type | What happens |
|---|---|
| explain the auth module | sent to AI as a prompt |
| `/init` | forwarded to Copilot CLI as `/init` |
| `/plan add OAuth login` | forwarded to Copilot CLI as `/plan add OAuth login` |
| `/fix the login bug` | forwarded to Copilot CLI as `/fix the login bug` |
| `@copilot review this PR` | forwarded verbatim |

Slack note: Slack intercepts messages starting with / as native slash commands. Prefix with a space ( /init) to send them to the AI instead.

AgentGate utility commands use a configurable prefix (BOT_CMD_PREFIX, default gate) so they never collide with your AI CLI's own commands:

| Command | Description |
|---|---|
| `/gate run <cmd>` | Run a shell command in the repo |
| `/gate sync` | git pull |
| `/gate git` | git status + last 3 commits |
| `/gate status` | Show active AI requests |
| `/gate cancel` | Cancel the current in-progress AI request |
| `/gate clear` | Clear conversation history |
| `/gate restart` | Restart the AI backend session |
| `/gate info` | Repo, branch, AI backend, uptime |
| `/gate help` | Full command reference + version |

Destructive shell commands (push, merge, rm, force) require inline confirmation.


Startup Sequence

  1. Validate env vars — fail fast with a clear error
  2. Clone GITHUB_REPO → /repo (skipped if already present)
  3. Auto-install deps: package.json → npm install, pyproject.toml → pip install, go.mod → go mod download
  4. Initialize conversation history DB (/data/history.db)
  5. Start AI backend session
  6. Start Telegram/Slack bot → send 🟢 Ready message

Environment Variables

Copy .env.example — it documents every variable with examples.

Platform

| Variable | Default | Description |
|---|---|---|
| PLATFORM | telegram | telegram \| slack — selects the messaging platform |

Required β€” Telegram (PLATFORM=telegram)

| Variable | Description |
|---|---|
| TG_BOT_TOKEN | Bot token from @BotFather |
| TG_CHAT_ID | Your Telegram chat/group ID — bot ignores all others |

Required β€” Slack (PLATFORM=slack)

| Variable | Description |
|---|---|
| SLACK_BOT_TOKEN | Bot OAuth token (xoxb-…) from your Slack App |
| SLACK_APP_TOKEN | App-level token (xapp-…) for Socket Mode |

Shared / Always Required

| Variable | Description |
|---|---|
| GITHUB_REPO_TOKEN | PAT with repo scope — used for git clone/push |
| GITHUB_REPO | owner/repo format |

AI Backend

| Variable | Default | Description |
|---|---|---|
| AI_CLI | copilot | copilot \| codex \| claude \| api \| gemini |
| COPILOT_GITHUB_TOKEN | — | Fine-grained PAT with Copilot Requests permission (required for the copilot backend) |
| GEMINI_API_KEY | — | API key for the gemini backend (from AI Studio). Required when AI_CLI=gemini; no fallback. |
| AI_MODEL | — | Model for any backend (e.g. gpt-4o for Copilot, o3 for Codex, claude-3-5-sonnet-20241022 for API). Codex defaults to o3 when unset. ⚠️ Set this so the model name appears in the startup message and /gate info — if unset, only the backend name is shown (e.g. copilot instead of copilot (claude-sonnet-4.6)). |
| COPILOT_MODEL | — | Per-backend model for copilot; falls back to AI_MODEL when empty |
| AI_PROVIDER | — | For api: openai \| anthropic \| ollama \| openai-compat |
| OPENAI_API_KEY | — | Required when AI_CLI=codex or AI_CLI=api + AI_PROVIDER=openai. Standard OpenAI env var. |
| ANTHROPIC_API_KEY | — | Required when AI_CLI=api + AI_PROVIDER=anthropic. Optional when AI_CLI=claude — omit to use OAuth credentials (claude login). |
| CODEX_MODEL | — | Per-backend model for codex; falls back to AI_MODEL, then o3 |
| CLAUDE_MODEL | — | Per-backend model for claude; falls back to AI_MODEL when empty |
| AI_BASE_URL | — | Base URL for Ollama or compatible endpoints |
| AI_CLI_OPTS | — | Raw options passed verbatim to the CLI subprocess. Empty (default) = full-auto per backend (Copilot: --allow-all; Codex: --approval-mode full-auto; Gemini: --non-interactive). When set, replaces the defaults entirely — must include full-auto flags if still needed (e.g. --allow-all --allow-url github.com). Ignored (with a warning) when AI_CLI=api. |
| COPILOT_SKILLS_DIRS | — | Colon-separated paths to extra Copilot skills directories (mount via Docker volume, e.g. /skills) |
| SYSTEM_PROMPT_FILE | — | Path to a markdown file loaded as the AI system message (AI_CLI=api only). Must not be inside REPO_DIR; mount via a separate Docker volume. |
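For example, a Copilot backend that pins a model and overrides the default full-auto flags could be configured like this (a sketch; the token and model values are placeholders, and the flags shown are the ones documented for AI_CLI_OPTS above):

```shell
AI_CLI=copilot
COPILOT_GITHUB_TOKEN=github_pat_...
AI_MODEL=claude-sonnet-4.6
# Setting AI_CLI_OPTS replaces the per-backend defaults entirely,
# so re-add the full-auto flag if you still want it:
AI_CLI_OPTS=--allow-all --allow-url github.com
```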

Bot Behaviour

| Variable | Default | Description |
|---|---|---|
| BOT_CMD_PREFIX | gate | Prefix for utility commands |
| MAX_OUTPUT_CHARS | 3000 | Truncate/summarize output beyond this length |
| HISTORY_ENABLED | true | Set false to disable conversation history storage |
| HISTORY_TURNS | 10 | Number of past exchanges injected per AI prompt (stateless backends only); 0 = disable injection, history still stored |
| STREAM_RESPONSES | true | Set false to wait for the full response before sending |
| STREAM_THROTTLE_SECS | 1.0 | Seconds between streaming message edits |
| CONFIRM_DESTRUCTIVE | true | Set false to skip confirmation for destructive shell commands |
| SKIP_CONFIRM_KEYWORDS | — | Comma-separated keywords that bypass destructive confirmation (e.g. push,rm) |
| SHELL_ALLOWLIST | — | Comma-separated command prefixes permitted by gate run (e.g. git,ls,cat). Empty = allow all. Shell metacharacters are always rejected regardless of this setting. |
| SHELL_READONLY | false | When true, restrict gate run to a read-only command set (ls, cat, head, tail, grep, find, git read-only subcommands). Mutually exclusive with SHELL_ALLOWLIST; when both are set, SHELL_READONLY is checked first. |
| PREFIX_ONLY | false | When true, ignore messages that don't start with the bot prefix — useful in multi-agent Slack workspaces |
| SYSTEM_PROMPT | — | Optional text prepended to every AI prompt (inline). Use SYSTEM_PROMPT_FILE for file-based prompts. |
| SLACK_DELETE_THINKING | true | Delete the ⏳ placeholder after posting the final AI response (Slack only) |
| SLACK_THREAD_REPLIES | false | When true, post AI responses and bot output as thread replies to the triggering message (Slack only) |
| AI_TIMEOUT_SECS | 0 | Hard timeout for any AI backend, in seconds (0 = no timeout) |
| CANCEL_TIMEOUT_SECS | 5 | Seconds to wait for a graceful cancel before forcing backend close |
| ALLOW_SECRETS | false | When false (default), secrets are redacted from outgoing messages and git commit messages. Set true to allow secrets (dangerous). |
| THINKING_SLOW_THRESHOLD_SECS | 15 | Seconds of silence before the first "Still thinking…" update |
| THINKING_UPDATE_SECS | 30 | Seconds between subsequent elapsed-time updates |
| AI_TIMEOUT_WARN_SECS | 60 | Seconds before the hard timeout to include a cancellation warning |
| THINKING_SHOW_ELAPSED | true | When true, update the "🤖 Thinking…" placeholder to "🤖 Thought for Xs" after the AI responds; final response posted as a new message |
| IMAGE_TAG | — | Docker image tag; shown in the ready message. Set by docker-compose. |
| GIT_SHA | — | Short commit hash (7 chars). When set alongside a non-latest IMAGE_TAG, shown as v{ver}-dev-{sha} in the ready message. Auto-resolved from git if unset. |
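As an illustration, a locked-down configuration for a shared workspace might combine several of these options (values are examples, not recommendations):

```shell
SHELL_ALLOWLIST=git,ls,cat     # gate run accepts only these command prefixes
CONFIRM_DESTRUCTIVE=true       # keep confirmation for push/merge/rm/force
PREFIX_ONLY=true               # ignore messages without the bot prefix
AI_TIMEOUT_SECS=300            # hard-stop any AI call after 5 minutes
```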

Logging

| Variable | Default | Description |
|---|---|---|
| LOG_LEVEL | INFO | Log verbosity: DEBUG \| INFO \| WARNING \| ERROR |
| LOG_DIR | — | Directory for rotating log files (empty = stdout only). Logs rotate daily, kept 14 days, gzip compressed. Mount a host volume to persist across restarts. |

Full logging guide: docs/logging.md

# Example: persist logs on host
volumes:
  - ./logs:/data/logs
environment:
  - LOG_DIR=/data/logs

Voice Transcription

| Variable | Default | Description |
|---|---|---|
| WHISPER_PROVIDER | none | none \| openai — enables Telegram voice message transcription |
| WHISPER_API_KEY | — | Required (no fallback) when WHISPER_PROVIDER=openai |
| WHISPER_MODEL | whisper-1 | Whisper model name |

Audit

| Variable | Default | Description |
|---|---|---|
| AUDIT_ENABLED | true | Set false to disable audit logging to /data/audit.db |
| AUDIT_BACKEND | sqlite | Audit log backend: sqlite (default) or null (disabled in-process) |
| STORAGE_BACKEND | sqlite | Conversation history backend: sqlite (default) or memory (in-process, non-persistent) |

Optional

| Variable | Description |
|---|---|
| ALLOWED_USERS | Comma-separated Telegram user IDs (extra allowlist, Telegram only) |
| SLACK_CHANNEL_ID | Required for the 🟢 Ready message. Channel where the bot posts its startup notification and listens by default (e.g. C0123456789). Without this, the bot starts silently. |
| SLACK_ALLOWED_USERS | JSON array of Slack user IDs allowed to use the bot (e.g. ["U111","U222"]) |
| TRUSTED_AGENT_BOT_IDS | Slack bot IDs (or Name:prefix pairs) that bypass the normal user filter for agent-to-agent messaging (e.g. B012,GateCode:dev) |
| BRANCH | Git branch to clone (default: main) |
| REPO_HOST_PATH | Host directory to bind-mount as /repo — persists across rebuilds |
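Putting the Slack-related options together, a typical multi-agent-ready fragment might look like this (the IDs are the placeholder examples from the table above):

```shell
SLACK_CHANNEL_ID=C0123456789               # required for the 🟢 Ready message
SLACK_ALLOWED_USERS=["U111","U222"]        # JSON array of permitted user IDs
TRUSTED_AGENT_BOT_IDS=B012,GateCode:dev    # agent bots that bypass the user filter
```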

Slack Setup

Full step-by-step guide: docs/guides/slack-setup.md

Running multiple AI agents in one Slack workspace? See docs/guides/multi-agent-slack.md

Quick summary:

  1. api.slack.com/apps → Create New App → From scratch
  2. OAuth & Permissions → Bot Token Scopes: chat:write, channels:history, groups:history, im:history, mpim:history, files:read
  3. Socket Mode → Enable → Generate token (connections:write scope) → SLACK_APP_TOKEN (xapp-…)
  4. Event Subscriptions → Enable → Subscribe to bot events: message.channels, message.groups, message.im, message.mpim → Save
  5. OAuth & Permissions → Install to Workspace → copy Bot OAuth Token → SLACK_BOT_TOKEN (xoxb-…)
  6. In Slack: /invite @YourBotName in a channel → copy Channel ID → set as SLACK_CHANNEL_ID
  7. Set PLATFORM=slack in .env and restart

⚠️ SLACK_CHANNEL_ID is required for the bot to post its 🟢 Ready message on startup. Without it the bot connects silently and you won't know it's alive.

⚠️ After any scope or event change, reinstall the app (step 5) to get a fresh token.

⚠️ Do not use a / prefix in Slack — Slack intercepts /cmd as a native slash command. Use gate cmd instead (gate help, gate sync, etc.). If you need to send a message starting with /, prepend a space (e.g. " /init").


One Bot per Project

Each project is its own Docker Compose stack with its own .env:

projects/
├── vpn-sentinel/
│   ├── docker-compose.yml
│   └── .env            ← TG_BOT_TOKEN, GITHUB_REPO=owner/vpn-sentinel
└── my-api/
    ├── docker-compose.yml
    └── .env            ← TG_BOT_TOKEN, GITHUB_REPO=owner/my-api

Run them side by side — fully isolated, separate Telegram bots.
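A second project's .env then differs only in its tokens and repo, for example (placeholder values):

```shell
# projects/my-api/.env
PLATFORM=telegram
TG_BOT_TOKEN=your-second-bot-token   # each project needs its own bot from @BotFather
TG_CHAT_ID=your-chat-id
GITHUB_REPO_TOKEN=github_pat_...
GITHUB_REPO=owner/my-api
AI_CLI=copilot
```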


Multi-agent Slack

You can run several AgentGate containers in the same Slack workspace, each with a different AI backend and a unique prefix — for example:

| Agent | Prefix | Backend | Role |
|---|---|---|---|
| GateCode | dev | Codex CLI | Code + commits |
| GateSec | sec | Copilot CLI | Security review |
| GateDocs | docs | Gemini CLI | Documentation |

Users address each agent with its prefix (dev <message>, sec <message>, docs <message>). Agents can also delegate to each other via TRUSTED_AGENT_BOT_IDS.

📖 Full setup guide, .env examples, backend-specific file requirements, and advice on switching backends safely: docs/guides/multi-agent-slack.md


Persistent Repo (no re-cloning on restart)

Set REPO_HOST_PATH in .env to a directory on your machine:

REPO_HOST_PATH=/home/me/projects/VPNSentinel

Docker bind-mounts it to /repo. The bot clones once, reuses forever.
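One way this can be wired up in docker-compose.yml (a sketch; the ${REPO_HOST_PATH} interpolation is an assumption here, and the bundled docker-compose.yml.example may already handle it):

```yaml
services:
  bot:
    volumes:
      - ${REPO_HOST_PATH}:/repo   # bind-mount the host checkout; clone is skipped if present
      - ./data:/data
```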


AI Backends

GitHub Copilot CLI (default) ✅ Tested

Requires a fine-grained PAT with the Copilot Requests permission. Classic ghp_ tokens are not supported.

AI_CLI=copilot
COPILOT_GITHUB_TOKEN=github_pat_...

OpenAI Codex CLI ✅ Tested

AI_CLI=codex
OPENAI_API_KEY=sk-...
AI_MODEL=o3

Direct API — OpenAI / Anthropic / Ollama ⚠️ Not fully field-tested

This backend is implemented and unit-tested but has not been validated end-to-end in a live production deployment. If you try it, please open an issue with your findings β€” improvements and fixes are very welcome!

Anthropic:

AI_CLI=api
AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
AI_MODEL=claude-3-5-sonnet-20241022

OpenAI:

AI_CLI=api
AI_PROVIDER=openai
OPENAI_API_KEY=sk-...
AI_MODEL=gpt-4o

Ollama:

AI_CLI=api
AI_PROVIDER=ollama
AI_MODEL=llama3.2
AI_BASE_URL=http://host.docker.internal:11434
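Because the Ollama variant is not yet field-tested, it is worth confirming the server is reachable before debugging the bot itself. Ollama's standard HTTP API exposes /api/tags for listing local models (this assumes curl is available in the image, and `<container_name>` is your container's name):

```shell
# From the host: list models the Ollama server knows about
curl http://localhost:11434/api/tags

# From inside the container, using the same address the bot will use
docker exec <container_name> curl -s http://host.docker.internal:11434/api/tags
```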

Anthropic Claude CLI ✅ Tested

Supports two authentication modes:

API Key mode — billed to your Anthropic API account:

AI_CLI=claude
ANTHROPIC_API_KEY=sk-ant-...
AI_MODEL=claude-sonnet-4-6

OAuth mode (Pro / Max subscription) — omit ANTHROPIC_API_KEY and authenticate via claude login:

AI_CLI=claude
AI_MODEL=claude-sonnet-4-6
# No ANTHROPIC_API_KEY — uses OAuth credentials from claude login

See Claude CLI OAuth Setup below for Docker authentication instructions.

Google Gemini CLI ✅ Tested

Requires an API key from Google AI Studio.

AI_CLI=gemini
GEMINI_API_KEY=AIza...
AI_MODEL=gemini-2.5-pro  # optional — omit to use CLI default

Claude CLI OAuth Setup

Use this when you have a Claude Pro or Max subscription and want to skip the pay-per-use API key.

Option A: Authenticate inside the container (recommended)

  1. Start the container normally (without ANTHROPIC_API_KEY in your .env):

    docker compose up -d
  2. Exec into the running container:

    docker exec -it <container_name> claude
  3. When the CLI prompts you to log in and a browser cannot open, select the login URL displayed in the terminal and copy it (Ctrl+C or Cmd+C depending on your OS).

  4. Paste the URL into a browser on any device where you are logged into your claude.ai account.

  5. Click Authorize. The browser displays an authentication code. Copy it and paste it back into the Claude Code terminal session.

  6. Exit the Claude CLI (Ctrl+C or /exit). No container restart needed — use the /gate restart bot command from Telegram, then /gate init to reinitialize.

Note: OAuth credentials are stored in /root/.claude/ inside the container. They persist as long as the container's filesystem is intact. A docker compose down followed by up will lose them — use Option B for persistence across container recreations.

Option B: Mount host credentials

  1. On your host machine, run claude and complete the OAuth login (browser or clipboard method).

  2. Mount your ~/.claude/ directory into the container. Add to docker-compose.yml:

    volumes:
      - ~/.claude:/root/.claude:ro
  3. Start the container without ANTHROPIC_API_KEY:

    docker compose up -d

    The Claude CLI inside the container will use the mounted OAuth credentials.

Important

  • Ensure ANTHROPIC_API_KEY is unset in your .env file. If present, the CLI will use it (API billing) instead of OAuth credentials.
  • The /gate run bot command is not suitable for claude login β€” the login flow is interactive and requires a TTY.
  • OAuth tokens expire periodically. If the bot starts returning auth errors, re-run the login flow.

Security

  • Bot responds only to TG_CHAT_ID
  • ALLOWED_USERS adds per-user filtering inside the allowed chat
  • Destructive shell ops require confirmation
  • Non-root user inside container
  • Fine-grained GitHub token scoped to one repo

Upgrading from v0.x to v1.0

API key changes (v1.0.0 → v1.1.0)

The AI_API_KEY master-fallback and CODEX_API_KEY alias were removed. Each backend now has its own explicit key:

| Old env var | New env var | When |
|---|---|---|
| AI_API_KEY (used with AI_CLI=codex) | OPENAI_API_KEY | Always |
| AI_API_KEY (used with AI_CLI=api + AI_PROVIDER=openai) | OPENAI_API_KEY | Always |
| AI_API_KEY (used with AI_CLI=api + AI_PROVIDER=anthropic) | ANTHROPIC_API_KEY | Always |
| CODEX_API_KEY | OPENAI_API_KEY | Always |
| WHISPER_API_KEY relying on the AI_API_KEY fallback | WHISPER_API_KEY (set it explicitly) | If previously omitted |
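Concretely, a pre-migration Codex .env maps over like this:

```shell
# Before (accepted with a warning in v1.0.0, removed in v1.1.0)
AI_CLI=codex
AI_API_KEY=sk-...

# After
AI_CLI=codex
OPENAI_API_KEY=sk-...
```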

v1.0.0 behaviour: old vars are still accepted but a startup warning is emitted:

WARNING: AI_API_KEY is deprecated and will be removed in v1.1.0.
Set OPENAI_API_KEY, ANTHROPIC_API_KEY, or the backend-specific key instead.
See GitHub issue #24 (`AI Provider Explicit Validation`) for migration context.

Update your .env or docker-compose.yml before upgrading to v1.1.0.


License

MIT
