Conversation
…mple and update bootstrap script
Pull request overview
Adds Slack (Socket Mode) configuration to the DigitalOcean Terraform deployment, wiring Slack tokens through Terraform into the OpenClaw bootstrap config.
Changes:
- Introduces Terraform variables and example tfvars entries for Slack app/bot tokens.
- Passes Slack tokens into droplet `user_data` and conditionally enables the Slack extension in `openclaw.json`.
- Updates docs/examples to mention Slack and adds Slack env var plumbing.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| `terraform/digitalOcean/variables.tf` | Adds `slack_app_token` / `slack_bot_token` Terraform variables. |
| `terraform/digitalOcean/terraform.tfvars.example` | Documents optional Slack tokens and Telegram owner ID. |
| `terraform/digitalOcean/main.tf` | Passes Slack variables into `templatefile()` for bootstrap rendering. |
| `terraform/digitalOcean/bootstrap.sh` | Installs Slack deps, conditionally loads Slack extension, writes Slack tokens to env/config, plus additional bootstrap behavior changes. |
| `terraform/digitalOcean/.envrc` | Exports Slack tokens into `TF_VAR_*` for Terraform. |
| `README.md` | Mentions Slack support and Slack env vars; removes model alias table. |
| `.env.example` | Adds `SLACK_APP_TOKEN` / `SLACK_BOT_TOKEN` placeholders. |
| "entries": { | ||
| "telegram": { "enabled": true }, | ||
| "telegram": { "enabled": true }%{if slack_app_token != "" && slack_bot_token != ""}, | ||
| "slack": { "enabled": true }%{endif}, |
The Slack conditional in plugins.entries leaves a dangling comma when Slack tokens are empty: the comma after %{endif} is outside the conditional, so the rendered JSON becomes invalid (a standalone , line) when Slack is disabled. Move the comma inside the conditional or restructure so the JSON is valid in both branches (e.g., include the comma with the Slack entry or generate entries via jsonencode).
| "slack": { "enabled": true }%{endif}, | |
| "slack": { "enabled": true },%{endif} |
```
# ── Telegram (Optional) ─────────────────────────────────────
# Your Telegram numeric user ID (get from @userinfobot)
# Grants /model and other privileged commands. Leave empty ("") to disable
```
The comment for telegram_owner_id says leaving it empty disables Telegram, but the config still enables the Telegram channel unconditionally; an empty owner ID only removes the allowFrom restriction for privileged commands. Please update the comment to reflect the actual behavior (optional privileged access, not Telegram enable/disable).
```diff
-# Grants /model and other privileged commands. Leave empty ("") to disable
+# Grants /model and other privileged commands. Leave empty ("") to skip owner-based restriction
```
```
- Web search via Brave Search (falls back to DuckDuckGo)
- 8 switchable free LLM models via `/model <alias>`
- Secrets managed via `.env` — never committed
```
The README claims "8 switchable free LLM models", but the bootstrap config now defines many more models and the primary model is openrouter/openai/gpt-4o-mini (not marked :free). Please update the README to match the actual configured models/free-tier expectations, or adjust the config to keep the README statement true.
```diff
@@ -26,6 +28,8 @@ One-command deployment of an [OpenClaw](https://openclaw.bot) AI agent as a Tele
 - DigitalOcean account + API token
 - OpenRouter API key
 - Telegram bot token (from [@BotFather](https://t.me/BotFather))
```
Slack tokens are listed under "Prerequisites", but the Terraform variables default to empty strings and the bootstrap only enables Slack when both tokens are set. Consider marking these as optional prerequisites (or move them into an optional section) to avoid implying Slack is required for a Telegram-only deployment.
```diff
 - Telegram bot token (from [@BotFather](https://t.me/BotFather))
+
+### Optional (for Slack support)
+Only required if you want to enable Slack in addition to Telegram:
```
```
In Telegram, use `/model <alias>`:
```
The "Switching Models" section now ends with In Telegram, use /model <alias>: but no longer includes the alias/model list or any guidance, leaving the instructions incomplete. Either restore the table/list (updated to the new aliases) or revise the section so it's actionable.
```suggestion
In Telegram, use `/model <alias>` to switch the active model for your chat.
Replace `<alias>` with one of the model aliases configured for your bot deployment. For example:
- `/model llama`
- `/model mistral`
If an alias is not recognized, check the model alias configuration used by your deployment and then retry with a valid alias.
```
| "model": { | ||
| "primary": "openrouter/meta-llama/llama-3.3-70b-instruct:free", | ||
| "primary": "openrouter/openai/gpt-4o-mini", | ||
| "fallbacks": [ | ||
| "openrouter/anthropic/claude-haiku-4.5", | ||
| "openrouter/deepseek/deepseek-v3.2", | ||
| "openrouter/meta-llama/llama-3.3-70b-instruct:free", | ||
| "openrouter/auto" | ||
| ] | ||
| }, | ||
| "models": { | ||
| "openrouter/meta-llama/llama-3.3-70b-instruct:free": {"alias": "llama"}, | ||
| "openrouter/cognitivecomputations/dolphin-mistral-24b-venice-edition:free": {"alias": "uncensored"}, | ||
| "openrouter/google/gemma-4-31b-it:free": {"alias": "gemma"}, | ||
| "openrouter/nousresearch/hermes-3-llama-3.1-405b:free": {"alias": "hermes"}, | ||
| "openrouter/nvidia/nemotron-3-super-120b-a12b:free": {"alias": "nemotron"}, | ||
| "openrouter/openai/gpt-oss-120b:free": {"alias": "gpt"}, | ||
| "openrouter/qwen/qwen3-coder:free": {"alias": "coder"}, | ||
| "openrouter/auto": {"alias": "auto"} | ||
| "openrouter/anthropic/claude-opus-4.6": {"alias": "opus"}, | ||
| "openrouter/anthropic/claude-sonnet-4.6": {"alias": "sonnet"}, | ||
| "openrouter/anthropic/claude-haiku-4.5": {"alias": "haiku"}, | ||
| "openrouter/openai/gpt-5.4": {"alias": "gpt5"}, | ||
| "openrouter/openai/gpt-4o": {"alias": "gpt4o"}, | ||
| "openrouter/openai/gpt-4o-mini": {"alias": "mini"}, | ||
| "openrouter/google/gemini-2.5-pro": {"alias": "gemini-pro"}, | ||
| "openrouter/google/gemini-2.5-flash": {"alias": "flash"}, | ||
| "openrouter/deepseek/deepseek-v3.2": {"alias": "deepseek"}, | ||
| "openrouter/deepseek/deepseek-r1": {"alias": "r1"}, | ||
| "openrouter/mistralai/devstral-small": {"alias": "devstral"}, | ||
| "openrouter/meta-llama/llama-3.3-70b-instruct:free": {"alias": "llama"}, | ||
| "openrouter/nvidia/nemotron-3-super-120b-a12b:free": {"alias": "nemotron"}, | ||
| "openrouter/qwen/qwen3-coder:free": {"alias": "coder"}, | ||
| "openrouter/cognitivecomputations/dolphin-mistral-24b-venice-edition:free": {"alias": "uncensored"}, | ||
| "openrouter/auto": {"alias": "auto"} | ||
| }, | ||
| "compaction": { | ||
| "mode": "safeguard", | ||
| "reserveTokensFloor": 20000 | ||
| } | ||
| "compaction": { "mode": "safeguard", "reserveTokensFloor": 4000 } | ||
| } |
This PR is titled "Slack support", but bootstrap.sh also changes the default model, model alias set, compaction settings, onboarding behavior, and approval automation timing/logic. Consider splitting these unrelated operational changes into separate PRs (or updating the PR description/title) to keep review and rollback risk manageable.
```diff
 write_config() {
   cat > /root/.openclaw/openclaw.json << JSONEOF
 {
   "gateway": {
     "bind": "lan",
-    "auth": {
-      "mode": "token",
-      "token": "${openclaw_gateway_token}"
-    },
+    "auth": { "mode": "token", "token": "${openclaw_gateway_token}" },
     "mode": "local",
-    "remote": {
-      "token": "${openclaw_gateway_token}"
-    }
+    "remote": { "token": "${openclaw_gateway_token}" }
```
write_config() uses an unquoted heredoc (<< JSONEOF), so bash will still perform $VAR/command substitutions on the rendered JSON at runtime. If any injected token ever contains $, backticks, or $(...), the resulting openclaw.json can be corrupted. Use a quoted heredoc delimiter (e.g., <<'JSONEOF') to prevent shell expansion inside the JSON payload.
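A minimal repro of the expansion risk, using a hypothetical injected value and `/tmp` paths rather than the real bootstrap files. Because `templatefile()` renders `${openclaw_gateway_token}` into the script text before bash ever runs, any `$(...)` characters in that literal text get expanded a second time by an unquoted heredoc:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Unquoted delimiter: bash performs command substitution on the heredoc
# body, so "$(echo INJECTED)" executes and corrupts the JSON value.
cat > /tmp/unquoted.json << JSONEOF
{ "token": "tok-$(echo INJECTED)" }
JSONEOF

# Quoted delimiter: the body is written verbatim; the same characters
# survive into the file untouched.
cat > /tmp/quoted.json << 'JSONEOF'
{ "token": "tok-$(echo INJECTED)" }
JSONEOF

cat /tmp/unquoted.json   # { "token": "tok-INJECTED" }
cat /tmp/quoted.json     # { "token": "tok-$(echo INJECTED)" }
```

Since Terraform's template interpolation happens at render time regardless of heredoc quoting, switching to `<<'JSONEOF'` should not change the happy path, only remove the double-expansion hazard.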
No description provided.