🗜️ OpenCompress
for OpenClaw
OpenCompress is an OpenClaw plugin that optimizes LLM input and output using a state-of-the-art multi-stage compression pipeline. It reduces token usage and improves response quality, automatically, on every call. Works with any provider you already use: Anthropic, OpenAI, Google, OpenRouter, and any OpenAI-compatible API.
We don't sell tokens. We don't resell API access.
You use your own keys, your own models, your own account, and you're billed directly by Anthropic, OpenAI, or whichever provider you choose. We compress the traffic so you get charged less and your agent thinks more clearly.
Compression doesn't just save money. It removes the noise. Leaner prompts mean the model focuses on what matters. Shorter context, better answers, better code.
No vendor lock-in. Uninstall anytime. Everything goes back to exactly how it was.
┌──────────────────────────────┐
│ Your OpenClaw Agent │
│ │
│ model: opencompress/auto │
└──────────────┬───────────────┘
│
▼
┌──────────────────────────────┐
│ Local Proxy (:8401) │
│ │
│ reads your provider key │
│ from OpenClaw config │
└──────────────┬───────────────┘
│
▼
┌──────────────────────────────┐
│ opencompress.ai │
│ │
│ compress → forward │
│ your key in header │
│ never stored │
└──────────────┬───────────────┘
│
▼
┌──────────────────────────────┐
│ Your LLM Provider │
│ (Anthropic / OpenAI) │
│ │
│ sees fewer tokens │
│ charges you less │
└──────────────────────────────┘
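The request path above can be sketched as a plain OpenAI-compatible call aimed at the local proxy. This is an illustrative sketch only: the port (8401) comes from the diagram, but the endpoint path and payload shape are assumptions, not documented API.

```python
# Illustrative sketch of the request path above. The proxy port (8401)
# comes from the architecture diagram; the endpoint path and payload
# shape are assumed OpenAI-compatible, not documented OpenCompress API.
import json
import urllib.request

PROXY_URL = "http://localhost:8401/v1/chat/completions"  # assumed path

def build_request(prompt: str, model: str = "opencompress/auto"):
    """Build an OpenAI-compatible chat request aimed at the local proxy.

    The proxy reads your provider key from OpenClaw's config at runtime,
    so the client itself sends no credentials.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Summarize this repo in one line.")
# urllib.request.urlopen(req) would send it once the proxy is running
```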
```bash
openclaw plugins install @opencompress/openclaw
openclaw onboard opencompress
openclaw gateway restart
```

Select `opencompress/auto` as your model. Done.
Every provider you already have gets a compressed mirror:
```
opencompress/auto                        → your default, compressed
opencompress/anthropic/claude-sonnet-4   → Claude Sonnet, compressed
opencompress/anthropic/claude-opus-4-6   → Claude Opus, compressed
opencompress/openai/gpt-5.4              → GPT-5.4, compressed
```
Switch back to the original model anytime to disable compression.
| Command | Description |
|---|---|
| `/compress-stats` | View savings, balance, and token metrics |
| `/compress` | Show status and available models |
**Your keys are yours.** We read your API key from OpenClaw's config at runtime, pass it in a per-request header, and discard it immediately. We never store, log, or cache your provider credentials. Ever.

**Your prompts are yours.** Prompts are compressed in-memory and forwarded. Nothing is stored, logged, or used for training. The only thing we record is token counts for billing: original vs. compressed. That's it.

**Zero lock-in.** We don't replace your provider. We don't wrap your billing. If you uninstall, your agents keep working exactly as before. Same keys, same models, same everything.

**Failure is invisible.** If our service goes down, your requests fall back directly to your provider. No errors, no downtime, no interruption. You just temporarily lose the compression savings.
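The fallback behavior can be pictured as a try-first, fall-back-on-any-error wrapper. A minimal sketch, with hypothetical function names chosen for illustration only:

```python
# Minimal sketch of the invisible-failure behavior described above:
# try the compression path first, and on any failure send the same
# request straight to the provider. Names here are hypothetical.

def complete_with_fallback(prompt, via_opencompress, via_provider):
    """Return a completion, silently falling back to the provider."""
    try:
        return via_opencompress(prompt)  # compressed path: fewer tokens
    except Exception:
        # Service unreachable: same request, same key, direct to the
        # provider. The caller never sees an error, just no savings.
        return via_provider(prompt)
```

Usage: pass callables for both paths; if the first raises, the second answers.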
| Provider | Models | API format |
|---|---|---|
| Anthropic | Claude Sonnet, Opus, Haiku | `anthropic-messages` |
| OpenAI | GPT-5.x, o-series | `openai-completions` |
| Google | Gemini | `openai-compat` |
| OpenRouter | 400+ models | `openai-completions` |
| Any OpenAI-compatible endpoint | | `openai-completions` |
Free credit on signup. No credit card. Pay only for the tokens you save.
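The savings math is simple arithmetic. The compression ratio and per-token price below are illustrative assumptions, not published OpenCompress or provider figures:

```python
# Back-of-envelope savings math. The token counts and the $3/MTok
# price are illustrative assumptions, not published figures.

def tokens_saved(original_tokens: int, compressed_tokens: int) -> int:
    """Tokens the provider never sees, and never bills."""
    return original_tokens - compressed_tokens

def provider_cost(tokens: int, usd_per_mtok: float) -> float:
    """Cost of a token count at a given per-million-token price."""
    return tokens / 1_000_000 * usd_per_mtok

original, compressed = 120_000, 78_000      # e.g. one long agent context
saved = tokens_saved(original, compressed)  # 42,000 tokens avoided
savings_usd = provider_cost(saved, 3.00)    # at an assumed $3 / MTok
```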
opencompress.ai · npm · github
MIT License · OpenCompress