
open-compress/opencompress-openclaw




🗜️ OpenCompress
for OpenClaw

Your keys. Your models. Fewer tokens. Better quality.





OpenCompress is an OpenClaw plugin that optimizes LLM input and output through a multi-stage compression pipeline. It reduces token usage and improves response quality automatically, on every call, and works with any provider you already use: Anthropic, OpenAI, Google, OpenRouter, or any OpenAI-compatible API.



We don't sell tokens. We don't resell API access.

You use your own keys, your own models, and your own account, billed directly by Anthropic, OpenAI, or whichever provider you choose. We compress the traffic so you get charged less and your agent thinks more clearly.

Compression doesn't just save money. It removes the noise. Leaner prompts mean the model focuses on what matters. Shorter context, better answers, better code.

No vendor lock-in. Uninstall anytime. Everything goes back to exactly how it was.




How it works

              ┌──────────────────────────────┐
              │     Your OpenClaw Agent      │
              │                              │
              │   model: opencompress/auto   │
              └──────────────┬───────────────┘
                             │
                             ▼
              ┌──────────────────────────────┐
              │     Local Proxy (:8401)      │
              │                              │
              │   reads your provider key    │
              │   from OpenClaw config       │
              └──────────────┬───────────────┘
                             │
                             ▼
              ┌──────────────────────────────┐
              │     opencompress.ai          │
              │                              │
              │   compress → forward         │
              │   your key in header         │
              │   never stored               │
              └──────────────┬───────────────┘
                             │
                             ▼
              ┌──────────────────────────────┐
              │     Your LLM Provider        │
              │     (Anthropic / OpenAI)     │
              │                              │
              │   sees fewer tokens          │
              │   charges you less           │
              └──────────────────────────────┘
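The local-proxy step above can be sketched in a few lines. This is a minimal illustration, not the plugin's actual source: the `x-provider-key` header name and the `OpenClawConfig` shape are assumptions.

```typescript
// Sketch of the local proxy step: read the provider key from OpenClaw's
// config at runtime and attach it as a per-request header before forwarding
// to the compression endpoint. The key is never persisted by the proxy.
type OpenClawConfig = { providerKey: string };

function buildForwardHeaders(cfg: OpenClawConfig): Record<string, string> {
  return {
    "x-provider-key": cfg.providerKey, // travels only for this request
    "content-type": "application/json",
  };
}

const headers = buildForwardHeaders({ providerKey: "sk-example" });
```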



Install

openclaw plugins install @opencompress/openclaw
openclaw onboard opencompress
openclaw gateway restart

Select opencompress/auto as your model. Done.


Models

Every provider you already have gets a compressed mirror:

opencompress/auto                          → your default, compressed
opencompress/anthropic/claude-sonnet-4     → Claude Sonnet, compressed
opencompress/anthropic/claude-opus-4-6     → Claude Opus, compressed
opencompress/openai/gpt-5.4                → GPT-5.4, compressed

Switch back to the original model anytime to disable compression.
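Conceptually, each mirror id is just the original model id behind an `opencompress/` prefix. The helper below is illustrative only, not part of the plugin API:

```typescript
// Recover the original provider model id from a compressed mirror id by
// stripping the opencompress/ prefix; ids without the prefix pass through.
function originalModel(compressedId: string): string {
  const prefix = "opencompress/";
  return compressedId.startsWith(prefix)
    ? compressedId.slice(prefix.length)
    : compressedId;
}
```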


Commands

/compress-stats    view savings, balance, token metrics
/compress          show status and available models



What we believe

Your keys are yours.

We read your API key from OpenClaw's config at runtime, pass it in a per-request header, and discard it immediately. We never store, log, or cache your provider credentials. Ever.

Your prompts are yours.

Prompts are compressed in memory and forwarded. Nothing is stored, logged, or used for training. The only thing we record is token counts for billing: original versus compressed. That's it.
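A sketch of that billing record, and of everything it omits. The function names and the ~4-characters-per-token estimate are assumptions for illustration, not the real implementation:

```typescript
// The only data retained per request: token counts, original vs compressed.
// The prompt text itself is used in memory and then dropped.
type Usage = { originalTokens: number; compressedTokens: number };

// Rough heuristic (~4 chars/token) standing in for a real tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function recordUsage(original: string, compressed: string): Usage {
  return {
    originalTokens: estimateTokens(original),
    compressedTokens: estimateTokens(compressed),
  };
}
```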

Zero lock-in.

We don't replace your provider. We don't wrap your billing. If you uninstall, your agents keep working exactly as before. Same keys, same models, same everything.

Failure is invisible.

If our service goes down, your requests fall back directly to your provider. No errors, no downtime, no interruption. You just temporarily lose the compression savings.
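The failover behavior can be sketched as a try/catch around the compression endpoint, with the untouched request retried against the provider directly. The URLs and the `forward` helper here are illustrative, not the plugin's internals:

```typescript
// Invisible failover: try the compression endpoint first; on any failure,
// send the identical request straight to the provider.
async function send(
  body: unknown,
  forward: (url: string, body: unknown) => Promise<string>,
): Promise<string> {
  try {
    return await forward("https://opencompress.ai/v1/forward", body);
  } catch {
    // Compression service unreachable: fall back to the provider directly.
    return await forward("https://api.anthropic.com/v1/messages", body);
  }
}
```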




Supported providers

Anthropic    Claude Sonnet, Opus, Haiku    anthropic-messages
OpenAI       GPT-5.x, o-series             openai-completions
Google       Gemini                        openai-compat
OpenRouter   400+ models                   openai-completions
Any          OpenAI-compatible endpoint    openai-completions

Pricing

Free credit on signup. No credit card. Pay only for the tokens you save.

Dashboard →




opencompress.ai   ·   npm   ·   github

MIT License · OpenCompress


About

OpenCompress plugin for OpenClaw — automatic 5-layer prompt compression for any LLM
