archmcp - MCP Architectural Snapshot Server and Knowledge Graph
Go - Updated Mar 6, 2026
Claude cost-saving edition — cuts token consumption by roughly 90% with essentially identical results. An AI coding assistant supporting OpenAI and Chinese LLMs (DeepSeek/GLM/Qwen). Built on the Claude Code source.
Guardian Agent and Token Savings for Claude Code
ContextCore: An MCP server for Claude (or any AI tool) that enables massive token saving through hybrid search (BM25 + Embeddings)
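Hybrid search of the kind ContextCore describes typically fuses a lexical (BM25-style) score with an embedding-similarity score. A minimal sketch of that fusion step, assuming min-max normalization and a weighted sum (names and weights are illustrative, not ContextCore's actual API):

```python
def normalize(scores):
    """Min-max normalize a {doc_id: score} map to [0, 1]."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {d: (s - lo) / span for d, s in scores.items()}

def hybrid_rank(bm25_scores, embed_scores, alpha=0.5):
    """Blend two score maps; alpha weights the lexical side. Returns doc ids, best first."""
    b, e = normalize(bm25_scores), normalize(embed_scores)
    docs = set(b) | set(e)
    fused = {d: alpha * b.get(d, 0.0) + (1 - alpha) * e.get(d, 0.0) for d in docs}
    return sorted(fused, key=fused.get, reverse=True)

bm25 = {"doc_a": 12.0, "doc_b": 3.0, "doc_c": 1.0}
emb = {"doc_b": 0.91, "doc_c": 0.40}
print(hybrid_rank(bm25, emb))  # ranked doc ids, best first
```

Normalizing before blending matters because raw BM25 scores and cosine similarities live on very different scales.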
Turn any OpenAPI spec into a native CLI binary. No MCP, no bloat, no runtime dependencies, ONLY CLI.
Local-first Trello cache with Git-style sync — built and optimised for AI agent workflows.
P2P token recycling network for AI Agents. Put unused tokens into a piggy bank, then cash them out as help when you need it. 💸
Project-agnostic dual-memory MCP CLI for Claude Code, Cursor, and OpenCode (Qdrant tuned hybrid retrieval + structural memory hooks)
Auto model-switching plugin for Claude Code — routes prompts to haiku/sonnet/opus (or any custom tier) to save API tokens
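Tier routing like the plugin above describes can be as simple as a few heuristics with a cheap default. A hypothetical sketch (the length thresholds and keywords are assumptions, not the plugin's actual rules):

```python
# Route a prompt to a model tier by crude complexity signals,
# defaulting to the cheapest tier when nothing matches.
TIERS = ["haiku", "sonnet", "opus"]  # cheapest to most capable

def route(prompt: str) -> str:
    """Pick a tier from rough length/keyword heuristics (illustrative only)."""
    heavy = any(k in prompt.lower() for k in ("prove", "architecture", "refactor"))
    if heavy or len(prompt) > 2000:
        return "opus"
    if len(prompt) > 300:
        return "sonnet"
    return "haiku"

print(route("fix this typo"))                # short, simple -> haiku
print(route("refactor the payment module"))  # heavy keyword -> opus
```

Real routers usually add a fallback chain so a failed call on a cheap tier retries on the next one up.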
Stop hitting Claude Pro's weekly limit by Wednesday. Delegate routine & complex tasks from Claude Opus 4.7 to DeepSeek – slash commands included.
🎭 Talk to Claude in 100% emoji. A stylized communication mode for Claude Code.
🛡️ Stop AI agents from burning tokens. A logic-gating protocol for AI plugin calls.
Zero-dependency Q&A cache for OpenClaw - SQLite-based, no Redis/Embedding API needed. Reduce token consumption with keyword matching + edit distance.
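The SQLite-plus-edit-distance approach above can be sketched in a few lines: store Q&A pairs, and on a new question, serve a cached answer if some stored question is within a small Levenshtein distance. The schema and function names below are hypothetical, not OpenClaw's actual code:

```python
import sqlite3

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE qa (question TEXT, answer TEXT)")
db.execute("INSERT INTO qa VALUES (?, ?)",
           ("how do I reset the cache", "run `app cache clear`"))

def lookup(question: str, max_dist: int = 3):
    """Return a cached answer if a stored question is close enough, else None."""
    for q, a in db.execute("SELECT question, answer FROM qa"):
        if levenshtein(question.lower(), q.lower()) <= max_dist:
            return a  # cache hit: no LLM tokens spent
    return None  # cache miss: fall through to the model

print(lookup("How do I reset the cach"))  # near-duplicate question hits the cache
```

Keyword matching would typically run first as a cheap prefilter, with edit distance only checked on the shortlisted rows.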
An HTML-based alternative to Markdown. Low-token AI context workbook / low-token protocol for agent-readable, human-readable artifacts.
Auto-trigger graphify knowledge-graph queries on every LLM prompt + MCP shell delegation for Claude Code / Cowork agents.
Smart Code Library is a portable, single-file command manager built to optimize multi-language workflows (R, Python, SQL, Shell). It transforms snippets into dynamic templates with variable handling, enabling rapid script adaptation while reducing repetitive AI dependency and token consumption.
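The snippet-to-template idea above — store a command once with placeholders, expand it per use — can be sketched with the standard library (`string.Template` here is an assumption; the project presumably has its own engine):

```python
from string import Template

# One stored snippet with $-placeholders instead of hard-coded values.
snippets = {
    "pg_dump": Template("pg_dump -h $host -U $user $dbname > $outfile"),
}

def render(name: str, **vars) -> str:
    """Expand a stored snippet with the given variable values."""
    return snippets[name].substitute(**vars)

print(render("pg_dump", host="localhost", user="admin",
             dbname="shop", outfile="shop.sql"))
```

Because the template is filled locally, the AI assistant never needs to regenerate the boilerplate command, which is where the token savings come from.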
Smart Model Selection for AI Agents. Route tasks to the right model, slash costs up to 70%. Task classification, fallback chains, cost tracking — pure markdown, zero dependencies.
Local LLM co-processor for AI coding assistants (Claude Code, Codex, Aider). Offload token-heavy tasks to Ollama.
Teaches AI coding agents OS platform awareness (Windows vs Linux shell/path) and social restraint around git operations. Saves tokens by preventing wrong-command retries and premature commit prompts.
lowfat - slim your command output. strips noise, saves tokens.
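Slimming command output before it reaches a model usually means dropping blank lines, progress bars, and repeated warnings. A hypothetical sketch of that filtering (the patterns are assumptions, not lowfat's actual rules):

```python
import re

NOISE = [
    re.compile(r"^\s*$"),                   # blank lines
    re.compile(r"^\s*\[=+>?\s*\]\s*\d+%"),  # text progress bars
]

def slim(output: str) -> str:
    """Drop noise lines and deduplicate repeated warnings."""
    seen = set()
    kept = []
    for line in output.splitlines():
        if any(p.match(line) for p in NOISE):
            continue
        if line.startswith("warning:") and line in seen:
            continue  # keep only the first copy of a repeated warning
        seen.add(line)
        kept.append(line)
    return "\n".join(kept)

raw = "warning: deprecated API\n\n[====>   ] 42%\nwarning: deprecated API\nbuild ok"
print(slim(raw))  # warning kept once, bar and blank line dropped, "build ok" kept
```

Every dropped line is input tokens the agent never pays for, which compounds quickly over long tool-call transcripts.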