The "Cloudflare for AI Agents" Is Here (MCP Security Proxy) #8
Harshit-J004
announced in
Announcements
An MCP security proxy is compelling because it shifts the problem toward mediation instead of assuming safer prompts will be enough. That architectural move matters a lot: once a separate layer can inspect, constrain, and deny risky actions, the whole system becomes easier to reason about.
-
The Model Context Protocol (MCP), created by Anthropic and since adopted by OpenAI and Google, is revolutionizing how LLMs talk to databases and external tools. But there's a massive problem: the protocol has no native security controls. It's entirely unguarded JSON-RPC traffic.
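To see the exposure concretely, here is what a single MCP tool call looks like on the wire: plain JSON-RPC 2.0, with no signature, token, or policy field anywhere in the message. A minimal sketch in Python (the tools/call method name is from the MCP spec; the tool and arguments are hypothetical):

```python
import json

# A typical MCP "tools/call" request as it travels between client and server.
# Note: plain JSON-RPC 2.0 -- no auth token, no signature, no policy metadata.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "drop_database",           # nothing stops a dangerous tool name
        "arguments": {"db": "production"}  # hypothetical arguments
    },
}

# This is the entire payload a server sees before executing the tool.
wire = json.dumps(request)
print(wire)
```

Nothing in the message distinguishes a benign read from a destructive write; any enforcement has to come from a layer outside the protocol.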
Today we are releasing ToolGuard v5.0.0 — the first transparent, runtime security proxy for the entire MCP ecosystem.
We built a 6-layer interception firewall that sits perfectly between ANY MCP client (Claude, Gemini, etc.) and ANY MCP server:
The 6-Layer Interceptor Pipeline
Among the layers: dangerous tools (e.g. execute_code) are flagged, and destructive calls (e.g. drop_database) are blocked and require human approval.

Universal Compatibility
The ToolGuard proxy operates strictly at the raw JSON-RPC 2.0 transport layer: zero vendor coupling, and no need to rewrite your agent. It seamlessly protects MCP servers written in Python, TypeScript, Go, or Rust.
Before (unguarded):
claude --mcp-server "python database_server.py"
After (guarded by ToolGuard):
toolguard proxy --upstream "python database_server.py" --policy security.yaml
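A policy file for the command above might look something like this; the keys and actions below are an illustrative sketch, not ToolGuard's documented schema:

```yaml
# security.yaml -- hypothetical policy sketch, not the documented schema
rules:
  - tool: drop_database
    action: deny
  - tool: execute_code
    action: require_approval   # pause the call and ask a human
  - tool: read_file
    action: allow
default: deny                  # anything unlisted is refused
```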
The 10-Framework Integration Milestone
Alongside the MCP proxy, ToolGuard crosses the 10-integration mark: we have added native, zero-config adapters for the OpenAI Agents SDK and the Google Agent Development Kit (ADK).
Terminal Elite Web Dashboard ("Obsidian")
ToolGuard now ships with a zero-dependency, real-time web dashboard. Run toolguard dashboard and instantly monitor every agent tool call as it happens.

Verified With Live LLM Traffic (Gemini 2.0 Flash)
We did not just build this in theory. We connected a live Google Gemini 2.0 Flash API to the proxy and ran a deep 6-layer stress test:
- delete_database and shutdown_server: instant deny.
- [SYSTEM OVERRIDE] prompt injection caught in nested args.
- DROP TABLE users caught from a LIVE Gemini function call.
- Benign read_file passed all 6 layers and was logged to the execution DAG.

Every layer. Every attack vector. Zero mocks. All verified against real LLM-generated payloads.
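One of those stress-test cases, catching a [SYSTEM OVERRIDE] injection buried in nested arguments, amounts to a recursive scan of the argument tree. A sketch of that check (the marker list and function name are illustrative, not ToolGuard internals):

```python
INJECTION_MARKERS = ("[SYSTEM OVERRIDE]", "ignore previous instructions")  # illustrative

def scan_args(value) -> bool:
    """Recursively walk tool-call arguments; True if an injection marker appears."""
    if isinstance(value, str):
        return any(m.lower() in value.lower() for m in INJECTION_MARKERS)
    if isinstance(value, dict):
        return any(scan_args(v) for v in value.values())
    if isinstance(value, (list, tuple)):
        return any(scan_args(v) for v in value)
    return False  # numbers, bools, None carry no text payload
```

The key point is recursion: an injection hidden three levels deep in a JSON argument is just as reachable as one in a top-level string.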
We didn't just build an adapter. We built security infrastructure. ToolGuard is now officially the execution-layer firewall for the intelligence boom.
Repository: https://github.com/Harshit-J004/toolguard
Documentation: https://github.com/Harshit-J004/toolguard#readme
Install:
pip install py-toolguard

This discussion was created from the release The "Cloudflare for AI Agents" Is Here (MCP Security Proxy).