LLM-based agents ("claws") aggressively crawl internet-scale information during task execution, potentially collecting sensitive data. Restricting such automated crawling while preserving normal human browsing has become an urgent need.
```bash
git clone https://github.com/AI45Lab/escape-llm-claw.git
cd escape-llm-claw
bash runs/install.sh
bash runs/elc.sh
```

This starts a web service at http://127.0.0.1:5000. After deploying the service to a publicly accessible server, you can verify whether an LLM agent is able to crawl the website. For preliminary testing, we recommend using Ngrok to temporarily expose your local machine to external LLMs. For demonstration purposes, we provide an ELC-enabled web service that is publicly accessible at https://kinolee.pythonanywhere.com/.
ELC reduces the agent crawling success rate by 93% on average across major frontier models (all columns report success rates):
| Model | Direct Access | Crawl (no tools) | Crawl (with tools) |
|---|---|---|---|
| GPT-5.2-High | 89.0% | 0.0% | 2.0% |
| Gemini-3-Pro | 83.0% | 0.0% | 1.2% |
| DeepSeek-R1 | 100.0% | 0.0% | 20.5% |
| Qwen-Flash | 100.0% | 0.0% | 20.0% |
Key Insight: Agents crawl server-side rendered content to minimize latency, while humans consume client-side rendered content and tolerate delays.
ELC exploits this gap:
- Server-side AES-GCM encryption protects data from agent claws
- Client-side JavaScript decryption reveals content to human users
- Zero changes needed to your existing content workflow
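The encrypt-on-the-server, decrypt-in-the-browser split above can be sketched as a round trip. This is a minimal illustration using the third-party `cryptography` package's AES-GCM primitive, not ELC's actual code: the function names, nonce handling, and payload are assumptions, and the real system performs the decryption step in client-side JavaScript rather than Python.

```python
# Illustrative sketch of ELC's idea (NOT the real ELC implementation):
# the server ships AES-GCM ciphertext, so a crawler reading raw HTML sees
# nothing useful, while the client decrypts it for the human viewer.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def server_encrypt(plaintext: str, key: bytes) -> tuple[bytes, bytes]:
    """Server side: encrypt page content before serving it."""
    nonce = os.urandom(12)  # 96-bit nonce, the recommended size for GCM
    ciphertext = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
    return nonce, ciphertext

def client_decrypt(nonce: bytes, ciphertext: bytes, key: bytes) -> str:
    """Client side: what the in-browser JavaScript would do for a human."""
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

key = AESGCM.generate_key(bit_length=128)
nonce, ct = server_encrypt("sensitive article body", key)
assert b"sensitive" not in ct  # the raw payload reveals nothing to a crawler
print(client_decrypt(nonce, ct, key))
```

An agent that scrapes the server-rendered response gets only `nonce`/`ct`; only a client that executes the decryption step recovers the text.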
MIT License - see LICENSE for details.


