Universal AI Agent Safety Control - Emergency kill switch for ANY AI agent
Runtime Fence is a comprehensive safety ecosystem for ALL AI agents. Whether it's a coding assistant, autonomous agent, trading bot, or data analyst - instantly stop any agent, block dangerous actions, and monitor everything in real-time.
Works with: LangChain • AutoGPT • Copilot • Cursor • Aider • CrewAI • BabyAGI • Custom Agents • Trading Bots • Email Bots • Web Scrapers • Data Analysts
🌐 Live Demo: runtimefence.com
- 🔴 Kill Switch - Instantly stop any AI agent with one click
- 🚫 Action Blocking - Define what actions agents cannot take
- 🛡️ Target Protection - Block access to sensitive files, APIs, or systems
- 💰 Spending Limits - Control how much an agent can spend
- 📊 Risk Scoring - Automatic risk assessment of every action
- 📝 Audit Logging - Complete trail of all agent activity
- 📧 Email/SMS Alerts - Get notified of suspicious behavior
- 🖥️ Cross-Platform - Windows, macOS, and Linux support
- 🔒 Runtime Fence - Real-time action validation and monitoring
- ☠️ Kill Switch - SIGTERM → SIGKILL emergency termination
- 📝 Audit Logging - Complete cryptographic audit trail
- ⚡ Sub-Second Response - Kill signals under 100ms
- 🛡️ Offline Mode - Works without external dependencies
- 🎯 Risk Scoring - Automatic threat assessment (0-100)
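The SIGTERM → SIGKILL escalation can be sketched as follows. This is an illustrative POSIX helper, not the package's actual implementation; the grace period is an assumption (the project's stated target is sub-100ms signal delivery):

```python
import subprocess
import sys
import time

def terminate_with_escalation(proc: subprocess.Popen, grace_seconds: float = 1.0) -> str:
    """Send SIGTERM first; escalate to SIGKILL if the process ignores it."""
    proc.terminate()  # SIGTERM: lets the agent flush logs and exit cleanly
    try:
        proc.wait(timeout=grace_seconds)
        return "terminated"
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL: cannot be caught or ignored
        proc.wait()
        return "killed"

# A child that ignores SIGTERM, forcing the escalation path.
stubborn = subprocess.Popen([
    sys.executable, "-c",
    "import signal, time; signal.signal(signal.SIGTERM, signal.SIG_IGN); time.sleep(30)",
])
time.sleep(0.5)  # give the child time to install its handler
print(terminate_with_escalation(stubborn))  # -> killed
```

The two-step dance matters: SIGTERM gives a well-behaved agent a chance to clean up, while SIGKILL guarantees termination even if the agent has trapped the signal.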
Runtime Fence includes SHA-256 hash verification of all critical security modules. For production deployments, freeze hashes at build time:
```shell
cd packages/python
python freeze_hashes.py > runtime_fence/_frozen_hashes.py
```

This generates `_frozen_hashes.py` containing SHA-256 hashes of all 9 security modules. At runtime, `bypass_protection.py` compares live file hashes against the frozen values; any mismatch triggers a tamper alert.
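The comparison step could look roughly like this (an illustrative sketch of frozen-hash checking; the real `bypass_protection.py` logic may differ):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_tampered(frozen_hashes: dict, base_dir: Path) -> list:
    """Return modules whose live hash no longer matches the frozen value."""
    return [
        module for module, expected in frozen_hashes.items()
        if sha256_of(base_dir / module) != expected
    ]

# Demo: freeze a file's hash, then tamper with it.
with tempfile.TemporaryDirectory() as d:
    base = Path(d)
    (base / "guard.py").write_text("ALLOW = True\n")
    frozen = {"guard.py": sha256_of(base / "guard.py")}
    print(find_tampered(frozen, base))  # [] -- untouched
    (base / "guard.py").write_text("ALLOW = False\n")
    print(find_tampered(frozen, base))  # ['guard.py'] -- mismatch detected
```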
CI/CD Integration:
Add to your build pipeline (after tests pass, before packaging):
```yaml
# GitHub Actions example
- name: Freeze security hashes
  run: |
    cd packages/python
    python freeze_hashes.py > runtime_fence/_frozen_hashes.py
- name: Verify frozen hashes
  run: python -c "from runtime_fence._frozen_hashes import FROZEN_HASHES; print(f'{len(FROZEN_HASHES)} modules frozen')"
```

Important: Re-run `freeze_hashes.py` after any change to a security module. Without frozen hashes, tamper detection falls back to runtime-computed hashes and logs a warning.
```shell
pip install runtime-fence
```

Or clone and install:

```shell
git clone https://github.com/RunTimeAdmin/runtime-fence-ai.git
cd runtime-fence-ai/packages/python
pip install -e .
```

Quick start:

```python
from runtime_fence import RuntimeFence, FenceConfig

# Create a fence
fence = RuntimeFence(FenceConfig(
    agent_id="my-agent",
    blocked_actions=["delete", "exec", "sudo"],
    blocked_targets=[".env", "production", "wallet"],
    spending_limit=100.0
))

# Validate an action
result = fence.validate("read", "document.txt")
if result.allowed:
    # Proceed with the action
    pass
else:
    print(f"Blocked: {result.reasons}")

# Kill switch
fence.kill("Emergency stop")
```

Wrap a function so every call is validated automatically:

```python
import requests

@fence.wrap_function("api_call", "external_service")
def call_external_api(data):
    return requests.post("https://api.example.com", json=data)

# Now the function goes through the fence automatically
call_external_api({"key": "value"})
```

Desktop install: on Windows, run `install_fence.bat`; on macOS/Linux:

```shell
chmod +x install_fence.sh
./install_fence.sh
```

Look for the shield icon in your system tray. Right-click for options.
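The `wrap_function` decorator shown above can be built from a closure that validates before delegating. A toy version of the pattern (not the package's internals; `MiniFence` is a hypothetical stand-in):

```python
import functools

class MiniFence:
    """Toy fence that blocks a configured set of actions."""

    def __init__(self, blocked_actions):
        self.blocked_actions = set(blocked_actions)

    def wrap_function(self, action, target):
        """Decorator factory: validate (action, target) before every call."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                if action in self.blocked_actions:
                    raise PermissionError(f"Blocked: '{action}' on '{target}'")
                return func(*args, **kwargs)
            return wrapper
        return decorator

fence = MiniFence(blocked_actions=["exec"])

@fence.wrap_function("api_call", "external_service")
def fetch(x):
    return x * 2

print(fetch(21))  # 42 -- "api_call" is not in the blocked set
```

Because the check lives in the wrapper, the wrapped function needs no knowledge of the fence at all.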
CLI commands:

```shell
fence version --check   # Check for updates
fence update            # Upgrade to latest version
fence status            # Show fence status
fence scan              # Detect AI agents on your system
fence test              # Run quick validation test
fence start             # Launch tray app
```

Runtime Fence works with ANY Python-based AI agent. Here are real-world examples:
Agents: GitHub Copilot, Cursor, Aider, Cody
Risks Blocked: Executing shell commands, deleting files, modifying system configs
Example:
```python
fence = RuntimeFence(FenceConfig(
    agent_id="cursor-assistant",
    blocked_actions=["exec", "shell", "rm", "sudo"],
    blocked_targets=[".git/", ".env", "~/.ssh/"]
))
```

Agents: AutoGPT, BabyAGI, AgentGPT, SuperAGI
Risks Blocked: Self-modification, spawning agents, unrestricted API calls
Example:
```python
fence = RuntimeFence(FenceConfig(
    agent_id="autogpt",
    blocked_actions=["spawn_agent", "modify_self", "execute_code"],
    spending_limit=50.0
))
```

Agents: LangChain data agents, Pandas AI, custom ETL bots
Risks Blocked: Deleting databases, exporting PII, dropping tables
Example:
```python
fence = RuntimeFence(FenceConfig(
    agent_id="data-analyst",
    blocked_actions=["delete", "drop_table", "export_pii"],
    blocked_targets=["production_db", "customer_data"]
))
```

Agents: Selenium bots, Playwright agents, web scrapers
Risks Blocked: Form submissions, purchases, credential theft
Example:
```python
fence = RuntimeFence(FenceConfig(
    agent_id="web-scraper",
    blocked_actions=["login", "purchase", "submit_form"],
    blocked_targets=["payment", "checkout", "admin"]
))
```

Agents: Gmail automation, email marketing bots, support agents
Risks Blocked: Bulk sending, forwarding all emails, exporting contacts
Example:
```python
fence = RuntimeFence(FenceConfig(
    agent_id="email-bot",
    blocked_actions=["send_bulk", "forward_all", "export_contacts"],
    spending_limit=100.0
))
```

Agents: Crypto trading bots, stock trading agents, DeFi automation
Risks Blocked: High-value transfers, unauthorized withdrawals
Example:
```python
fence = RuntimeFence(FenceConfig(
    agent_id="trading-bot",
    blocked_actions=["withdraw", "transfer"],
    spending_limit=1000.0,
    blocked_targets=["wallet_private_key"]
))
```

Any LangChain agent with tools
Full integration example: See langchain_integration.py
```python
from langchain_integration import create_fenced_agent, Preset

agent = create_fenced_agent(
    preset=Preset.CODING_ASSISTANT,
    agent_id="langchain-coder"
)
```

```
┌─────────────────────────────────────────────────────────┐
│                      Your AI Agent                      │
└─────────────────────────┬───────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────┐
│                      RUNTIME FENCE                      │
│ ┌─────────────┐  ┌─────────────┐  ┌─────────────┐       │
│ │  Validator  │  │ Risk Scorer │  │ Kill Switch │       │
│ └─────────────┘  └─────────────┘  └─────────────┘       │
│ ┌─────────────┐  ┌─────────────┐  ┌─────────────┐       │
│ │  Audit Log  │  │   Alerts    │  │  Settings   │       │
│ └─────────────┘  └─────────────┘  └─────────────┘       │
└─────────────────────────┬───────────────────────────────┘
                          │
                          ▼ (if allowed)
┌─────────────────────────────────────────────────────────┐
│                     External World                      │
│           (APIs, Files, Databases, Network)             │
└─────────────────────────────────────────────────────────┘
```
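The boxes in the diagram compose into a single validate path. An end-to-end toy version (rules and scores simplified for illustration; not the package's actual engine):

```python
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    allowed: bool
    risk_score: int
    reasons: list = field(default_factory=list)

class ToyFence:
    """Validator, risk scorer, and kill switch in one tiny class."""

    def __init__(self, blocked_actions, blocked_targets):
        self.blocked_actions = set(blocked_actions)
        self.blocked_targets = list(blocked_targets)
        self.is_killed = False

    def validate(self, action, target):
        reasons, score = [], 0
        if self.is_killed:
            reasons.append("Fence is in killed state")
            score = 100
        if action in self.blocked_actions:
            reasons.append(f"Action '{action}' is blocked")
            score = max(score, 50)
        for t in self.blocked_targets:
            if t in target:  # substring match, e.g. blocking any ".env" path
                reasons.append(f"Target '{t}' is blocked")
                score = max(score, 50)
        return ValidationResult(allowed=not reasons, risk_score=score, reasons=reasons)

    def kill(self, reason):
        """Flip the kill switch: every later validate() is denied."""
        self.is_killed = True

fence = ToyFence(["delete"], [".env"])
print(fence.validate("read", "notes.txt").allowed)    # True
print(fence.validate("delete", "notes.txt").reasons)  # ["Action 'delete' is blocked"]
fence.kill("demo")
print(fence.validate("read", "notes.txt").allowed)    # False
```

The key design point: the agent never touches the external world directly; every action passes through `validate()` first, and a tripped kill switch denies everything downstream.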
Python API:

```python
from runtime_fence import RuntimeFence, FenceConfig, RiskLevel

# Initialize
fence = RuntimeFence(FenceConfig(
    agent_id="my-agent",
    blocked_actions=["delete"],
    spending_limit=100.0
))

# Validate actions
result = fence.validate(action="delete", target="file.txt")
print(result.allowed)     # False
print(result.risk_score)  # 50
print(result.reasons)     # ["Action 'delete' is blocked"]

# Kill switch
fence.kill("Emergency stop")

# Resume operations
fence.resume()

# Get status
status = fence.get_status()
print(status.is_killed)          # True/False
print(status.total_validations)  # Count
```

CLI:

```shell
fence test     # Run safety tests
fence status   # Show fence status
fence version  # Display version
```

For REST API documentation (coming soon), see API-Reference.md.
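Risk scores run 0-100 and gate on a `risk_threshold` of LOW/MEDIUM/HIGH/CRITICAL. One plausible bucketing is sketched below; the band edges are assumptions, not the package's documented values:

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def to_risk_level(score: int) -> Risk:
    """Bucket a 0-100 risk score into coarse levels (illustrative thresholds)."""
    if score >= 90:
        return Risk.CRITICAL
    if score >= 70:
        return Risk.HIGH
    if score >= 40:
        return Risk.MEDIUM
    return Risk.LOW

print(to_risk_level(50).name)  # MEDIUM -- matches the blocked-action score above
print(to_risk_level(95).name)  # CRITICAL -- the kind of score that would trip an auto-kill
```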
Environment variables:

```shell
# Agent identification
FENCE_AGENT_ID=my-agent

# Logging
FENCE_LOG_LEVEL=INFO  # DEBUG, INFO, WARNING, ERROR
```

Full configuration:

```python
from runtime_fence import RuntimeFence, FenceConfig, RiskLevel

config = FenceConfig(
    agent_id="my-agent",
    blocked_actions=["delete", "exec"],
    blocked_targets=[".env", "production"],
    spending_limit=100.0,
    risk_threshold=RiskLevel.MEDIUM,  # LOW, MEDIUM, HIGH, CRITICAL
    auto_kill_on_critical=True,
    offline_mode=True
)
fence = RuntimeFence(config)
```

Development setup:

```shell
# Clone repo
git clone https://github.com/RunTimeAdmin/runtime-fence-ai.git
cd runtime-fence-ai/packages/python

# Install in development mode
pip install -e .

# Run tests
fence test
```

For detailed integration guides, see:
```
runtime-fence-ai/
├── packages/
│   └── python/                      # Runtime Fence Python package
│       ├── runtime_fence.py         # Core safety engine
│       ├── cli.py                   # Command-line interface
│       └── langchain_integration.py # LangChain helpers
├── docs/                            # Integration guides
│   ├── Integration-Guide.md
│   └── Troubleshooting-FAQ.md
└── wiki/                            # Documentation
    ├── Quick-Start.md
    └── Configuration.md
```
Built by RunTimeAdmin | David Cooper | CCIE #14019
Why Runtime Fence Matters:
As AI agents become more autonomous, the risk of unintended actions rises sharply. Runtime Fence provides the safety layer that:
- Prevents agents from deleting critical files
- Blocks unauthorized API calls
- Limits spending and resource usage
- Provides complete audit trails
- Enables instant emergency shutdowns
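Limiting spend implies cumulative accounting across an agent's lifetime, not just per-action checks. A minimal sketch of that bookkeeping (illustrative; `SpendTracker` is hypothetical, not the package's logic):

```python
class SpendTracker:
    """Accumulate spend and refuse any action that would cross a hard cap."""

    def __init__(self, limit: float):
        self.limit = limit
        self.spent = 0.0

    def authorize(self, amount: float) -> bool:
        """Allow the spend only if the running total stays within the limit."""
        if self.spent + amount > self.limit:
            return False  # deny, and do NOT count the rejected amount
        self.spent += amount
        return True

tracker = SpendTracker(limit=100.0)
print(tracker.authorize(60.0))  # True  -- running total 60
print(tracker.authorize(50.0))  # False -- 110 would exceed the cap
print(tracker.authorize(40.0))  # True  -- total lands exactly on 100
```

Rejected amounts are deliberately not counted toward the total, so a denied action cannot exhaust the budget.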
Use Cases:
- Coding assistants that modify your codebase
- Autonomous agents with system access
- Data processing agents handling sensitive information
- Web automation with payment capabilities
- Any AI agent that needs guardrails
- GitHub: github.com/RunTimeAdmin/runtime-fence-ai
- PyPI Package: pypi.org/project/runtime-fence
- Documentation: GitHub Wiki
- Issues: Report bugs or request features
- Twitter: @protocol14019
MIT License - see LICENSE for details.
Real-world examples:

```python
from runtime_fence import RuntimeFence, FenceConfig

fence = RuntimeFence(FenceConfig(
    agent_id="cursor-assistant",
    blocked_actions=["delete", "rm"]
))
result = fence.validate("delete", "important_code.py")
# Returns: {"allowed": False, "reasons": ["Action 'delete' is blocked"]}
```

```python
fence = RuntimeFence(FenceConfig(
    agent_id="autogpt",
    blocked_actions=["modify_self", "spawn_agent"]
))
result = fence.validate("modify_self", "autogpt_config.json")
# Returns: {"allowed": False, "risk_score": 85, "reasons": ["Action 'modify_self' is blocked"]}
```

```python
fence = RuntimeFence(FenceConfig(
    agent_id="data-analyst",
    blocked_actions=["export_pii"],
    blocked_targets=["customer_emails", "ssn"]
))
result = fence.validate("export", "customer_emails.csv")
# Returns: {"allowed": False, "reasons": ["Target 'customer_emails' is blocked"]}
```

```python
fence.kill("Suspicious behavior detected")
# All agent operations halted immediately across ALL agents
```

🛡️ Protect ANY AI Agent. Before it's too late.
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Core Features (Production Ready):
- Kill switch engine (sub-100ms response)
- Python SDK (`pip install runtime-fence`)
- Action blocking & target protection
- Risk scoring & audit logging
- CLI tools (`fence test`, `fence status`)
- Offline mode (no API required)
- LangChain integration
Coming Soon:
- TypeScript SDK
- Web dashboard
- REST API service
- Mobile app
- ✅ Python package on PyPI
- ✅ Universal agent support
- ✅ Offline-first architecture
- 🚧 TypeScript/JavaScript SDK
- LangSmith integration
- CrewAI native support
- Anthropic Claude tools
- OpenAI Assistants API
- Multi-agent orchestration
- Centralized dashboard
- Team collaboration
- Advanced analytics
🛡️ Runtime Fence - Because every AI needs an off switch.