1 change: 1 addition & 0 deletions README.md
@@ -114,6 +114,7 @@ Projects and technologies that credibly separate (isolate) credentials and token
| [clawshell](https://github.com/clawshell/clawshell) | proxy, virtual-keys, DLP, Unix-permissions | Drop-in sidecar proxy for OpenClaw that maps virtual API keys to real provider credentials (stored in a Unix-permission-protected config), with regex-based DLP scanning that can block or redact PII in request/response bodies before they reach upstream LLM APIs. |
| [prxlocal](https://github.com/vladimirkras/prxlocal) | proxy, secret-injection | Simple proxy-based technique for separating secrets from agent execution by intercepting requests and injecting credentials externally. |
| [secretless-ai](https://github.com/opena2a-org/secretless-ai) | hooks, secret-injection, keychain | Keeps credentials out of AI context windows; for Claude Code, it installs a `PreToolUse` hook that intercepts every file read, grep, glob, bash, write, and edit before execution. Supports multiple secret backends (local AES-256-GCM, OS keychain, 1Password). |
| [llm-safe-haven](https://github.com/pleasedodisturb/llm-safe-haven) | hooks, fail-closed, multi-agent, audit, npm | `npx llm-safe-haven` auto-detects 14 AI coding agents and installs fail-closed PreToolUse hooks (a bash firewall blocks exfiltration/destructive commands; a secret guard blocks credential writes) plus a PostToolUse JSONL audit logger. Verifies hook integrity via SHA-256 and generates per-agent ignore files for secret isolation. Also ships a threat model (26+ incidents), hardening guides, and credential-proxy architecture docs. |
| [enject](https://github.com/GreatScott/enject) | secret-isolation, CLI, subprocess-injection | Rust CLI (formerly enveil) that replaces `.env` plaintext values with `en://` placeholder references while real values are stored in an Argon2id-derived AES-256-GCM encrypted local store. Decrypts, resolves references, injects real values into the subprocess environment, then zeroizes key material. Deliberately omits `get`/`export` commands to prevent AI-readable secret leakage. |
| [airut masked secrets](https://github.com/airutorg/airut/blob/main/doc/network-sandbox.md#masked-secrets-token-replacement) | proxy, masked-secrets, network-allowlist, AWS-SigV4 | mitmproxy transparently intercepts all HTTPS traffic; format-preserving surrogate tokens are generated and injected into the container's environment, and the proxy swaps surrogate → real value in outgoing request headers only for scoped hosts. |
| [Tailscale Aperture](https://tailscale.com/docs/features/aperture) | gateway, credential-injection, Tailscale, observability | Alpha LLM API gateway that runs on a tailnet, extracts the model name from each request body, routes to the correct provider, and injects provider authentication headers server-side. |
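The subprocess secret-injection pattern described for enject and prxlocal can be sketched as follows. This is illustrative only: the `en://` placeholder scheme comes from the enject entry above, but the in-memory store stands in for a real encrypted store (e.g. Argon2id-derived AES-256-GCM), and all names here are assumptions, not the tool's actual API.

```python
# Sketch: resolve placeholder references and inject real secrets only into a
# subprocess environment, so plaintext never lives in .env or the agent context.
import os
import subprocess
import sys

# Stand-in for an encrypted local store; a real tool would decrypt on demand
# and zeroize key material afterwards.
SECRET_STORE = {"en://openai/api_key": "sk-real-value"}

def resolve_env(template: dict) -> dict:
    """Replace placeholder references with real secret values; pass other values through."""
    return {name: SECRET_STORE.get(value, value) for name, value in template.items()}

def run_with_secrets(cmd: list, template: dict) -> subprocess.CompletedProcess:
    """Spawn a subprocess with real values injected into its environment."""
    env = {**os.environ, **resolve_env(template)}
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# The checked-in .env only ever holds the placeholder, never the secret.
template = {"OPENAI_API_KEY": "en://openai/api_key"}
result = run_with_secrets(
    [sys.executable, "-c", "import os; print(os.environ['OPENAI_API_KEY'])"],
    template,
)
```

The key property: anything that reads the `.env` template (including an AI agent) sees only `en://openai/api_key`; the real value exists only in the child process's environment.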
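The fail-closed `PreToolUse` guards described for secretless-ai and llm-safe-haven follow Claude Code's hook convention of JSON on stdin and exit code 2 to block a tool call; a minimal sketch, treating the payload fields (`tool_name`, `tool_input`) and the deny-list as assumptions rather than either tool's actual implementation:

```python
# Sketch: a PreToolUse hook that blocks tool calls touching credential files.
import json
import re

# Paths that commonly hold credentials; a real guard would be more thorough.
SECRET_PATHS = re.compile(r"\.env\b|id_rsa|\.aws/credentials|\.netrc")

def check(payload: dict) -> tuple[int, str]:
    """Return (exit_code, message). Exit code 2 means 'block this tool call'."""
    tool = payload.get("tool_name", "")
    args = json.dumps(payload.get("tool_input", {}))
    if SECRET_PATHS.search(args):
        return 2, f"blocked: {tool} touches a credential file"
    return 0, ""

# Installed as a hook, this would be driven by check(json.load(sys.stdin))
# followed by sys.exit(code); called inline here for illustration.
code, msg = check({"tool_name": "Read", "tool_input": {"file_path": "/app/.env"}})
```

Failing closed means any match blocks the call before execution, so a compromised or confused agent cannot read the credential first and ask forgiveness later.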
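The format-preserving surrogate-token swap described for airut's masked secrets can be sketched as below. The class, method names, and scoping rule are illustrative assumptions; airut's actual implementation lives inside a mitmproxy addon.

```python
# Sketch: issue fake tokens to the sandbox, swap in the real value only on
# requests bound for the secret's scoped host.
import secrets

class SurrogateMap:
    def __init__(self):
        self._by_surrogate = {}

    def issue(self, real: str, scoped_host: str) -> str:
        """Mint a format-preserving surrogate: same prefix, same length, random body."""
        prefix, _, _ = real.partition("-")
        body_len = len(real) - len(prefix) - 1
        surrogate = f"{prefix}-{secrets.token_hex(len(real) // 2)[:body_len]}"
        self._by_surrogate[surrogate] = (real, scoped_host)
        return surrogate

    def unmask(self, header_value: str, host: str) -> str:
        """Swap surrogate → real value, but only for the scoped host."""
        entry = self._by_surrogate.get(header_value)
        if entry and entry[1] == host:
            return entry[0]
        return header_value  # anywhere else, the surrogate leaks harmlessly

mapping = SurrogateMap()
surrogate = mapping.issue("sk-abc123", "api.openai.com")
```

Because the container only ever sees the surrogate, an exfiltration attempt to any non-scoped host sends a worthless token, while legitimate requests to the scoped host are transparently repaired at the proxy.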