"Universal AI security framework - Protect LLM applications from prompt injection, jailbreaks, and adversarial attacks. Works with OpenAI, Anthropic, LangChain, and any LLM."
ai jailbreak ai-safety security-framework ai-defense ai-security prompt-injection llm-security promptfoo prompt-security llm-guard llm-guardrails ai-hacking garak llm-protection openai-security chatgpt-security hacking-tools-ai lakeraguard calypsoai-moderator
Updated Mar 7, 2026 · Python