agentic ai red-teaming #1734
sundi133
started this conversation in
Project onboardings
Replies: 1 comment
-
|
Started onboarding! |
Beta Was this translation helpful? Give feedback.
0 replies
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Uh oh!
There was an error while loading. Please reload this page.
Uh oh!
There was an error while loading. Please reload this page.
-
https://github.com/sundi133/wb-red-team
Project Name
Votal AI Red-Teaming
Project Description
Red-Team AI is an open-source, automated white-box security testing framework purpose-built for agentic AI applications. It discovers vulnerabilities by combining static source code analysis with dynamic LLM-powered attack
generation and execution across 47 attack categories and 101 delivery strategies, producing scored security reports with full traceability.
As AI agents increasingly gain access to tools, data sources, and autonomous decision-making capabilities, the attack surface expands far beyond traditional prompt injection. Red-Team AI addresses this gap by providing a
comprehensive, extensible, and fully automated testing pipeline that covers the breadth of agentic AI threats — from tool-chain hijacking and cross-session injection to multi-turn privilege escalation and rogue agent behavior.
Problem Statement
Agentic AI systems — LLM-powered applications with tool access, memory, multi-step reasoning, and role-based permissions — introduce a new class of security risks that existing testing tools do not adequately cover:
security scanners lack.
Solution
Red-Team AI implements a 5-stage automated pipeline:
(source→sink attack paths).
judgment), and refines partial successes for the next round.
Key Capabilities
Broad Attack Coverage
bias exploitation, and more.
with attack categories for thousands of unique vectors.
Intelligent Automation
Multi-Turn and Multi-Step Testing
Multi-Provider LLM Support
Extensible Architecture
AttackModule) with seed attacks and an LLM generation prompt.Technical Details
openai(SDK),jose(JWT),glob(file matching) — minimal footprintArchitecture
Scoring Methodology
Security scores start at 100 and are decremented based on vulnerability severity:
Differentiation
Community and Governance
Use Cases
Alignment with Linux Foundation Goals
Red-Team AI aligns with the Linux Foundation's mission to build sustainable ecosystems around open-source projects that address critical infrastructure needs:
Links
Red-Team AI - Automated security testing for the agentic AI era by Votal AI
Beta Was this translation helpful? Give feedback.
All reactions