Skip to content

DaxxSec/Labyrinth

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

78 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

icon Project LABYRINTH

Adversarial Cognitive Portal Trap Architecture

Project LABYRINTH

One-Click Deploy AGPL-3.0 License Status

A multi-layered defensive architecture designed to contain, degrade, disrupt, and commandeer autonomous offensive AI agents.

Built by DaxxSec & Claude (Anthropic) · GitHub


Screenshots

TUI Dashboard — Overview
TUI Overview

TUI Dashboard — Live Event Log
TUI Logs

Web Dashboard — Real-Time Monitoring
Web Dashboard


The Problem

Autonomous AI agents are being deployed for offensive cyber operations — automated recon, exploitation, and lateral movement at machine speed. But AI agents have cognitive dependencies that humans don't — and almost nobody is building defenses that target those dependencies.

LABYRINTH changes that.


Prerequisites

You need Docker (or a compatible runtime) and optionally Go 1.22+ to build from source.

macOS users: We recommend OrbStack over Docker Desktop. It's significantly faster, uses less memory, and is a drop-in replacement — all docker and docker compose commands work identically.

brew install orbstack

Quickstart

Install & Deploy

# Clone, build, and install (installs Go if needed)
git clone https://github.com/DaxxSec/labyrinth.git
cd labyrinth && ./scripts/install.sh

# Run the smoke test to verify everything works
./scripts/smoke-test.sh

# Deploy a test environment
labyrinth deploy -t

# Launch the TUI monitoring dashboard
labyrinth tui

Or grab a pre-built binary and run ./labyrinth --install.

Test with an Attacker Agent

labyrinth bait drop                     # Plant randomized bait for agents to discover
labyrinth attacker setup                # Pick and configure an attacker agent

The deploy command prompts you to drop bait automatically. Bait generates a unique randomized identity (company, users, passwords) and plants discoverable credentials on the portal trap services — giving attacker agents breadcrumbs to find their way in. Every deployment gets a unique identity with no static signatures, preventing fingerprinting by scanners or adversarial AI.

Choose from PentAGI, PentestAgent, Strix, or bring your own tool. All agents run in isolated Docker containers on the LABYRINTH network — nothing touches your host.

labyrinth attacker list                 # See all agents and their status
labyrinth attacker run pentagi          # Quick-launch an agent
labyrinth attacker stop --all           # Stop all agents when done
labyrinth bait clean                    # Remove bait when done

What Happens

Agent connects to portal trap service
 └─→ Container spawned (Layer 1 — THRESHOLD)
     └─→ Agent explores contradictory environment (Layer 2 — MINOTAUR)
         ├─→ Agent uses stolen credentials → phantom services accept & log (Layer 4)
         │   └─→ PostgreSQL, Redis, Elasticsearch, Consul, Jenkins — all fake, all logging
         └─→ Terminal encoding corrupted (Layer 3 — BLINDFOLD)
             └─→ API calls intercepted & rewritten (Layer 4 — PUPPETEER)

All activity captured → TUI dashboard, web dashboard, or forensic report

Generate a Forensic Report

After a test, generate a structured report with MITRE ATT&CK mapping, attack graphs, and effectiveness assessment:

labyrinth report                           # Latest session (terminal)
labyrinth report --format md -o report.md  # Export Markdown with Mermaid attack graph
labyrinth report --format json             # JSON to stdout
labyrinth report --all                     # All sessions

Architecture

LABYRINTH implements a reverse kill chain — the deeper an agent penetrates, the more compromised it becomes.

Layer Codename Action What It Does
L0 BEDROCK HARDEN AES-256 forensics, VLAN isolation, retention policy
L1 THRESHOLD CONTAIN Portal trap routes connections into isolated containers
L2 MINOTAUR DEGRADE Contradictory environments erode the agent's world model
L3 BLINDFOLD DISRUPT Encoding corruption blinds the agent's I/O parsing
L4 PUPPETEER CONTROL Phantom services accept stolen credentials; MITM intercepts and rewrites agent instructions

Depth of penetration = Depth of compromise

See the Wiki for the full technical breakdown of each layer.


Key Features

  • 5-layer reverse kill chain — each layer compounds the previous, progressively degrading attacker capability
  • 6 phantom services — PostgreSQL, Redis, Elasticsearch, Consul, Jenkins, SSH relay — all accept stolen credentials, all log everything
  • MITM AI API interception — captures and optionally rewrites system prompts for OpenAI, Anthropic, Google, Mistral, and Cohere APIs
  • 4 L4 operational modes — passive (observe), neutralize (defang), double_agent (deceive), counter_intel (structured reporting)
  • Anti-fingerprinting — every deployment generates a unique randomized identity (company, users, passwords, API keys) with zero static signatures
  • Forensic report generator — MITRE ATT&CK timeline mapping, Mermaid attack graphs, credential analysis, effectiveness assessment
  • TUI + Web dashboards — real-time session monitoring, layer status, log streaming, L4 intelligence reports
  • Built-in attacker agents — PentAGI, PentestAgent, Strix, Custom Kali — one command to deploy, test, and tear down
  • Health diagnosticslabyrinth doctor runs 12+ checks across containers, ports, services, bait sync, and API availability

Documentation

Full documentation lives on the Wiki:

Page Description
Installation Prerequisites, install options, first deployment
CLI Reference All commands, flags, examples
Configuration Full labyrinth.yaml reference
Architecture Reverse kill chain, data flow, tech stack
Deployment Topology Docker services, network layout, volumes
TUI Dashboard 5 tabs, keybindings, data sources
Layer 0 — BEDROCK Encryption, network isolation, retention
Layer 1 — THRESHOLD SSH/HTTP portal traps, session logging
Layer 2 — MINOTAUR Contradiction catalog, adaptive density
Layer 3 — BLINDFOLD Encoding corruption, recovery traps
Layer 4 — PUPPETEER MITM proxy, phantom services, 4 modes
Forensics & API JSONL schema, dashboard API, SIEM
Testing with Attackers Agent setup, bait trails, monitoring
Anti-Fingerprinting Randomized identities, signature avoidance
Threat Model AI cognitive dependencies, countermeasures
Troubleshooting Common issues and fixes

Status

Core

  • Architecture specification (v0.2)
  • One-click test deployment (labyrinth deploy -t)
  • Go CLI binary with full environment lifecycle (16 commands)
  • Build-from-source installer (scripts/install.sh)
  • End-to-end smoke test (scripts/smoke-test.sh)

Layers

  • L0 BEDROCK — runtime enforcement (VLAN validation, forensic encryption, retention)
  • L1 THRESHOLD — SSH/HTTP portal traps, PAM hooks, bait watchers
  • L2 MINOTAUR — contradiction seeding engine (13 contradictions, adaptive density)
  • L3 BLINDFOLD — encoding corruption payloads (urandom, TERM, recovery traps)
  • L4 PUPPETEER — MITM proxy interception (5 AI APIs, 4 operational modes)
  • L4 phantom services (PostgreSQL, Redis, Elasticsearch, Consul, Jenkins, SSH relay)
  • Auto CA cert injection on container spawn

Monitoring & Analysis

  • TUI monitoring dashboard (5 tabs, real-time, desktop + webhook notifications)
  • Web capture dashboard with L4 mode selector
  • Forensic report generator (labyrinth report — terminal, Markdown, JSON)
  • MITRE ATT&CK event mapping and Mermaid attack graph generation
  • JSONL forensic event capture and structured export

Operations

  • Bait system (labyrinth bait — randomized identities, anti-fingerprinting)
  • Attacker agent management (labyrinth attacker — PentAGI, PentestAgent, Strix, Kali)
  • Health diagnostics (labyrinth doctor — 12+ checks with remediation tips)
  • Live log streaming (labyrinth logs — color-coded, per-service filtering)
  • SIEM integration (event push to external endpoints)
  • Shell completion (bash, zsh, fish)
  • Production deployment guide (Docker, K8s, Edge)

Contributing

We welcome contributions from the defensive security community.

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/your-feature)
  3. Commit changes (git commit -m 'Add your feature')
  4. Push to branch (git push origin feature/your-feature)
  5. Open a Pull Request

Disclaimer

Important

LABYRINTH does not phone home. All forensic data — captured credentials, session logs, HTTP access events — is stored locally on your machine in Docker volumes and ~/.labyrinth/. Nothing is transmitted to any remote server, cloud service, or third party. There is no telemetry, no analytics, no remote collection of any kind. You own your data, period.

Note

This project is intended for defensive security research only. The techniques described are designed to be deployed within controlled portal trap environments that the operator owns and controls. Always ensure compliance with applicable laws and organizational policies.

License

AGPL-3.0 License — see LICENSE for details.

This means you can freely use, modify, and distribute LABYRINTH, but if you deploy a modified version as a network service, you must release your source code under the same license. This prevents commercial exploitation while keeping the project fully open source.


Acknowledgments

LABYRINTH integrates with the following open-source offensive AI projects for testing. We thank their authors and communities:

Project Repository Description
PentAGI vxcontrol/pentagi Fully autonomous multi-agent penetration testing system with web UI
PentestAgent GH05TCREW/PentestAgent AI-powered pentesting framework with TUI, Agent & Crew modes
Strix UseStrix/strix AI hacker agents with CLI/TUI and Docker sandbox isolation
Kali Linux kali.org Industry-standard penetration testing distribution (Docker images)

Built by DaxxSec & Claude (Anthropic)
Defending against the future, today.

GitHub

About

Adversarial Cognitive Portal Trap Architecture — A multi-layered defensive system that contains, degrades, disrupts, and commandeers autonomous offensive AI agents via a reverse kill chain (L0-L4).

Topics

Resources

License

Stars

Watchers

Forks

Packages

 
 
 

Contributors