
DBL Demo Stack (One-Go)

The DBL Demo Stack is a local, end-to-end demonstration of AI governance.

It shows how LLM usage can be wrapped in a deterministic boundary: every request becomes an explicit intent, every decision is recorded, and every execution is observable.

This stack is not a product and not a framework. It is a reference environment to explore what happens when AI systems are treated like critical infrastructure instead of opaque services.
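To make the boundary concrete, a governed request might look like the sketch below. The endpoint path and payload fields are illustrative assumptions, not the Gateway's actual contract; the real API schema is served at http://localhost:8010/docs once the stack is running.

    # Hypothetical sketch: an LLM call wrapped in an explicit intent.
    # The /v1/intents path and the JSON fields are assumptions for
    # illustration only; check http://localhost:8010/docs for the real schema.
    curl -s -X POST http://localhost:8010/v1/intents \
      -H "Content-Type: application/json" \
      -d '{"intent": "summarize_document", "model": "gpt-4o-mini", "input": "..."}'

The point is the shape of the interaction: the caller states an intent up front, the Gateway decides and records, and the Observer can later replay what happened.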


Includes:

  • DBL Gateway (lukaspfister/dbl-gateway) - Multi-Provider Normative Layer (OpenAI, Anthropic, Ollama)
  • DBL Observer (lukaspfister/dbl-observer) - The Visibility Layer
  • Chat Client (lukaspfister/dbl-chat-client) - The User Experience

🚀 Quick Start

Prerequisite: Docker Desktop must be running.
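If you are unsure whether the daemon is up, docker info fails fast when it cannot reach Docker:

    # Exits non-zero if the Docker daemon is not reachable
    docker info > /dev/null 2>&1 && echo "Docker is running" || echo "Start Docker Desktop first"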

  1. Get the Code

    git clone https://github.com/lukaspfisterch/dbl-stack.git
    cd dbl-stack
  2. Configure API Key: Rename .env.example to .env and add your OpenAI key:

    OPENAI_API_KEY=sk-...

    (If you don't have one, the stack still starts, but LLM execution will fail safely.)

  3. Run the Stack

    Windows:

    .\start_demo.ps1

    Mac/Linux:

    docker compose up -d
    # Open http://localhost:5173 manually

That's it. On Windows, the script launches the stack and opens the Chat Client for you; on Mac/Linux, open http://localhost:5173 once the containers are up.
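To confirm everything came up, check the containers and probe the three service URLs (see Components below):

    # All three services should be listed as running
    docker compose ps

    # Each endpoint should respond once its container is healthy
    curl -sf http://localhost:8010/docs > /dev/null && echo "Gateway up"
    curl -sf http://localhost:8020/docs > /dev/null && echo "Observer up"
    curl -sf http://localhost:5173 > /dev/null && echo "Chat Client up"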


🛠️ Components

| Service | Local URL | Role |
| --- | --- | --- |
| Chat Client | http://localhost:5173 | Interactive AI chat (proxies requests to the Gateway) |
| Gateway | http://localhost:8010/docs | Enforces policy and records the ledger |
| Observer | http://localhost:8020/docs | Passive witness and projections |
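The /docs URLs above are FastAPI-style interactive docs, which suggests (an assumption, not something this README states) that each backend also serves a machine-readable schema at FastAPI's default /openapi.json path:

    # FastAPI serves its OpenAPI schema at /openapi.json by default
    curl -s http://localhost:8010/openapi.json | head -c 400   # Gateway contract
    curl -s http://localhost:8020/openapi.json | head -c 400   # Observer contract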

🔌 Extended Setup (Multi-Provider)

The stack supports OpenAI, Anthropic, and Local Ollama simultaneously.

  1. Edit .env and add keys for the providers you want to enable:

    ANTHROPIC_API_KEY=sk-ant-...
    OLLAMA_HOST=http://host.docker.internal:11434  # For local Ollama

    (After changing .env, restart with docker compose restart gateway.)

  2. Available Models: The Chat Client automatically lists all available models from connected providers:

    • OpenAI: gpt-5.2, gpt-4o-mini, gpt-4.1 (Configurable via ENV)
    • Anthropic: claude-3-haiku, claude-3-5-sonnet
    • Ollama: Dynamically discovers models (e.g. llama3, deepseek-coder:6.7b, mistral).
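To see what the Gateway should discover, you can query the local Ollama daemon directly; GET /api/tags is Ollama's standard endpoint for listing pulled models:

    # Models pulled into the local Ollama daemon (run from the host)
    curl -s http://localhost:11434/api/tags

    # The same check from inside a container, via the host alias set above
    curl -s http://host.docker.internal:11434/api/tags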

📚 Learn More

For a deeper look at each layer, see the component repositories listed above: dbl-gateway, dbl-observer, and dbl-chat-client.
