TriadLLM is a terminal chat application for multi-stage LLM work.
Each user turn follows a fixed workflow:
- `processor` generates the primary answer
- `validator` checks that answer against the original request and gathered evidence
- `orchestrator` consolidates both into the final user-facing reply
The repository language is English, and fresh installs default to English in the app. Spanish is also supported at runtime with `/lang es`.
TriadLLM is not a "two independent opinions" system.
The intended workflow is:
- generate a primary answer
- validate it against the original request
- consolidate the answer and validation into a final response
That makes the second model a grounded review layer rather than a second parallel solver.
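In pseudocode terms, one turn looks roughly like this (a minimal sketch with hypothetical function names, not TriadLLM's actual API):

```python
def run_turn(prompt, history, processor, validator, orchestrator):
    """Sketch of one TriadLLM turn: propose, validate, consolidate."""
    # Stage 1: the processor drafts a primary answer from the full history.
    draft = processor(history + [prompt])
    # Stage 2: the validator reviews the draft against the original request.
    review = validator(history + [prompt, draft])
    # Stage 3: the orchestrator merges draft and review into the final reply.
    return orchestrator(history + [prompt, draft, review])
```

The key point the sketch captures is that the validator sees the original request alongside the draft, so it reviews rather than re-solves.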
Key features:

- scrollable transcript with fixed bottom composer
- retro ASCII splash screen on startup, dismissed by any key or after 5 seconds
- two-line composer with `Enter` to send, `Ctrl+J` for a newline, and `Ctrl+E` for an expanded editor
- FIFO turn queue when new prompts are submitted while another turn is running
- cancel button and `/cancel` support for the active turn
- proposal, validation, and consolidation pipeline
- full visible conversation history passed back to agents on each turn
- clarification loop when the processor or validator needs more data
- local tools with permission prompts
- toggleable reasoning display with `/reasoning on|off`
- toggleable tool request/result display with `/toolresults on|off`
- slash commands for runtime control
- JSONL session persistence
- structured rotating logs
- English and Spanish locales
- Python `3.13` environment managed with `uv`
- official OpenAI and Mistral SDKs where possible
- OpenAI-compatible local backend support
Install `uv` first; see the official installer:
https://docs.astral.sh/uv/getting-started/installation/
```shell
git clone https://github.com/ibitato/TriadLLM.git
cd TriadLLM
uv python install 3.13
uv sync
```

Run the app once to create the local config directories:

```shell
uv run triad
```

Then copy the example profile file into your user config directory.
Typical locations:
- Linux: `~/.config/TriadLLM/profiles.yaml`
- macOS: `~/Library/Application Support/TriadLLM/profiles.yaml`
- Windows: `%APPDATA%\TriadLLM\profiles.yaml`
Example on Linux:
```shell
mkdir -p ~/.config/TriadLLM
cp src/triadllm/examples/profiles.yaml ~/.config/TriadLLM/profiles.yaml
```

Export the API keys for your providers. Examples:

```shell
export OPENAI_API_KEY=...
export MISTRAL_API_KEY=...
```

TriadLLM reads credentials from the shell environment. It does not auto-load `.env`.
To launch the app, use the wrapper script:

```shell
./run_triadllm.sh
```

This script automatically checks prerequisites, sets up the environment if needed, and launches the application.
Alternative entrypoints:
```shell
uv run triad
uv run triadllm
```

Keyboard shortcuts:

- `Enter` sends the current draft
- `Ctrl+J` inserts a newline in the bottom composer
- `Ctrl+E` opens the expanded composer modal
- inside the expanded composer, `Ctrl+S` sends and `Esc` cancels
- if a turn is already running, new non-command prompts are queued automatically
- the `Cancel` button or `/cancel` stops the active turn and allows the queue to continue
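The queueing behaviour above amounts to a plain FIFO in front of the active turn. A minimal sketch (class and method names are made up, not TriadLLM's real implementation):

```python
from collections import deque

class TurnQueue:
    """Illustrative FIFO queue for prompts submitted while a turn runs."""

    def __init__(self):
        self.pending = deque()  # prompts waiting their turn, oldest first
        self.active = None      # the prompt currently being processed

    def submit(self, prompt):
        # Start the prompt immediately if idle, otherwise queue it.
        if self.active is None:
            self.active = prompt
        else:
            self.pending.append(prompt)

    def finish_or_cancel(self):
        # When the active turn ends (or is cancelled), the queue continues.
        self.active = self.pending.popleft() if self.pending else None
```

Cancellation only clears the active turn, which is why queued prompts still run afterwards.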
If you want the shortest path to a first successful run:
- copy the example `profiles.yaml`
- export `OPENAI_API_KEY`
- keep the example `default_profile: openai_default`
- launch `uv run triad`
That runs all three roles through the same OpenAI profile until you decide to split roles across different providers.
After cloning the repo, verify:
- Python `3.13` is installed with `uv`
- `uv sync` completed successfully
- `profiles.yaml` exists in the user config directory
- the required API keys are exported in the shell
- `uv run triad` starts the TUI
- `/models` shows the configured profiles
- `/status` shows the expected language, permissions, and log path
TriadLLM stores runtime state outside the repo:
- `settings.json`: language, permission mode, log settings, UI toggles, role assignments
- `profiles.yaml`: provider/model definitions
- `sessions/*.jsonl`: persisted session events
- `triadllm.log`: structured runtime log
On first launch, TriadLLM automatically reuses legacy local config from MultiBrainLLM if it finds existing settings, profiles, sessions, or logs.
Repository examples:
- sample profiles: `src/triadllm/examples/profiles.yaml`
- sample settings: `src/triadllm/examples/settings.json`
Available local tools:
`shell_exec`, `read_file`, `write_file`, `list_dir`, `search_files`, `get_env`, `pwd`
Execution modes:
- `ask`: every tool request requires approval
- `yolo`: tool requests run immediately
Use `/permissions ask` or `/permissions yolo` to switch modes at runtime.
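The two modes boil down to a gate in front of every tool call. A minimal sketch (function and parameter names are assumptions, not TriadLLM's API):

```python
def approve(tool_name, mode, ask_user):
    """Decide whether a tool request may run under the current mode.

    ask_user is a callback that shows the permission prompt and
    returns True if the user approves.
    """
    if mode == "yolo":
        # yolo: every tool request runs immediately, no prompt.
        return True
    # ask: defer each request to an interactive approval prompt.
    return ask_user(tool_name)
```

`ask` is the safe default for tools like `shell_exec` and `write_file` that can change the system.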
Available slash commands:

- `/help`
- `/status`
- `/config`
- `/permissions ask|yolo`
- `/lang es|en`
- `/models`
- `/model set <orchestrator|processor|validator> <profile>`
- `/tools`
- `/reasoning on|off`
- `/toolresults on|off`
- `/new`
- `/clear`
- `/quit`
- installation guide: `docs/INSTALLATION.md`
- configuration reference: `docs/CONFIGURATION.md`
- provider setup examples: `docs/PROVIDERS.md`
- architecture guide: `docs/ARCHITECTURE.md`
- troubleshooting guide: `docs/TROUBLESHOOTING.md`
- FAQ: `docs/FAQ.md`
- public launch checklist: `docs/PUBLIC_REPO_CHECKLIST.md`
- roadmap: `ROADMAP.md`
- changelog: `CHANGELOG.md`
- release process: `docs/RELEASING.md`
- contributing guide: `CONTRIBUTING.md`
- code of conduct: `CODE_OF_CONDUCT.md`
- security policy: `SECURITY.md`
- support guide: `SUPPORT.md`
- coding-agent maintenance guide: `AGENTS.md`
Development commands:

```shell
uv sync --dev
uv run pytest -q
uv run python -m compileall src tests docs
uv build
```

This project is source-available under PolyForm Noncommercial 1.0.0. Commercial use is not allowed without separate permission.