*Director for the vtuber program — long-term memory, skill router, tool use, and per-turn persona prompt generation. Receives context, chooses a persona, and emits text_prompt and voice_prompt to vtuber-voice (PersonaPlex Performer) via ConversationDirective.*
[ English | ภาษาไทย | 日本語 | 简体中文 ]
vtuber-brain is the Director half of the Director/Performer split (ADR-001 in repo_plus.yml). It receives context (chat, audience signal, internal state), runs reasoning and memory retrieval, decides which persona to wear and which skill to invoke, then emits a ConversationDirective containing text_prompt and voice_prompt to vtuber-voice via gRPC. Skills (game/sing/policy/strategy) register as typed tools, and the brain dispatches to them via a tool-use protocol. Long-term memory lives in Postgres with pgvector; character lore loads from vtuber-commons at startup. Mojo handles hot inference kernels (RAG re-ranking, intent classification); Python serves the underlying LLM (Ollama or vLLM).
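To make the shape of that per-turn handoff concrete, here is a minimal Rust sketch of the Director loop. The ConversationDirective field names (persona, text_prompt, voice_prompt) follow the description above, but the struct, the TurnContext fields, and the persona-selection logic are illustrative assumptions — the real message is generated from the protobuf definitions in vtuber-contracts and sent to vtuber-voice over gRPC.

```rust
// Illustrative sketch only: the real ConversationDirective is generated from
// vtuber-contracts and delivered to vtuber-voice via gRPC.

/// Context the Director receives each turn (simplified; fields are assumptions).
struct TurnContext {
    chat_message: String,
    audience_excitement: f32, // e.g. a 0.0..1.0 signal derived from chat velocity
}

/// Per-turn output handed to the Performer (field names from the README text).
#[derive(Debug)]
struct ConversationDirective {
    persona: String,
    text_prompt: String,
    voice_prompt: String,
}

/// One pass of the Director loop: context in, directive out.
fn direct_turn(ctx: &TurnContext) -> ConversationDirective {
    // Hypothetical persona choice; the real brain consults memory and skills here.
    let persona = if ctx.audience_excitement > 0.7 { "hype" } else { "cozy" };

    ConversationDirective {
        persona: persona.to_string(),
        text_prompt: format!("[{persona}] Respond to: {}", ctx.chat_message),
        voice_prompt: format!("Speak in the {persona} register, moderate pace"),
    }
}

fn main() {
    let ctx = TurnContext {
        chat_message: "Can you try the boss fight again?".to_string(),
        audience_excitement: 0.85,
    };
    println!("{:?}", direct_turn(&ctx));
}
```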
- 🚀 Feature 1 — Director loop: receive context → choose persona → emit text_prompt and voice_prompt to vtuber-voice via gRPC ConversationDirective
- 🛡️ Feature 2 — Skill router: dispatch tool calls to game/sing/policy/strategy services via typed gRPC contracts from vtuber-contracts (dispatch sketch below)
- 📊 Feature 3 — Tiered memory: short-term (per-conversation, in-process), long-term (Postgres with pgvector), character lore (loaded from vtuber-commons at startup) (retrieval sketch below)
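A minimal sketch of how a typed skill router might dispatch a tool call. The skill names come from the feature list; the registry, trait, and request/response shapes are assumptions for illustration — the real skills are remote gRPC services whose contracts live in vtuber-contracts.

```rust
use std::collections::HashMap;

// Illustrative only: real skills are remote gRPC services defined in
// vtuber-contracts; here they are modeled as in-process trait objects.

/// A tool call as the brain might receive it from the LLM's tool-use output.
struct ToolCall {
    name: String,      // e.g. "game", "sing", "policy", "strategy"
    arguments: String, // JSON payload in the real system (assumption)
}

/// Common interface every registered skill exposes to the router.
trait SkillTool {
    fn invoke(&self, arguments: &str) -> Result<String, String>;
}

struct GameSkill;
impl SkillTool for GameSkill {
    fn invoke(&self, arguments: &str) -> Result<String, String> {
        Ok(format!("game skill handled: {arguments}"))
    }
}

struct SingSkill;
impl SkillTool for SingSkill {
    fn invoke(&self, arguments: &str) -> Result<String, String> {
        Ok(format!("sing skill handled: {arguments}"))
    }
}

/// Router: skills register by name, the brain dispatches by tool-call name.
struct SkillRouter {
    skills: HashMap<String, Box<dyn SkillTool>>,
}

impl SkillRouter {
    fn new() -> Self {
        Self { skills: HashMap::new() }
    }
    fn register(&mut self, name: &str, skill: Box<dyn SkillTool>) {
        self.skills.insert(name.to_string(), skill);
    }
    fn dispatch(&self, call: &ToolCall) -> Result<String, String> {
        self.skills
            .get(&call.name)
            .ok_or_else(|| format!("unknown skill: {}", call.name))?
            .invoke(&call.arguments)
    }
}

fn main() {
    let mut router = SkillRouter::new();
    router.register("game", Box::new(GameSkill));
    router.register("sing", Box::new(SingSkill));

    let call = ToolCall {
        name: "game".into(),
        arguments: r#"{"action":"retry_boss"}"#.into(),
    };
    println!("{:?}", router.dispatch(&call));
}
```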
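And a sketch of long-term memory retrieval against Postgres with pgvector, using tokio-postgres. Only the Postgres/pgvector dependency comes from this README; the table name, column names, and the tiny example embedding are assumptions, and the real schema lives in vtuber-brain's own migrations.

```rust
use tokio_postgres::NoTls;

// Illustrative only: `long_term_memory`, its columns, and the 4-dim embedding
// are assumptions; the real schema and embedding model belong to vtuber-brain.

/// Fetch the memories closest to the query embedding using pgvector's `<->`
/// (L2 distance) operator. The embedding is passed as text and cast to `vector`.
async fn recall(
    client: &tokio_postgres::Client,
    query_embedding: &[f32],
    limit: i64,
) -> Result<Vec<String>, tokio_postgres::Error> {
    let vec_literal = format!(
        "[{}]",
        query_embedding
            .iter()
            .map(|v| v.to_string())
            .collect::<Vec<_>>()
            .join(",")
    );
    let rows = client
        .query(
            "SELECT content
             FROM long_term_memory
             ORDER BY embedding <-> $1::vector
             LIMIT $2",
            &[&vec_literal, &limit],
        )
        .await?;
    Ok(rows.iter().map(|r| r.get::<_, String>(0)).collect())
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connection string is a placeholder; point it at your own Postgres instance.
    let (client, connection) =
        tokio_postgres::connect("host=localhost user=vtuber dbname=brain", NoTls).await?;
    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {e}");
        }
    });

    let memories = recall(&client, &[0.1, 0.2, 0.3, 0.4], 5).await?;
    for m in &memories {
        println!("recalled: {m}");
    }
    Ok(())
}
```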
## Quick Start

Install the Rust toolchain (rustup), Python 3.12+, and Postgres 16+ with the pgvector extension. Run `cargo build` and `pip install -r python/requirements.txt`, set `OLLAMA_HOST` or `VLLM_URL` in `.env` to point at your LLM server, then `cargo run` to start the brain on port 8081 (a configuration-loading sketch follows the links below).

## Documentation

- 🏗️ Architecture — Core design and components.
- 📅 Roadmap — Project timeline and milestones.
- 🤝 Contributing — How to join and help.
- 🌳 Project Structure — Full file map.
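For reference, a minimal sketch of the startup configuration described in the Quick Start, assuming the brain reads `OLLAMA_HOST` / `VLLM_URL` from the environment and defaults to port 8081; the `BRAIN_PORT` override and the config types are hypothetical, and the actual loader in vtuber-brain may differ.

```rust
use std::env;

// Illustrative only: OLLAMA_HOST / VLLM_URL and the default port 8081 come
// from the Quick Start; BRAIN_PORT and these types are hypothetical.

#[derive(Debug)]
enum LlmBackend {
    Ollama(String),
    Vllm(String),
}

#[derive(Debug)]
struct BrainConfig {
    backend: LlmBackend,
    port: u16,
}

fn load_config() -> Result<BrainConfig, String> {
    // Prefer OLLAMA_HOST, fall back to VLLM_URL, otherwise fail loudly.
    let backend = if let Ok(host) = env::var("OLLAMA_HOST") {
        LlmBackend::Ollama(host)
    } else if let Ok(url) = env::var("VLLM_URL") {
        LlmBackend::Vllm(url)
    } else {
        return Err("set OLLAMA_HOST or VLLM_URL in .env".to_string());
    };

    // BRAIN_PORT is a hypothetical override; 8081 is the documented default.
    let port = env::var("BRAIN_PORT")
        .ok()
        .and_then(|p| p.parse().ok())
        .unwrap_or(8081);

    Ok(BrainConfig { backend, port })
}

fn main() {
    match load_config() {
        Ok(cfg) => println!("starting brain on port {} with {:?}", cfg.port, cfg.backend),
        Err(e) => eprintln!("config error: {e}"),
    }
}
```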