SIMULATE. OBSERVE. UNDERSTAND.
ZELL is a self-hosted knowledge intelligence and multi-agent simulation platform. It gives teams a command-center experience for running synthetic societies, tracking agent behavior over time, and exploring relationship graphs and search-driven insights, all inside your own infrastructure.
If you want open-control-room energy with full data ownership, this is it.
Docs · Self-Hosting Guide · API Reference · Quick Start · Contributing · Security
ZELL is built for those who want full control over their simulations. It is completely self-hosted by design, keeping all your data, prompts, agent memories, and outputs securely within your own infrastructure with zero telemetry or cloud dependency. With native support for Ollama and LocalAI, it delivers powerful multi-agent orchestration and synthetic society simulations while ensuring complete data sovereignty and privacy.
- Agent Bootstrap: generate rich personas and seed a synthetic world with a single API call.
- Simulation Orchestration: run multi-cycle scenarios with configurable events and time horizons.
- Decision Persistence: every agent decision is stored and queryable across simulation runs.
- Semantic + Fuzzy Search: hybrid search across all agent responses and run histories.
- Graph Relationship Extraction: visualize agent connections in the workbench and atlas views.
- API-first Architecture: build custom frontends or integrate with your existing tooling.
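Since the platform is API-first, a custom frontend is just a client that builds request payloads and posts them to the backend. The endpoint path and field names below are illustrative assumptions, not ZELL's documented API surface; check the API reference for the real schema.

```python
# Hypothetical sketch of scripting against an API-first backend.
# Endpoint and payload fields are assumptions for illustration only.
import json

def build_bootstrap_request(population: int, theme: str) -> dict:
    """Assemble a request body for a hypothetical agent-bootstrap call."""
    return {
        "population": population,  # number of personas to generate
        "theme": theme,            # flavor of the synthetic society
        "seed": 42,                # fixed seed for reproducible worlds
    }

payload = build_bootstrap_request(25, "coastal trading town")

# A real client would POST this to the backend, e.g.:
#   requests.post("http://localhost:8000/api/...", json=payload)
print(json.dumps(payload))
```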
Runtime: Docker (recommended) or Python 3.11+ / Node 18+.
Full setup guide here
```bash
git clone https://github.com/kushvinth/zell
cd zell
docker compose up --build
```
Services start at:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
Backend:
```bash
cd backend
uv sync --all-groups
uv run uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
Frontend:
```bash
cd frontend
npm ci
npm run dev
```
Dev URLs:
- Frontend: http://localhost:5173
- Backend API: http://localhost:8000
Minimal .env (LLM + defaults):
```env
LLM_PROVIDER=ollama
LLM_BASE_URL=http://host.docker.internal:11434
LLM_MODEL=qwen2.5:1.5b-instruct
CORS_ORIGINS=http://localhost:5173,http://127.0.0.1:5173
```
See the full configuration reference for all keys and examples.
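Because all settings are environment-driven, the backend can be reconfigured without code changes. A minimal stdlib sketch of consuming these keys; the fallback defaults here are illustrative, not ZELL's actual defaults:

```python
import os

def llm_settings() -> dict:
    """Read LLM settings from the environment, with illustrative fallbacks."""
    return {
        "provider": os.environ.get("LLM_PROVIDER", "ollama"),
        "base_url": os.environ.get("LLM_BASE_URL", "http://localhost:11434"),
        "model": os.environ.get("LLM_MODEL", "qwen2.5:1.5b-instruct"),
        # CORS origins arrive as one comma-separated string
        "cors_origins": os.environ.get("CORS_ORIGINS", "").split(","),
    }

os.environ["LLM_PROVIDER"] = "ollama"
os.environ["CORS_ORIGINS"] = "http://localhost:5173,http://127.0.0.1:5173"
cfg = llm_settings()
print(cfg["provider"], len(cfg["cors_origins"]))  # -> ollama 2
```

Swapping to LocalAI, for example, is then just a matter of changing `LLM_PROVIDER` and `LLM_BASE_URL` in the environment.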
- FastAPI backend with simulation orchestration, agent runtime, storage engine, and search index.
- React + Vite frontend: dashboard, atlas workbench, search UI, and relationship graph explorer.
- Agent bootstrap engine: persona generation pipeline seeding synthetic societies at scale.
- Session model: stateful agents with evolving memory and per-cycle decision persistence.
- Media pipeline: response storage, run history, and temp lifecycle management.
- Hybrid search: semantic + fuzzy search across all agent responses and run histories.
- Graph extraction: relationship inference for atlas and workbench visualization.
- Dashboard API: run history, response retrieval, and aggregated analytics endpoints.
- Search index: configurable scan depth via `SEMANTIC_SCAN_MAX_RESPONSES`.
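The hybrid search idea above can be sketched as a weighted blend of a semantic score and a fuzzy string score. ZELL's real index presumably uses embedding vectors; in this self-contained sketch a bag-of-words cosine stands in for the semantic side, and `difflib` supplies the fuzzy side:

```python
# Illustrative sketch of hybrid (semantic + fuzzy) score fusion.
import difflib
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity over word counts (embedding stand-in)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def hybrid_score(query: str, doc: str, alpha: float = 0.6) -> float:
    """Blend semantic and fuzzy similarity; alpha weights the semantic side."""
    fuzzy = difflib.SequenceMatcher(None, query.lower(), doc.lower()).ratio()
    return alpha * cosine(query, doc) + (1 - alpha) * fuzzy

docs = [
    "the merchant agent refused the trade offer",
    "weather report for cycle three",
]
ranked = sorted(docs, key=lambda d: hybrid_score("merchant trade", d), reverse=True)
print(ranked[0])  # the merchant response ranks first
```

Fuzzy matching catches near-misses in spelling while the semantic term rewards shared vocabulary, which is why hybrid search degrades more gracefully than either alone.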
- Agent personas: bootstrap-generated profiles with role, memory, and decision style.
- Simulation cycles: multi-step event injection with configurable year and cycle count.
- Run isolation: each simulation run is independently queryable and graph-extractable.
- Decision log: full audit trail of every agent response across all cycles.
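The simulation model above (personas, cycles, run-isolated decision logs) can be sketched with a few dataclasses. Field and class names here are assumptions for illustration, not ZELL's actual schema:

```python
# Minimal sketch: personas, per-cycle decisions, append-only audit trail.
from dataclasses import dataclass, field

@dataclass
class AgentPersona:
    name: str
    role: str
    decision_style: str
    memory: list = field(default_factory=list)  # evolves across cycles

@dataclass
class DecisionRecord:
    run_id: str
    cycle: int
    agent: str
    response: str

class DecisionLog:
    """Append-only audit trail, queryable per run (run isolation)."""
    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, rec: DecisionRecord) -> None:
        self._records.append(rec)

    def for_run(self, run_id: str) -> list[DecisionRecord]:
        return [r for r in self._records if r.run_id == run_id]

log = DecisionLog()
log.record(DecisionRecord("run-a", 1, "Mira", "opens a market stall"))
log.record(DecisionRecord("run-b", 1, "Mira", "leaves town"))
print(len(log.for_run("run-a")))  # -> 1: runs stay isolated
```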
- Docker Compose: one-command full-stack startup.
- Persistent storage: `agents.db` + `agents_data/` volume mounts for prod.
- CORS config: environment-driven origin allowlisting.
- LLM failover: swap providers and models via env vars with no code changes.
- Health endpoints: `/health` and `/api/llm/health` for monitoring integration.
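The persistence points above translate into Compose volume mounts; a sketch, where the container-side paths are assumptions to adapt to the repo's actual `docker-compose.yml`:

```yaml
# Illustrative override -- container paths are assumptions, not the
# repo's actual compose layout.
services:
  backend:
    volumes:
      - ./backend/agents.db:/app/agents.db      # SQLite store survives restarts
      - ./backend/agents_data:/app/agents_data  # run artifacts and responses
    environment:
      - CORS_ORIGINS=https://zell.example.com   # restrict to the real frontend
```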
- Backend checks: Ruff lint, Ruff format, Python compile validation on every push and PR.
- Frontend checks: ESLint, format check, TypeScript type-check, production build.
- GitHub Actions: runs on both push and pull requests; keeps the repo merge-ready.
- Default: all data stays local; no telemetry, no external calls except your configured LLM endpoint.
- CORS: restrict `CORS_ORIGINS` to your real frontend domain in production.
- Storage: mount `agents.db` and `agents_data/` as persistent volumes and exclude them from public access.
- Reverse proxy: always put the backend behind Nginx / Caddy / Traefik before exposing it publicly.
- LLM traffic: if using a remote LLM backend, ensure the connection is over a trusted network or VPN.
Details: Security policy · Self-hosting guide
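For the reverse-proxy recommendation above, a minimal Nginx sketch; the domain, ports, and path split are placeholders to adapt to your deployment:

```nginx
# Illustrative reverse-proxy block -- names and paths are placeholders.
server {
    listen 443 ssl;
    server_name zell.example.com;

    # Frontend
    location / {
        proxy_pass http://127.0.0.1:3000;
    }

    # Backend API -- only route the paths you actually expose
    location /api/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Terminating TLS at the proxy also keeps remote LLM traffic off the backend's plain-HTTP port.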
- Put the backend behind a reverse proxy (Nginx / Caddy / Traefik).
- Restrict `CORS_ORIGINS` to your real frontend domain only.
- Use persistent Docker volumes for `backend/agents_data` and `backend/agents.db`.
- Pin your LLM model name and set resource limits before running scale tests.
- Configure log shipping and monitoring from day one.
Details: Self-hosting guide
Use these when you're past the quick start and want the deeper reference.
- Start with the docs index for navigation and "what's where."
- Follow the self-hosting guide for production deployment.
- Review the security policy before exposing anything publicly.
- Read the contribution guide before submitting a PR.
- Check the support guide if you're stuck.
See CONTRIBUTING.md for guidelines and how to submit PRs. AI/vibe-coded PRs welcome!
- Contribution guide: CONTRIBUTING.md
- Code of conduct: CODE_OF_CONDUCT.md
- Security policy: SECURITY.md
- Support guide: SUPPORT.md
- Funding: .github/FUNDING.yml
Active development. PRs are welcome.
If you are building a private, self-hosted intelligence cockpit for knowledge exploration and agent behavior analysis, ZELL is built for that mission.