ZELL

SIMULATE. OBSERVE. UNDERSTAND.


ZELL is a self-hosted knowledge intelligence and multi-agent simulation platform. It gives teams a command-center experience for running synthetic societies, tracking agent behavior over time, and exploring relationship graphs and search-driven insights, all inside your own infrastructure.

If you want open-source control-room energy with full data ownership, this is it.

Docs · Self-Hosting Guide · API Reference · Quick Start · Contributing · Security


Why ZELL

ZELL is built for those who want full control over their simulations. It is completely self-hosted by design, keeping all your data, prompts, agent memories, and outputs securely within your own infrastructure with zero telemetry or cloud dependency. With native support for Ollama and LocalAI, it delivers powerful multi-agent orchestration and synthetic society simulations while ensuring complete data sovereignty and privacy.


Core Capabilities


Quick Start

Runtime: Docker (recommended) or Python 3.11+ / Node 18+.

Full setup guide here

Option 1: Docker Compose (recommended)

git clone https://github.com/kushvinth/zell
cd zell
docker compose up --build

Services start at:

Option 2: Local Development

Backend:

cd backend
uv sync --all-groups
uv run uvicorn main:app --reload --host 0.0.0.0 --port 8000

Frontend:

cd frontend
npm ci
npm run dev

Dev URLs:




Configuration

Minimal .env (LLM + defaults):

LLM_PROVIDER=ollama
LLM_BASE_URL=http://host.docker.internal:11434
LLM_MODEL=qwen2.5:1.5b-instruct
CORS_ORIGINS=http://localhost:5173,http://127.0.0.1:5173

Full configuration reference (all keys + examples).
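
As a rough illustration of how these keys could be consumed, here is a minimal, hypothetical settings loader. The defaults mirror the example above, but the `Settings` class and its field names are an illustrative sketch, not ZELL's actual code:

```python
import os
from dataclasses import dataclass, field


@dataclass
class Settings:
    """Hypothetical loader mirroring the .env keys shown above."""
    llm_provider: str = "ollama"
    llm_base_url: str = "http://host.docker.internal:11434"
    llm_model: str = "qwen2.5:1.5b-instruct"
    cors_origins: list = field(default_factory=list)

    @classmethod
    def from_env(cls) -> "Settings":
        # CORS_ORIGINS is a comma-separated list; split it into clean entries.
        raw = os.environ.get("CORS_ORIGINS", "http://localhost:5173")
        return cls(
            llm_provider=os.environ.get("LLM_PROVIDER", cls.llm_provider),
            llm_base_url=os.environ.get("LLM_BASE_URL", cls.llm_base_url),
            llm_model=os.environ.get("LLM_MODEL", cls.llm_model),
            cors_origins=[o.strip() for o in raw.split(",") if o.strip()],
        )
```

Unset keys fall back to the defaults above, so a bare `.env` still yields a working local Ollama setup.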


Everything we built so far

Core platform

  • FastAPI backend with simulation orchestration, agent runtime, storage engine, and search index.
  • React + Vite frontend: dashboard, atlas workbench, search UI, and relationship graph explorer.
  • Agent bootstrap engine: persona generation pipeline seeding synthetic societies at scale.
  • Session model: stateful agents with evolving memory and per-cycle decision persistence.
  • Media pipeline: response storage, run history, and temp lifecycle management.
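
The bootstrap idea above — seeding a synthetic society at scale with reproducible personas — can be sketched like this. The role/trait pools, the `bootstrap_personas` name, and the persona shape are hypothetical stand-ins for the real pipeline:

```python
import random

# Illustrative pools; the real persona generator is LLM-driven.
ROLES = ["farmer", "merchant", "scholar", "healer"]
TRAITS = ["cautious", "bold", "curious", "stubborn"]


def bootstrap_personas(n: int, seed: int = 42) -> list[dict]:
    """Seed a synthetic society of n agents, reproducibly."""
    rng = random.Random(seed)  # fixed seed -> same society every run
    return [
        {
            "id": f"agent-{i:04d}",
            "role": rng.choice(ROLES),
            "decision_style": rng.choice(TRAITS),
            "memory": [],  # filled in as simulation cycles run
        }
        for i in range(n)
    ]
```

Seeding the RNG is what makes a bootstrapped society repeatable across runs, which matters when comparing simulation outcomes.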

Knowledge surfaces

  • Hybrid search: semantic + fuzzy search across all agent responses and run histories.
  • Graph extraction: relationship inference for atlas and workbench visualization.
  • Dashboard API: run history, response retrieval, and aggregated analytics endpoints.
  • Search index: configurable scan depth via SEMANTIC_SCAN_MAX_RESPONSES.
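
The hybrid idea — blending semantic similarity with fuzzy string matching — can be sketched roughly as below. The bag-of-words "embedding", the `alpha` weight, and the function names are illustrative stand-ins, not ZELL's actual index:

```python
from collections import Counter
from difflib import SequenceMatcher
from math import sqrt


def semantic_score(query: str, doc: str) -> float:
    """Cosine similarity over word counts (stand-in for real embeddings)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = sqrt(sum(v * v for v in q.values())) * sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0


def fuzzy_score(query: str, doc: str) -> float:
    """Character-level match ratio, tolerant of typos and inflections."""
    return SequenceMatcher(None, query.lower(), doc.lower()).ratio()


def hybrid_search(query: str, docs: list[str], alpha: float = 0.7) -> list[tuple[str, float]]:
    """Rank docs by a weighted blend of semantic and fuzzy scores."""
    scored = [
        (d, alpha * semantic_score(query, d) + (1 - alpha) * fuzzy_score(query, d))
        for d in docs
    ]
    return sorted(scored, key=lambda x: x[1], reverse=True)
```

The fuzzy component rescues queries that miss the exact wording of an agent response, while the semantic component handles topical overlap.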

Agents + simulation

  • Agent personas: bootstrap-generated profiles with role, memory, and decision style.
  • Simulation cycles: multi-step event injection with configurable year and cycle count.
  • Run isolation: each simulation run is independently queryable and graph-extractable.
  • Decision log: full audit trail of every agent response across all cycles.
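
In spirit, the cycle model above looks something like the following. The event and log shapes are hypothetical, and `decide()` is a deterministic stub where the real runtime would call the configured LLM:

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    role: str
    memory: list = field(default_factory=list)

    def decide(self, event: str) -> str:
        # Stub: a real agent would prompt the configured model here.
        decision = f"{self.name} ({self.role}) reacts to: {event}"
        self.memory.append(decision)  # evolving per-agent memory
        return decision


def run_simulation(agents: list, events: list) -> list:
    """Inject one event per cycle; persist every decision for the audit trail."""
    decision_log = []
    for cycle, event in enumerate(events, start=1):
        for agent in agents:
            decision_log.append({
                "cycle": cycle,
                "agent": agent.name,
                "event": event,
                "decision": agent.decide(event),
            })
    return decision_log
```

Because every decision is appended to the log with its cycle number, a finished run can be replayed, queried, or graph-extracted after the fact.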

Runtime + ops

CI + quality gates

  • Backend checks: Ruff lint, Ruff format, Python compile validation on every push and PR.
  • Frontend checks: ESLint, format check, TypeScript type-check, production build.
  • GitHub Actions: runs on both push and pull requests; keeps the repo merge-ready.

Security model (important)

  • Default: all data stays local; no telemetry, no external calls except your configured LLM endpoint.
  • CORS: restrict CORS_ORIGINS to your real frontend domain in production.
  • Storage: agents.db and agents_data/ should be mounted as persistent volumes and excluded from public access.
  • Reverse proxy: always put the backend behind Nginx / Caddy / Traefik before exposing it publicly.
  • LLM traffic: if using a remote LLM backend, ensure the connection is over a trusted network or VPN.

Details: Security policy · Self-hosting guide


Production notes

  • Put the backend behind a reverse proxy (Nginx / Caddy / Traefik).
  • Restrict CORS_ORIGINS to your real frontend domain only.
  • Use persistent Docker volumes for backend/agents_data and backend/agents.db.
  • Pin your LLM model name and set resource limits before running scale tests.
  • Configure log shipping and monitoring from day one.
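
For the persistent-volume point above, a compose fragment might look like this; the container paths (`/app/...`) are assumptions, so check the shipped `docker-compose.yml` for the real mount points:

```yaml
services:
  backend:
    # ...existing build/ports config...
    volumes:
      - agents_data:/app/agents_data        # run history, responses, media
      - ./backend/agents.db:/app/agents.db  # SQLite store; include it in backups

volumes:
  agents_data:
```

Without these mounts, agent memories and run histories live only inside the container and vanish on rebuild.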

Details: Self-hosting guide


Docs

Use these when you're past the quick start and want the deeper reference.


Community and governance

See CONTRIBUTING.md for guidelines and how to submit PRs. AI/vibe-coded PRs welcome!


Project status

Active development. PRs are welcome.

If you are building a private, self-hosted intelligence cockpit for knowledge exploration and agent behavior analysis, ZELL is built for that mission.

About

AI Civilization Simulation Engine
