This repository is the central monorepo for the Virtual Human Memory (VHM) project, a multi-agent, psychologically grounded, long-term memory system for virtual humans. Our goal is to enable emergent identity through the stories virtual humans tell over time.
➡️ View the full Project Page here
For researchers: PURE_SHOWCASE.md — ready-to-use text for uploading this project to Pure (institutional research portal).
Chat with this repo: DeepWiki — index and chat with this repository.
The VHM system is built on a distributed, microservices architecture orchestrated by Kubernetes. This monorepo contains all the code for the following services:
- `workers/indexer`: Ingests and indexes new memories.
- `workers/resonance`: Calculates the emotional and contextual significance of memories.
- `workers/reteller`: Weaves memories into coherent, dynamic narratives.
- `common/utils`: Shared utilities and data models for all services.
- `k8s/`: All Kubernetes manifests for deploying the system.
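As a rough sketch of the data these services pass around, a memory anchor might look like the model below. The field names here are illustrative assumptions, not the actual schema in `common/utils`:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MemoryAnchor:
    """Illustrative shape of a memory record exchanged between workers.

    Field names are hypothetical; the real models live in common/utils.
    """
    anchor_id: str
    text: str
    created_at: datetime
    resonance: float = 0.0  # scored later by the resonance worker

anchor = MemoryAnchor(
    anchor_id="a-001",
    text="First walk through the virtual garden",
    created_at=datetime.now(timezone.utc),
)
print(anchor.resonance)  # 0.0 until the resonance worker scores it
```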
- Typed configuration surface backed by Pydantic ensures consistent defaults across dev, staging, and prod.
- Structured logging + retry-aware Qdrant calls keep recall behaviour transparent under load.
- Deterministic request/response models now power richer unit tests (`uv run pytest -k resonance`) for CI confidence.
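The typed-configuration idea can be sketched with stdlib dataclasses standing in for Pydantic; the class name, fields, and environment variables below are assumptions for illustration, not the project's actual settings surface:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkerSettings:
    """Stdlib stand-in for a Pydantic settings model: every field is typed
    and has a default, so dev/staging/prod only override what differs."""
    kafka_bootstrap: str = "localhost:9092"
    qdrant_url: str = "http://localhost:6333"
    retry_attempts: int = 3

    @classmethod
    def from_env(cls) -> "WorkerSettings":
        # Pydantic's settings classes do this parsing automatically;
        # here we read (hypothetical) env vars by hand.
        return cls(
            kafka_bootstrap=os.getenv("VHM_KAFKA_BOOTSTRAP", cls.kafka_bootstrap),
            qdrant_url=os.getenv("VHM_QDRANT_URL", cls.qdrant_url),
            retry_attempts=int(os.getenv("VHM_RETRY_ATTEMPTS", cls.retry_attempts)),
        )

settings = WorkerSettings.from_env()
print(settings.retry_attempts)
```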
This project uses uv for Python environment management and Minikube for local Kubernetes deployment.
- Docker Desktop
- Minikube
- uv (Python package manager)
The entire deployment process has been streamlined. For a complete guide on the fixes that led to our stable system, please see the Minikube Worker Recovery Guide.
The quick-start steps are:
```bash
# 1. Start minikube and configure the local environment
./k8s/scripts/setup-cluster.sh

# 2. Point your Docker client to Minikube's Docker daemon
eval "$(minikube docker-env)"

# 3. Build the worker images
#    (See the recovery guide for the full docker build commands)
docker build -t vhm-indexer:0.1.3 ...
docker build -t vhm-resonance:0.1.3 ...
docker build -t vhm-reteller:0.1.3 ...

# 4. Deploy the full application to Minikube
kubectl apply -f k8s/infrastructure/
kubectl apply -f k8s/config/
kubectl apply -f k8s/workers/

# 5. When you are finished, unset the Docker environment variable
eval "$(minikube docker-env -u)"
```

Our Kubernetes manifests are configured to pull images from the GitHub Container Registry (ghcr.io).
```bash
docker login ghcr.io -u YOUR_GITHUB_USERNAME
```

To build and push an image (e.g., the indexer), follow this pattern:
```bash
# Define variables
export ORG="Research-Group-IxD"
export IMAGE_NAME="vhm-indexer"
export TAG="0.1.3"

# 1. Build the image using the shared Dockerfile
docker build -t "${IMAGE_NAME}:${TAG}" \
  -f docker/worker.Dockerfile \
  --build-arg WORKER_MODULE=workers.vhm_indexer.main .

# 2. Tag the image for the registry
docker tag "${IMAGE_NAME}:${TAG}" "ghcr.io/${ORG}/${IMAGE_NAME}:${TAG}"

# 3. Push the image to the registry
docker push "ghcr.io/${ORG}/${IMAGE_NAME}:${TAG}"
```

Once your images are pushed, you can deploy them to any Kubernetes cluster.
```bash
# This will pull the newly pushed images from ghcr.io
kubectl apply -k k8s/
```

We use GitHub Actions to keep the dojo disciplined:
- CI (`ci.yaml`) runs on every push and pull request. It installs dependencies with `uv`, runs the placeholder Pytest suite, and builds each worker image using `docker/worker.Dockerfile`. These smoke tests will start catching real regressions once you replace the placeholders with real assertions.
- Publish (`publish.yaml`) can be triggered manually from the Actions tab. Choose the worker and tag, and the workflow will build the image and push it to `ghcr.io/research-group-ixd` using the `GITHUB_TOKEN`.
To run the same checks locally:
```bash
# Install dependencies and run the placeholder tests
uv sync
uv run pytest

# Build a worker image the same way CI does
docker build -f docker/worker.Dockerfile \
  --build-arg WORKER_MODULE=workers.vhm_indexer.main \
  -t vhm-indexer:test .
```

This project includes a comprehensive test suite for the indexer worker, with unit and integration tests. Tests can be run using pytest:
```bash
# Run all tests
uv run pytest

# Run indexer tests specifically
uv run pytest workers/indexer/tests/ -v
```

For detailed information about the test structure, how to navigate tests, and best practices, see the Test Guide.
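For a flavour of what a unit test in this style looks like, here is a minimal pytest module. The helper under test is invented for illustration; the real suite targets the indexer's actual modules:

```python
# Illustrative pytest module; split_into_chunks is a hypothetical
# stand-in for text-chunking logic inside workers/indexer.
def split_into_chunks(text: str, max_len: int) -> list[str]:
    """Greedily pack whole words into chunks of at most max_len chars."""
    words, chunks, current = text.split(), [], ""
    for word in words:
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_len:
            current = candidate
        else:
            chunks.append(current)
            current = word
    if current:
        chunks.append(current)
    return chunks

def test_chunks_respect_max_length():
    chunks = split_into_chunks("one two three four five", max_len=9)
    assert all(len(c) <= 9 for c in chunks)
    # No words are lost or reordered
    assert " ".join(chunks) == "one two three four five"
```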
This repository includes several tools and demo applications for testing and demonstrating the system:
The Indexer Service Demo provides a visual interface to test and explore the memory system:
```bash
uv run streamlit run tools/demo/indexer_demo.py
```

Features:
- Create and send memory anchors
- Visualize the complete processing flow (Kafka → Indexer → Qdrant)
- Inspect stored anchors and their embeddings
- Simulate time passing and test recall with temporal decay
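The temporal-decay behaviour the demo exercises can be sketched as exponential decay of recall strength with age; the formula and half-life below are illustrative assumptions, not the system's actual parameters:

```python
def recall_strength(base_score: float, age_hours: float,
                    half_life_hours: float = 24.0) -> float:
    """Illustrative decay: a similarity score weighted down as the memory ages.

    After one half-life the memory recalls at half its original strength.
    """
    decay = 0.5 ** (age_hours / half_life_hours)
    return base_score * decay

print(recall_strength(0.9, age_hours=0))   # 0.9 — fresh memory, no decay
print(recall_strength(0.9, age_hours=24))  # 0.45 — one half-life old
```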
See tools/demo/README.md for detailed usage instructions.
The Three Retells Demo tests the complete pipeline from anchor creation to narrative generation:
```bash
uv run python tools/demo_three_retells.py
```

This script:
- Seeds three memory anchors with different timestamps
- Sends a recall request
- Waits for resonance beats
- Waits for retelling
- Logs everything to a JSON file
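The steps above can be sketched in a few lines of Python. The anchor texts, event fields, and log structure here are assumptions for illustration; the real pipeline does these steps as Kafka round-trips (see `tools/demo_three_retells.py`):

```python
import json
from datetime import datetime, timedelta, timezone

def run_demo(log_path: str = "three_retells_log.json") -> dict:
    now = datetime.now(timezone.utc)
    # 1. Seed three anchors with staggered timestamps (a week, a day, an hour ago)
    anchors = [
        {"id": f"anchor-{i}", "text": text, "ts": (now - delta).isoformat()}
        for i, (text, delta) in enumerate([
            ("Met the gardener", timedelta(days=7)),
            ("Lost the old key", timedelta(days=1)),
            ("Heard rain on the roof", timedelta(hours=1)),
        ])
    ]
    events = {"anchors": anchors, "recall": None, "beats": [], "retelling": None}
    # 2.-4. In the real demo these are asynchronous waits on the pipeline;
    # here we just record placeholder events in the order they would arrive.
    events["recall"] = {"query": "what happened recently?", "sent_at": now.isoformat()}
    events["beats"] = [{"anchor_id": a["id"], "resonance": 0.5} for a in anchors]
    events["retelling"] = "…narrative would appear here…"
    # 5. Log everything to a JSON file
    with open(log_path, "w") as f:
        json.dump(events, f, indent=2)
    return events

events = run_demo()
print(len(events["anchors"]))  # 3
```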
See tools/README.md for more information about available tools.
For detailed information about the project's architecture, research goals, and results, please see our full project page.