Fan project — This is not the original mod. I'm a fan who wanted to expand on the great work done in the original Steam Workshop mod. I'm not the author, I don't claim to be, and I'm just doing this for fun. All credit goes to the original creator.
A Flask-based LLM server that provides intelligent, dynamic NPC dialogue for Kenshi. Every NPC you encounter gets a unique personality, backstory, knowledge of the world, and the ability to hold meaningful conversations — powered by large language models.
- Dynamic NPC Dialogue — LLM-powered conversations with talk, whisper, and yell modes
- Auto-Generated Profiles — Every NPC gets a unique personality, backstory, speech quirks, and faction knowledge
- LLM-Enhanced Profiles — Seed profiles are upgraded with rich, lore-accurate personalities
- Race & Gender-Aware Names — Unique name pools per race (Greenlander, Scorchlander, Shek, Hive, Skeleton)
- Canon Character Support — Named characters retain their canon identities
- Ambient Banter — Radiant conversations between nearby NPCs
- Mission System — Data-driven mission generation and reward handling
- World State Tracking — Persistent event history, character lifecycles, and narrative synthesis
- Multi-Campaign Support — Isolated state per campaign
- llama.cpp Integration — Run local models with OpenAI-compatible API
Subscribe to and install Sentient Sands on Steam Workshop. Follow the workshop page instructions for initial setup and configuration.
To use this updated version:
- Find your Workshop mod folder (typically `Steam\steamapps\workshop\content\233860\<workshop_id>\`)
- Replace all files in the mod's `server/scripts/` folder with the contents of this repo
Follow the Steam Workshop page instructions for configuring your LLM provider and models in config/providers.json and config/models.json.
Launch Kenshi through re_Kenshi and start the mod. The server will start automatically and connect to the game.
Run llama.cpp's server with an OpenAI-compatible endpoint:

```sh
./llama-server -m your-model.gguf --host 127.0.0.1 --port 11435 --ctx-size 16384
```

Then configure config/providers.json:
```json
{
  "llamacpp": {
    "api_key": "none",
    "base_url": "http://127.0.0.1:11435/v1"
  }
}
```

And config/models.json:
```json
{
  "kenshi-qwen": {
    "provider": "llamacpp",
    "model": "Qwen3.5-9B-UD-Q6_K_XL.gguf"
  },
  "kenshi-gemma": {
    "provider": "llamacpp",
    "model": "gemma-4-E4B-it-Q8_K_XL.gguf"
  }
}
```

GPL-3.0 — see LICENSE for details.
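Once llama-server is running, the endpoint defined in config/providers.json can be smoke-tested directly with `requests` (a minimal sketch, not part of the mod; the host, port, and model filename are taken from the example configs above, and the `/chat/completions` path follows the OpenAI convention the server advertises):

```python
import requests


def build_chat_request(model: str, prompt: str, max_tokens: int = 64) -> dict:
    """Payload for an OpenAI-compatible /v1/chat/completions request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def ask(base_url: str, model: str, prompt: str) -> str:
    """POST one chat completion and return the reply text."""
    resp = requests.post(
        f"{base_url}/chat/completions",
        json=build_chat_request(model, prompt),
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("http://127.0.0.1:11435/v1",
              "Qwen3.5-9B-UD-Q6_K_XL.gguf",
              "Say hello like a Kenshi shopkeeper."))
```

If this prints a reply, the provider side is wired correctly and the mod's config should work against the same endpoint.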
Developer information
Install dependencies:

```sh
# Using uv (recommended)
uv sync

# Or with pip
pip install flask requests
```

Run the test suite:

```sh
python -m pytest tests/ -v
```

115 tests pass, 0 skipped.
Lint and format:

```sh
uv run ruff check .
uv run ruff format .
```

Run the structural audit:

```sh
python tools/structural_audit.py
```

Enforces the file-size (600 LOC) and function-size (100 LOC) limits.
Game DLL → Named Pipe → Flask Routes → Services → LLM Provider
Service-oriented architecture with dependency injection via ServerRuntime. See PROJECT_MAP.md for a detailed module index and data flow documentation.
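The injection pattern might look roughly like this (a sketch under assumed names: only ServerRuntime comes from the text; the service and provider types here are hypothetical stand-ins):

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical provider interface: anything mapping a prompt to a reply.
LLMProvider = Callable[[str], str]


@dataclass
class DialogueService:
    """Builds NPC replies by delegating to the injected LLM provider."""
    llm: LLMProvider

    def respond(self, npc_name: str, player_line: str) -> str:
        prompt = f"{npc_name} hears: {player_line}"
        return self.llm(prompt)


@dataclass
class ServerRuntime:
    """Wires services to their dependencies so routes never construct them."""
    llm: LLMProvider
    dialogue: DialogueService = field(init=False)

    def __post_init__(self) -> None:
        self.dialogue = DialogueService(llm=self.llm)


# Routes receive the runtime instead of building services themselves,
# so tests can swap the real provider for a stub:
runtime = ServerRuntime(llm=lambda prompt: f"[stubbed reply to: {prompt}]")
print(runtime.dialogue.respond("Beep", "Hello"))
# prints: [stubbed reply to: Beep hears: Hello]
```

Because all construction happens in one place, swapping the LLM backend (or faking it in the test suite) never touches route or service code.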