mihaiboldeanu/SentientSands

Sentient Sands — Kenshi AI Mod

Fan project — This is not the original mod. I'm a fan who wanted to expand on the great work done in the original Steam Workshop mod. I'm not the author, I don't claim to be, and I'm just doing this for fun. All credit goes to the original creator.

A Flask-based LLM server that provides intelligent, dynamic NPC dialogue for Kenshi. Every NPC you encounter gets a unique personality, backstory, knowledge of the world, and the ability to hold meaningful conversations — powered by large language models.

Features

  • Dynamic NPC Dialogue — LLM-powered conversations with talk, whisper, and yell modes
  • Auto-Generated Profiles — Every NPC gets a unique personality, backstory, speech quirks, and faction knowledge
  • LLM-Enhanced Profiles — Seed profiles are upgraded with rich, lore-accurate personalities
  • Race & Gender-Aware Names — Unique name pools per race (Greenlander, Scorchlander, Shek, Hive, Skeleton)
  • Canon Character Support — Named characters retain their canon identities
  • Ambient Banter — Radiant conversations between nearby NPCs
  • Mission System — Data-driven mission generation and reward handling
  • World State Tracking — Persistent event history, character lifecycles, and narrative synthesis
  • Multi-Campaign Support — Isolated state per campaign
  • llama.cpp Integration — Run local models with OpenAI-compatible API

Installation

1. Install the Steam Workshop Mod

Subscribe to and install Sentient Sands on Steam Workshop. Follow the workshop page instructions for initial setup and configuration.

2. Replace Scripts with This Version

To use this updated version:

  1. Find your Workshop mod folder (typically Steam\steamapps\workshop\content\233860\<workshop_id>\)
  2. Replace all files in the mod's server/scripts/ folder with the contents of this repo
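If you reinstall often, the copy in step 2 can be scripted. A minimal sketch in Python (the `install_scripts` helper is illustrative, not part of the mod; you must supply your own resolved workshop folder path, since the `<workshop_id>` part varies per install):

```python
import shutil
from pathlib import Path

def install_scripts(repo_dir: str, workshop_dir: str) -> None:
    """Copy this repo's contents over the mod's server/scripts/ folder,
    overwriting any files that already exist there."""
    target = Path(workshop_dir) / "server" / "scripts"
    shutil.copytree(repo_dir, target, dirs_exist_ok=True)
```

`dirs_exist_ok=True` (Python 3.8+) lets the copy overwrite the existing scripts in place instead of failing because the target folder already exists.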

3. Configure Providers and Models

Follow the Steam Workshop page instructions for configuring your LLM provider and models in config/providers.json and config/models.json.

4. Run the Game

Launch Kenshi through re_Kenshi and start the mod. The server will start automatically and connect to the game.

LLM Providers

llama.cpp

Run llama.cpp's server with an OpenAI-compatible endpoint:

./llama-server -m your-model.gguf --host 127.0.0.1 --port 11435 --ctx-size 16384

Then configure config/providers.json:

{
  "llamacpp": {
    "api_key": "none",
    "base_url": "http://127.0.0.1:11435/v1"
  }
}

And config/models.json:

{
  "kenshi-qwen": {
    "provider": "llamacpp",
    "model": "Qwen3.5-9B-UD-Q6_K_XL.gguf"
  },
  "kenshi-gemma": {
    "provider": "llamacpp",
    "model": "gemma-4-E4B-it-Q8_K_XL.gguf"
  }
}
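Because llama.cpp's server exposes an OpenAI-compatible API, you can sanity-check the endpoint outside the game before launching Kenshi. A minimal sketch using only the Python standard library (the base URL and model name match the example configs above; the payload shape follows the standard OpenAI chat-completions format):

```python
import json
import urllib.request

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are an NPC in the world of Kenshi."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 128,
    }

def chat(base_url: str, payload: dict) -> str:
    """POST the payload to /chat/completions and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With llama-server running as shown above, `chat("http://127.0.0.1:11435/v1", build_chat_request("kenshi-qwen", "Hello"))` should return the model's reply.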

License

GPL-3.0 — see LICENSE for details.


Development


Install Dependencies

# Using uv (recommended)
uv sync

# Or with pip
pip install flask requests

Running Tests

python -m pytest tests/ -v

115 tests pass, 0 skipped.

Linting

uv run ruff check .
uv run ruff format .

Structural Audit

python tools/structural_audit.py

Enforces file size (600 LOC) and function size (100 LOC) limits.
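The file-size half of that check amounts to counting lines per source file; a rough sketch of the idea (this is not the real `tools/structural_audit.py`, and the actual tool's detection logic may differ):

```python
from pathlib import Path

FILE_LIMIT = 600  # max lines per source file
FUNC_LIMIT = 100  # max lines per function (not checked in this sketch)

def oversized_files(root: str, limit: int = FILE_LIMIT) -> list[tuple[str, int]]:
    """Return (path, line_count) for every .py file under root that
    exceeds the line limit."""
    violations = []
    for path in Path(root).rglob("*.py"):
        count = len(path.read_text(encoding="utf-8").splitlines())
        if count > limit:
            violations.append((str(path), count))
    return violations
```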

Architecture

Game DLL → Named Pipe → Flask Routes → Services → LLM Provider

Service-oriented architecture with dependency injection via ServerRuntime. See PROJECT_MAP.md for a detailed module index and data flow documentation.
