A structured writing environment for long-form fiction, powered by OpenAI and Anthropic. EMBER combines a scene-card pipeline, a memory backbone, and a stateful draft engine to help authors write consistently across chapters and books — not just generate text.
Most AI writing tools are stateless: each generation forgets what came before. EMBER is different. It maintains a persistent Memory Backbone — a living record of canon facts, character states, object tracking, open plot threads, and reader promises — and uses it to ensure every generated scene is consistent with everything that came before.
Built for commercial fiction. Designed for authors who want AI as a disciplined co-pilot, not an autocomplete.
Structured scene direction using YAML-like cards with hard (canon) and soft (guidance) fields. The writer reads the card but is not bound by its exact language.
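A scene card separating hard from soft fields could be sketched like this (a hypothetical shape in TypeScript; the field names are illustrative, not EMBER's actual schema):

```typescript
// Hypothetical scene-card shape: "hard" fields are canon constraints the
// draft must honor; "soft" fields are guidance the writer may depart from.
interface SceneCard {
  id: string;
  hard: {
    pov: string;           // locked point-of-view character
    location: string;      // canon setting for the scene
    mustInclude: string[]; // facts or events the scene cannot omit
  };
  soft: {
    tone?: string;    // suggested mood, freely reinterpretable
    beats?: string[]; // loose beat suggestions, not binding
  };
}

const card: SceneCard = {
  id: "act1-ch3-s2",
  hard: {
    pov: "Mara",
    location: "harbor warehouse",
    mustInclude: ["Mara finds the ledger"],
  },
  soft: { tone: "uneasy", beats: ["arrival", "discovery", "interruption"] },
};

console.log(card.hard.pov); // "Mara"
```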
- Canon Ledger — locked facts that can never be contradicted
- Character State Ledger — who knows what, who wants what, and how they've shifted
- Object Ledger — tracks physical objects, their holders, and their locations across scenes
- Knowledge Ledger — information asymmetry between characters and reader
- Promise Ledger — setup/payoff tracking for mysteries, emotional arcs, and plot threads
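The five ledgers might be modeled roughly as follows (an illustrative sketch only; the real definitions live in `lib/story-schema.ts` and will differ in detail):

```typescript
// Illustrative ledger shapes; names and fields are assumptions.
interface CanonFact {
  id: string;
  fact: string;          // a locked fact that can never be contradicted
  lockedAtScene: string;
}

interface CharacterState {
  name: string;
  knows: string[]; // facts this character is aware of
  wants: string[]; // current goals and desires
}

interface ObjectEntry {
  object: string;
  holder: string;   // who currently has it
  location: string; // where it physically is
}

interface KnowledgeEntry {
  fact: string;
  knownToReader: boolean;      // reader/character information asymmetry
  knownToCharacters: string[];
}

interface PromiseEntry {
  id: string;
  setupScene: string;
  payoffScene?: string; // undefined while the thread is still open
}

const dagger: ObjectEntry = {
  object: "dagger",
  holder: "Mara",
  location: "harbor warehouse",
};
console.log(dagger.holder); // "Mara"
```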
After each draft is accepted, a typed BookStateDiff is extracted — a structured diff of what changed in the world (objects moved, knowledge revealed, promises reinforced). Human approval gates what enters the canon.
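The extraction and approval step could be sketched as follows (hypothetical; the real `BookStateDiff` type in `lib/story-schema.ts` is richer than this):

```typescript
// Hypothetical BookStateDiff shape: a structured record of what a scene
// changed in the story world.
interface BookStateDiff {
  objectMoves: { object: string; from: string; to: string }[];
  knowledgeReveals: { fact: string; learnedBy: string[] }[];
  promiseReinforcements: string[]; // promise ids touched this scene
  proposedCanonFacts: string[];    // candidates only, not yet canon
}

// Nothing enters canon automatically: a human approves each candidate.
function approveCanon(diff: BookStateDiff, approved: Set<string>): string[] {
  return diff.proposedCanonFacts.filter((f) => approved.has(f));
}

const diff: BookStateDiff = {
  objectMoves: [{ object: "ledger", from: "safe", to: "Mara" }],
  knowledgeReveals: [{ fact: "the ledger exists", learnedBy: ["Mara"] }],
  promiseReinforcements: ["mystery-ledger"],
  proposedCanonFacts: ["The ledger lists harbor bribes", "Mara can pick locks"],
};

const promoted = approveCanon(diff, new Set(["Mara can pick locks"]));
console.log(promoted); // only the approved fact is promoted
```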
- Lean remote pipeline via OpenAI or Anthropic
- Staged pipeline: length_control → extract → continuity → quality_eval
- Quality audit warns, never auto-rewrites
- Human Edit Memory: accepted edits are stored and influence future prompts
A branching-fiction storefront and reader at public/legacy/, built in vanilla JS with localStorage-based progress resumption.
An in-studio chat assistant (OpenAI / Anthropic / local Gemma) scoped to project, act, chapter, or scene context.
Direct prose drafting with integrated scene context, word count tracking, and AI copilot selection.

High-level project control across multiple phases, from foundation to market readiness.

Fine-grained narrative rules and model orchestration instructions to maintain a consistent voice.

Centralized tracking of characters, locations, and objects with automated state updates.

Context-aware brainstorming and strategic feedback on manuscript chapters and scene cards.

| Layer | Technology |
|---|---|
| Framework | Next.js 15, React 19 |
| Language | TypeScript 5 |
| AI Providers | OpenAI SDK, Anthropic SDK |
| Database | Supabase (PostgreSQL) |
| Styling | Vanilla CSS (globals.css) |
| Local AI | MLX / Gemma 4 (optional) |
ember-studio/
├── app/ # Next.js App Router
│ ├── api/ # Route handlers (book jobs, assistant, etc.)
│ ├── studio/ # Studio UI pages
│ └── samples/ # Sample reader
├── components/ # React components
│ └── studio/ # All studio UI components
├── lib/ # Core engine logic
│ ├── book-engine.ts # Main draft pipeline (context → draft → extract → quality)
│ ├── story-schema.ts # All TypeScript types + normalizers
│   ├── book-locked-facts.ts # Genre-specific canon fact profiles
│ ├── book-state-validator.ts # Deterministic StateDiff validation
│ ├── book-genre-engine-*.ts # Genre-specific prompt engines
│ └── server/ # Supabase + provider integrations
├── scripts/ # CLI tools
│ ├── bootstrap-book-from-regie.ts # Convert a Regie blueprint → Supabase book
│ └── book-state-validator.test.ts # StateDiff test suite
├── public/legacy/ # Vanilla JS storefront + branching reader
└── supabase/migrations/ # SQL schema
- Node.js 20+
- A Supabase project (or local Supabase)
- OpenAI API key and/or Anthropic API key
# 1. Clone and install
git clone https://github.com/KleinDigitalSolutions/EMBER.git
cd ember-studio
npm install
# 2. Configure environment
cp .env.example .env.local
# Fill in your API keys and Supabase credentials
# 3. Run database migrations
# Apply files in supabase/migrations/ to your Supabase project
# 4. Start the dev server
npm run dev
Open http://localhost:3000/studio.
| Variable | Required | Description |
|---|---|---|
| `OPENAI_API_KEY` | Optional* | For OpenAI draft jobs |
| `ANTHROPIC_API_KEY` | Optional* | For Anthropic draft jobs |
| `NEXT_PUBLIC_SUPABASE_URL` | Yes | Supabase project URL |
| `NEXT_PUBLIC_SUPABASE_ANON_KEY` | Yes | Supabase anon key |
| `SUPABASE_SERVICE_ROLE_KEY` | Yes | Supabase service role key |
| `LOCAL_GEMMA_SERVER_URL` | No | Local Gemma/MLX for studio chat |

*At least one AI provider key is required for draft generation.
Scene Card
↓
Context Pack (canon + character state + open threads)
↓
Draft (direct from Scene Intention — no separate beat_plan call)
↓
Length Control (expand/compress only for strong outliers)
↓
Extract (StateDiff + canon candidates)
↓
Continuity Guard
↓
Quality Eval (warns, never blocks)
↓
Human Review → Accept / Reject
↓
StateDiff Approval → Memory Backbone Update
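The stage order above can be sketched as a simple composition (illustrative only; the real pipeline in `lib/book-engine.ts` is asynchronous and far richer):

```typescript
// Minimal pipeline skeleton mirroring two of the stages above; stage names
// and shapes are assumptions for illustration, not the engine's real API.
type Stage<I, O> = (input: I) => O;

interface Draft {
  text: string;
  warnings: string[];
}

// Length control only touches strong outliers; modeled here as a no-op.
const lengthControl: Stage<Draft, Draft> = (d) => d;

// Quality evaluation appends warnings but never blocks or rewrites.
const qualityEval: Stage<Draft, Draft> = (d) => ({
  ...d,
  warnings: [...d.warnings, "quality: pacing flat in middle third"],
});

const pipeline = (draft: Draft): Draft => qualityEval(lengthControl(draft));

const result = pipeline({ text: "Scene draft…", warnings: [] });
console.log(result.warnings.length); // 1
```

The key design point the sketch preserves: every stage returns the draft unchanged in substance, and only the human review step decides acceptance.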
Pluggable genre engines extend the base pipeline with genre-specific locked facts, continuity rules, and prompt overlays. Current genres:
- `domestic_suspense_thriller`
- `ya_superhero_origin`
Every accepted draft produces a BookStateDiff with object changes, knowledge state updates, promise reinforcements, and proposed canon facts. Promotion into the canon ledger requires explicit human approval.
# Run the StateDiff test suite
npm run test:book-state
# Type check
npm run typecheck
# Bootstrap a new book from a Regie blueprint
npx tsx scripts/bootstrap-book-from-regie.ts <path-to-regie.md>

This is an active portfolio / research project. The core pipeline is functional and used for real manuscript drafting. The UI is a working studio, not a polished SaaS product.
MIT — see LICENSE for details.