A tool for generating macro-prompts, plus a GitHub repository framework, for use with agentic CLI tools such as Claude Code, Codex, Cursor, Antigravity, and other LLM-powered development assistants whose software-engineering work depends on preserving context across turns.
- GECK/ - The Garden of Eden Creation Kit protocol specifications (v1.0, v1.1, v1.2, v1.3)
- geck_generator/ - A GUI/CLI tool to generate GECK project files (Python reference implementation)
- crates/ - Rust workspace hosting `geck-core` (library) and `geck-cli` (the `geck` binary)
GECK (Garden of Eden Creation Kit) is a lightweight protocol that gives CLI-based LLM agents structured memory and task management across sessions. It solves the "context amnesia" problem where AI assistants forget what they were working on between conversations.
When working with CLI LLM agents (Claude Code, Aider, Cursor, etc.):
- Each session starts fresh with no memory of previous work
- The AI may repeat mistakes or forget decisions made earlier
- There's no audit trail of what changed and why
- Large projects lose coherence across multiple sessions
GECK (v1.3) creates a structured folder in your project:
```
your_project/
└── GECK/
    ├── LLM_init.md       # Project goals, constraints, context budget
    ├── GECK_Inst.md      # Instructions for the AI agent (protocol v1.3)
    ├── env.md            # Environment documentation
    ├── tasks.md          # Typed tasks with stable TASK-NNN IDs
    ├── log.md            # Episodic per-turn log (tightened format)
    ├── log_index.jsonl   # Machine-readable log index (one JSON line per entry)
    ├── decisions.md      # Index of decision records
    ├── decisions/        # Per-decision files (DECISION-NNN.md)
    ├── learnings.md      # Index of learning records
    ├── learnings/        # Per-learning files (LEARNING-NNN.md)
    └── log_archive/      # Rolled-over log entries
```
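The log_index.jsonl file is what makes cheap resumption possible: an agent can scan one JSON object per line instead of parsing the whole narrative log. A minimal sketch of how a harness might read the most recent entries, assuming illustrative field names that are not mandated by the spec:

```python
import json
from pathlib import Path

def last_entries(index_path: Path, n: int) -> list[dict]:
    """Return the last n records from a JSON-lines log index."""
    lines = index_path.read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines[-n:] if line.strip()]

# Example: write two illustrative records, then read only the most recent one.
idx = Path("log_index.jsonl")
idx.write_text(
    '{"entry": 1, "task": "TASK-001", "state": "CONTINUE"}\n'
    '{"entry": 2, "task": "TASK-002", "state": "WAIT"}\n',
    encoding="utf-8",
)
print(last_entries(idx, 1))  # most recent entry only
```

Because each record sits on its own line, slicing the tail of the file is enough; no parser state carries across entries.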
You then add instructions to your project-level or global CLAUDE.md (or a similar harness initialization file, depending on how heavily you use GECK) telling the agent to open the GECK folder, review the most recent log.md entries and tasks.md, and begin work at the start of each session. The agent reads only the slice it needs (driven by the declared Context Budget), understands what was done before, and continues where it left off.
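As an illustration, a CLAUDE.md stanza for this might read as follows; the exact wording is up to you, and nothing here is mandated by the protocol:

```markdown
## GECK
At the start of every session:
1. Read GECK/LLM_init.md for goals, constraints, and the Context Budget.
2. Read the most recent GECK/log.md entries (as many as the Context Budget allows).
3. Review GECK/tasks.md and continue the highest-priority open task.
```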
The GECK Generator is a tool that helps you create LLM_init.md files and initialize GECK folder structures for your projects.
1. Clone this repository:

   ```bash
   git clone https://github.com/crussella0129/GECK.git
   cd GECK
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Run the generator:

   ```bash
   # GUI mode
   python -m geck_generator --gui

   # Interactive CLI mode
   python -m geck_generator --cli
   ```
```bash
git clone https://github.com/crussella0129/GECK.git
cd GECK
pip install -e .
```

Note: the username in the URL will match yours if you forked as a template. Then run with:

```bash
geck-generator --gui
```

After cloning and installing dependencies, create shortcuts for easy access:
```bash
cd GECK

# Create both desktop and start menu shortcuts
python -m geck_generator --install-shortcut

# Or create only desktop shortcut
python -m geck_generator --install-shortcut desktop

# Or create only start menu shortcut
python -m geck_generator --install-shortcut menu
```

To remove shortcuts:

```bash
python -m geck_generator --uninstall-shortcut
```

GUI mode:

```bash
python -m geck_generator --gui
```

The GUI provides a tabbed interface to:
- Enter basic project information (name, repo URL, local path)
- Select a preset profile (Web App, CLI Tool, Data Science, etc.)
- Define goals and success criteria
- Preview and save the generated files
CLI mode:

```bash
python -m geck_generator --cli
```

Interactive prompts guide you through the same process in your terminal.
To create a full GECK folder structure:
```bash
python -m geck_generator --cli --init-geck
```

Non-interactive usage:

```bash
# List available profiles
python -m geck_generator --list-profiles

# Generate from a profile
python -m geck_generator --profile web_app --project-name "My App" --goal "Build a REST API" -o ./LLM_init.md
```

A second, native implementation of the generator lives under crates/. It ships as a single `geck` binary, targets the same v1.3 protocol, and produces output byte-equivalent to the Python generator (see scripts/check_parity.sh).
```bash
cargo build --release --bin geck
# target/release/geck (or target/<triple>/release/geck)
```

```bash
# Protocol version this binary targets
geck protocol-version

# List preset profiles (same registry as the Python tool)
geck list-profiles
geck list-profiles --json

# List built-in templates
geck list-templates

# Render an LLM_init.md from flags to stdout
geck generate \
  --project-name "My App" \
  --goal "Build a REST API" \
  --profile api

# Scaffold a full GECK/ folder in a project directory
geck init ./my_project \
  --project-name "My App" \
  --goal "Build a REST API" \
  --profile api
```

scripts/check_parity.sh scaffolds the same project through both the Python generator and the `geck` binary and asserts that every deterministic piece of output (file tree, log_index.jsonl record, GECK_Inst.md, task lines, LLM_init.md modulo the Created-date line, and the empty decisions/learnings indexes) matches. Run it any time you touch templates or scaffold logic on either side.
Python remains the reference (and the home of the GUI). Rust gives a
zero-runtime-dependency CLI and a library (geck-core) that other Rust
tools — e.g. a future GECK editor — can embed directly.
1. Create your project repository and clone it locally.

2. Generate LLM_init.md using the GECK Generator:

   ```bash
   cd your_project
   python -m geck_generator --gui
   ```

   Fill in your project goals, constraints, and success criteria.

3. Start your CLI LLM agent (Claude Code, Aider, etc.) and give it this prompt:

   ```
   Read LLM_init.md and initialize the GECK.
   ```

4. The AI will create the GECK/ folder with all necessary files and summarize its understanding of your project.
Starting a session:

```
Read the GECK files and continue working on the project.
```

The AI will:
- Read LLM_init.md for goals
- Check log.md for what happened last session
- Review tasks.md for current work items
- Pick up where you left off

During work:
- The AI commits changes incrementally
- Updates tasks.md as work progresses
- Logs significant actions to log.md
Ending a session:
- The AI will update the log with what was accomplished
- Mark tasks complete or note blockers
- State its checkpoint status (CONTINUE/WAIT/ROLLBACK)
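As an illustration of the Did / Files / State / Refs / Next shape a session-ending entry takes, consider the following sketch; every identifier, date, and filename here is invented, not taken from a real project:

```markdown
Entry 042 (2025-01-15)
Did: Implemented pagination for the /users endpoint (TASK-017).
Files: src/api/users.py, tests/test_users.py
State: CONTINUE
Refs: DECISION-003
Next: Wire pagination into the client SDK (TASK-018).
```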
```
# Starting fresh
Read LLM_init.md and initialize the GECK.

# Continuing work
Read the GECK and continue with the next task.

# Checking status
Read tasks.md and summarize what's done and what's remaining.

# After a break
Read the last log entry and remind me where we left off.
```
- Be specific in LLM_init.md - Clear success criteria help the AI know when it's done
- Use profiles - The GECK Generator profiles include suggested success criteria and constraints for common project types
- Review log entries - The AI's log entries help you understand its reasoning and catch issues early
- Trust the checkpoints - When the AI says WAIT, it genuinely needs your input. Don't skip these.
- Commit the GECK folder - Keep GECK/ in version control so the history is preserved
| Version | Status | Description |
|---|---|---|
| v1.3 | Current | Typed tasks (TASK-NNN), decisions/learnings as first-class records, Drift Check, tiered log + JSONL index, Context Budget |
| v1.2 | Stable | Added GECK_Inst.md for AI agent instructions |
| v1.1 | Stable | Work modes, simplified file names |
| v1.0 | Legacy | Initial release |
See GECK/GECK_Macro_v1.3.md for the full protocol specification.
v1.3 sharpens the protocol around how a fresh agent reconstructs context across sessions:
- Typed tasks with stable IDs. Tasks are addressable as TASK-NNN with a TYPE/SCOPE/OWNER header, so log entries and decisions can reference them without ambiguity.
- Decisions and learnings are first-class. Each is a per-record file under decisions/ or learnings/ with YAML frontmatter, indexed by decisions.md / learnings.md (the same composition pattern used by Claude Code's memory system, adapted clean-room for git).
- Tightened log + JSONL index. log.md keeps the human-auditable narrative (Did / Files / State / Refs / Next); log_index.jsonl gives the next agent a one-line-per-entry index it can scan cheaply.
- Drift Check. A short cognitive checksum at session start catches stale assumptions before they propagate.
- Context Budget. A small/medium/large parameter declared in LLM_init.md tells the next agent how many recent log entries to load (3 / 10 / 25), so the protocol adapts to the model window without per-agent guessing.
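The Context Budget tiers are simple enough to sketch directly. A harness-side helper might look like the following; the function name and error handling are ours, and only the 3/10/25 tier mapping comes from the protocol:

```python
# Tier sizes from the v1.3 protocol: how many recent log entries to load.
BUDGET_TIERS = {"small": 3, "medium": 10, "large": 25}

def entries_to_load(budget: str) -> int:
    """Map a declared Context Budget to a count of recent log entries."""
    try:
        return BUDGET_TIERS[budget.lower()]
    except KeyError:
        raise ValueError(f"unknown context budget: {budget!r}") from None

print(entries_to_load("medium"))  # → 10
```

A fresh agent would read this value from LLM_init.md, then slice that many entries off the tail of log_index.jsonl before doing anything else.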
Python generator (GUI + CLI + parity-reference):
- Python 3.10+
- tkinter (usually included with Python)
- jinja2
- questionary (for CLI mode)
Install with:

```bash
pip install -r requirements.txt
```

Rust `geck` binary (optional, native CLI):
- Rust 1.75+ (stable)

Build with:

```bash
cargo build --release --bin geck
```

See LICENSE file.