DSL-based intent execution system with iterative refinement, AMEN boundary, and AI-powered assistance
INTENT-ITERATIVE is a system that allows you to:
- Define intents using a simple YAML-based DSL
- Simulate execution with dry-run planning
- Get AI suggestions using local LLMs via Ollama
- Iteratively refine your intent through feedback loops
- Execute safely with the AMEN boundary (explicit approval required)
Architecture:

```
┌─────────────────────┐
│ CLI / Web UI        │  ← User interface
└─────────┬───────────┘
          ↓
┌─────────────────────┐
│ Parser / Validator  │  ← DSL → IR
└─────────┬───────────┘
          ↓
┌─────────────────────┐
│ Intermediate Rep.   │  ← Canonical state
└─────────┬───────────┘
          ↓
┌─────────────────────┐
│ Planner / Simulator │  ← Dry-run
└─────────┬───────────┘
          ↓
┌─────────────────────┐
│ AI Gateway (Ollama) │  ← LLM suggestions
└─────────┬───────────┘
          ↓
┌─────────────────────┐
│ Feedback Loop       │  ← Iterate
└─────────┬───────────┘
          ↓
┌─────────────────────┐
│ AMEN Boundary       │  ← Explicit approval
└─────────┬───────────┘
          ↓
┌─────────────────────┐
│ Executor            │  ← Real execution
└─────────────────────┘
```
Installation:

```bash
# Clone repository
git clone https://github.com/softreck/intent-iterative.git
cd intent-iterative

# Full setup (recommended)
make setup

# Or manual install
pip install -r requirements.txt
pip install litellm
cp .env.example .env
```

Copy `.env.example` to `.env` and adjust:
```bash
# Server
HOST=0.0.0.0
PORT=8080

# AI Gateway
OLLAMA_BASE_URL=http://localhost:11434
DEFAULT_MODEL=llama3.2
MAX_MODEL_PARAMS=12.0

# Execution (no AMEN prompt by default)
SKIP_AMEN_CONFIRMATION=true
CONTAINER_PORT=8000
```
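`config.py` presumably reads these values at startup. A minimal sketch of what such a loader looks like, assuming plain `os.getenv` lookups (the variable names come from `.env.example`; the loader structure itself is an assumption, not the project's actual code):

```python
import os

# Hypothetical loader sketch mirroring the variables in .env.example.
# The real config.py may parse these differently.
HOST = os.getenv("HOST", "0.0.0.0")
PORT = int(os.getenv("PORT", "8080"))
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
DEFAULT_MODEL = os.getenv("DEFAULT_MODEL", "llama3.2")
MAX_MODEL_PARAMS = float(os.getenv("MAX_MODEL_PARAMS", "12.0"))  # billions of parameters
SKIP_AMEN_CONFIRMATION = os.getenv("SKIP_AMEN_CONFIRMATION", "true").lower() == "true"
CONTAINER_PORT = int(os.getenv("CONTAINER_PORT", "8000"))
```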
Ollama setup:

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Start and pull model
make ollama-start
make ollama-pull

# Or manually
ollama serve
ollama pull llama3.2
```

Makefile commands:

```bash
make help           # Show all commands
make setup          # Full setup
make web            # Start web server
make shell          # Interactive shell
make execute        # Execute example intent
make test           # Run all tests
make ollama-models  # List available models
make clean          # Clean temp files
```
Usage:

```bash
# Start interactive shell
make shell
# Or: python -m cli.main

# Execute intent directly (no AMEN prompt)
make execute
# Or: python -m cli.main execute examples/user-api.intent.yaml
```

Interactive shell commands:
```
intent> new my-api      # Create new intent
intent> load file.yaml  # Load from file
intent> plan            # Run dry-run
intent> suggest         # Get AI suggestions
intent> apply           # Auto-apply AI suggestions
intent> chat            # Chat with AI
intent> iterate         # Apply manual changes
intent> amen            # Approve execution
intent> execute         # Execute approved intent
intent> show [json]     # Show current state
intent> models          # List AI models
intent> ai-health       # Check AI Gateway status
intent> help            # Show help
intent> exit            # Exit shell
```
```bash
# Start web server
python -m web.app

# Open browser at http://localhost:8080
```

The AI Gateway uses LiteLLM to provide unified access to local LLMs via Ollama.
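Under the hood, LiteLLM routes `ollama/<model>` names to a local Ollama server. A minimal sketch of that call, assuming standard LiteLLM usage rather than the project's actual wrapper in `ai_gateway/gateway.py`:

```python
import litellm

# Standard LiteLLM call against a local Ollama server; the gateway's
# real wrapper may add model selection and error handling on top.
response = litellm.completion(
    model="ollama/llama3.2",            # DEFAULT_MODEL from .env
    api_base="http://localhost:11434",  # OLLAMA_BASE_URL from .env
    messages=[{"role": "user", "content": "Suggest improvements to a FastAPI user API."}],
)
print(response.choices[0].message.content)
```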
Supported models:

| Model | Size | Description |
|---|---|---|
| `llama3.2` | 3B | Default - Fast and efficient |
| `llama3.2:1b` | 1B | Ultra lightweight |
| `llama3.1:8b` | 8B | Balanced performance |
| `mistral` | 7B | Fast inference |
| `mistral-nemo` | 12B | Best quality under 12B |
| `gemma2` | 9B | Google Gemma 2 |
| `gemma2:2b` | 2B | Lightweight |
| `phi3` | 3.8B | Microsoft Phi-3 |
| `qwen2.5` | 7B | Alibaba Qwen 2.5 |
| `codellama` | 7B | Code generation |
| `codegemma` | 7B | Google CodeGemma |
| `deepseek-coder` | 6.7B | DeepSeek Coder |
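`MAX_MODEL_PARAMS` caps the model size the gateway will accept. A hypothetical sketch of such a size gate (the sizes mirror the table above; the filtering logic is an assumption, not the gateway's actual code):

```python
# Hypothetical size gate: offer only models at or under the cap
# (MAX_MODEL_PARAMS, in billions of parameters).
MODEL_SIZES = {
    "llama3.2": 3.0, "llama3.2:1b": 1.0, "llama3.1:8b": 8.0,
    "mistral": 7.0, "mistral-nemo": 12.0, "gemma2": 9.0,
    "gemma2:2b": 2.0, "phi3": 3.8, "qwen2.5": 7.0,
    "codellama": 7.0, "codegemma": 7.0, "deepseek-coder": 6.7,
}

def allowed_models(max_params: float = 12.0) -> list[str]:
    """Return models whose parameter count (in billions) fits the cap."""
    return [name for name, size in MODEL_SIZES.items() if size <= max_params]

print(allowed_models(8.0))  # excludes gemma2 (9B) and mistral-nemo (12B)
```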
Environment variables:

```bash
export OLLAMA_BASE_URL="http://localhost:11434"
export DEFAULT_MODEL="llama3.2"
export MAX_MODEL_PARAMS="12.0"
```

Example intent (`examples/user-api.intent.yaml`):
```yaml
INTENT:
  name: user-api
  goal: Create a REST API for user management

ENVIRONMENT:
  runtime: docker
  base_image: python:3.12-slim
  ports:
    - 8000

IMPLEMENTATION:
  language: python
  framework: fastapi
  actions:
    - api.expose GET /ping
    - api.expose GET /users
    - api.expose POST /users
    - api.expose DELETE /users/{id}

EXECUTION:
  mode: dry-run
```

Supported DSL actions:

| Action | Format | Description |
|---|---|---|
| `api.expose` | `api.expose METHOD /path` | Expose HTTP endpoint |
| `db.create` | `db.create table_name` | Create database table |
| `db.add_column` | `db.add_column table column type` | Add column to table |
| `shell.exec` | `shell.exec command` | Execute shell command |
| `rest.call` | `rest.call METHOD url` | Call external REST API |
| `file.create` | `file.create path` | Create file |
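These actions compose inside one intent. A hedged sketch that mixes several of them and dry-runs the result with the `parse_dsl` / `plan_intent` helpers shown later in this README (the section layout follows the example above; the exact schema the parser accepts may differ):

```python
from parser import parse_dsl
from planner import plan_intent

# A small intent mixing db.*, api.expose, and a dry-run execution mode.
dsl = """
INTENT:
  name: notes-api
  goal: Store and list notes

ENVIRONMENT:
  runtime: docker
  base_image: python:3.12-slim
  ports:
    - 8000

IMPLEMENTATION:
  language: python
  framework: fastapi
  actions:
    - db.create notes
    - db.add_column notes body text
    - api.expose GET /notes
    - api.expose POST /notes

EXECUTION:
  mode: dry-run
"""

ir = parse_dsl(dsl)           # DSL -> Intermediate Representation
result = plan_intent(ir)      # dry-run: no side effects
print(result.generated_code)  # review the code the planner produced
```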
Intent endpoints:

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/intents` | List all intents |
| POST | `/api/intents/parse` | Parse DSL and create intent |
| GET | `/api/intents/{id}` | Get intent by ID |
| DELETE | `/api/intents/{id}` | Delete intent |
| POST | `/api/intents/{id}/plan` | Run dry-run |
| POST | `/api/intents/{id}/iterate` | Apply changes |
| POST | `/api/intents/{id}/amen` | Approve for execution |
| POST | `/api/intents/{id}/execute` | Execute approved intent |
| GET | `/api/intents/{id}/code` | Get generated code |
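Driving the full lifecycle over HTTP with `requests`, as a minimal sketch (the endpoints and the parse → plan → amen → execute order come from this README; the request and response payload shapes are assumptions):

```python
import requests

BASE = "http://localhost:8080"  # HOST/PORT from .env

# Parse DSL into a new intent (the "dsl" field name is an assumption)
dsl = open("examples/user-api.intent.yaml").read()
intent = requests.post(f"{BASE}/api/intents/parse", json={"dsl": dsl}).json()
intent_id = intent["id"]  # assumed response shape

# Dry-run, approve, execute: mirroring the shell workflow
requests.post(f"{BASE}/api/intents/{intent_id}/plan")
requests.post(f"{BASE}/api/intents/{intent_id}/amen")
requests.post(f"{BASE}/api/intents/{intent_id}/execute")

# Fetch the generated code
print(requests.get(f"{BASE}/api/intents/{intent_id}/code").text)
```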
AI Gateway endpoints:

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/ai/status` | Check AI Gateway status |
| GET | `/api/ai/models` | List available models |
| POST | `/api/ai/complete` | Generate AI completion |
| POST | `/api/ai/chat` | Chat with AI |
| POST | `/api/intents/{id}/ai/suggest` | Get AI suggestions |
| POST | `/api/intents/{id}/ai/apply` | Auto-apply suggestions |
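The AI endpoints can be probed the same way; a short sketch, with the chat payload shape assumed:

```python
import requests

BASE = "http://localhost:8080"

# Is the gateway able to reach Ollama?
print(requests.get(f"{BASE}/api/ai/status").json())

# List models, then chat (the "message" field name is an assumption)
print(requests.get(f"{BASE}/api/ai/models").json())
reply = requests.post(f"{BASE}/api/ai/chat",
                      json={"message": "What does the AMEN boundary do?"})
print(reply.json())
```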
Python API:

```python
from ir.models import IntentIR
from parser import parse_dsl
from planner import plan_intent
from ai_gateway import get_gateway, create_feedback_loop

# Parse DSL (here: the bundled example intent)
dsl_content = open("examples/user-api.intent.yaml").read()
ir = parse_dsl(dsl_content)

# Run dry-run
result = plan_intent(ir)
print(result.generated_code)

# Get AI suggestions
loop = create_feedback_loop()
suggestions = loop.analyze(ir, focus="security")
print(suggestions.suggestions)

# Apply AI suggestions
loop.iterate(ir, auto_apply=True)

# Or chat directly
gateway = get_gateway()
response = gateway.complete("Explain FastAPI middleware")
print(response["content"])
```
Testing:

```bash
# Run all tests (58 tests)
pytest

# Run specific test suites
pytest tests/e2e/test_shell.py -v       # 17 tests
pytest tests/e2e/test_web.py -v         # 18 tests
pytest tests/e2e/test_ai_gateway.py -v  # 23 tests

# Run with coverage
pytest --cov=. --cov-report=html
```

Project structure:
```
intent-iterative/
├── ir/                    # Intermediate Representation models
├── parser/                # DSL parser
├── planner/               # Dry-run simulator
├── executor/              # Execution engine
├── ai_gateway/            # LiteLLM AI Gateway
│   ├── gateway.py         # Main gateway with Ollama models
│   └── feedback_loop.py   # LLM-powered feedback loop
├── cli/                   # Shell interface
├── web/                   # Web interface (FastAPI)
├── tests/e2e/             # E2E test suite
│   ├── test_shell.py
│   ├── test_web.py
│   └── test_ai_gateway.py
├── examples/              # Example DSL files
├── config.py              # Configuration loader
├── Makefile               # Build automation
├── .env.example           # Environment template
├── requirements.txt
└── README.md
```
Workflow:

- Define Intent → Write DSL or use the web editor
- Parse → Convert DSL to IR (Intermediate Representation)
- Plan → Run a dry-run simulation and review the generated code
- Get AI Suggestions → Analyze with a local LLM
- Iterate → Make changes and re-plan until satisfied (see the sketch below)
- AMEN → Explicitly approve for execution
- Execute → Run with real side effects (Docker build, etc.)
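A hedged sketch of that refinement loop in Python, using only the functions documented above; AMEN approval and real execution are deliberately left to the shell (`amen`, `execute`) or the REST endpoints:

```python
from parser import parse_dsl
from planner import plan_intent
from ai_gateway import create_feedback_loop

ir = parse_dsl(open("examples/user-api.intent.yaml").read())
loop = create_feedback_loop()

# Plan, let the local LLM refine the IR, re-plan. Three rounds is an
# arbitrary bound for this sketch, not a project default.
for _ in range(3):
    result = plan_intent(ir)           # dry-run only; no side effects
    print(result.generated_code)
    loop.iterate(ir, auto_apply=True)  # auto-apply LLM suggestions

# Anything past this point requires the explicit AMEN approval step.
```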
Apache 2.0 License - see the LICENSE file.