diff --git a/docs/c1-chart.md b/docs/c1-chart.md
new file mode 100644
index 0000000..b52f5ac
--- /dev/null
+++ b/docs/c1-chart.md
@@ -0,0 +1,46 @@
+```mermaid
+graph LR
+ subgraph Actors[" Actors "]
+ direction TB
+ SRE["SRE / DevOps\n[Person]"]
+ OnCall["On-Call Operator\n[Person · HITL]"]
+ end
+
+ subgraph External[" External Systems "]
+ direction TB
+ Prom["Prometheus\nAlertManager\n[External Monitor]"]
+ Platforms["Chat Platforms\nSlack · MS Teams · Google Chat\n[Inbound Event Sources]"]
+ end
+
+ subgraph Core[" Alert Routing Infrastructure [Software System] "]
+ Rail["Extension Tool · L0 Agent · Dapr\nDedup · Rate Limit · Route · Reply"]
+ end
+
+ subgraph Downstream[" Downstream "]
+ direction TB
+ L1["L1 Agents\n[Specialist Automation · A2A JSON-RPC]"]
+ StateStore["Dapr State Store\nRedis 7\n[Fingerprints · Rate Counters]"]
+ Broker["Dapr Pub/Sub\nRabbitMQ\n[alerts-inbound · alerts-outbound]"]
+ end
+
+ SRE -->|"configures alert rules"| Prom
+ Prom -->|"webhook · HTTPS"| Platforms
+ Platforms -->|"Event JSON · Sync · HTTPS"| Rail
+ Rail -->|"formatted reply · HTTPS"| Platforms
+ OnCall -->|"HITL approval"| Rail
+ Rail -->|"A2A JSON-RPC · HTTPS"| L1
+ Rail <-->|"[State] get/set/expire · RESP3"| StateStore
+ Rail <-->|"[Pub/Sub] AMQP 0-9-1"| Broker
+
+ classDef actor fill:#1a2a1a,stroke:#00E5A0,stroke-width:1.5px,color:#00E5A0
+ classDef external fill:#1a1a2a,stroke:#00C9FF,stroke-width:1.5px,color:#00C9FF
+ classDef core fill:#0d2a1a,stroke:#00E5A0,stroke-width:2.5px,color:#00E5A0
+ classDef infra fill:#2a1a00,stroke:#FF9900,stroke-width:1.5px,color:#FF9900
+ classDef agents fill:#1e1a2a,stroke:#C084FC,stroke-width:1.5px,color:#C084FC
+
+ class SRE,OnCall actor
+ class Prom,Platforms external
+ class Rail core
+ class StateStore,Broker infra
+ class L1 agents
+```
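The `Rail` container above collapses dedup, rate limiting, and routing into one box, backed by the Redis fingerprint and rate-counter keys. As a rough illustration of the dedup/rate-limit half, the sketch below suppresses duplicates via fingerprint keys with set-if-absent semantics and applies a fixed-window counter per source; the key names, TTL, and limit are illustrative assumptions, not values taken from this change:

```python
import hashlib
import json

DEDUP_TTL_S = 300      # assumed: suppress duplicate fingerprints for 5 minutes
RATE_LIMIT = 20        # assumed: max alerts per source per window
RATE_WINDOW_S = 60

def fingerprint(alert: dict) -> str:
    """Stable hash over the fields that identify a duplicate alert."""
    ident = {k: alert.get(k) for k in ("alertname", "severity", "instance")}
    return hashlib.sha256(json.dumps(ident, sort_keys=True).encode()).hexdigest()

class InMemoryState:
    """Stand-in for the Dapr/Redis state store so the sketch runs anywhere."""
    def __init__(self):
        self.kv = {}
    def set_if_absent(self, key, value, ttl_s):
        if key in self.kv:
            return False
        self.kv[key] = value  # a real store would also apply the TTL here
        return True
    def incr(self, key):
        self.kv[key] = int(self.kv.get(key, 0)) + 1
        return self.kv[key]

def should_process(alert: dict, state) -> bool:
    """Dedup first (SET NX EX semantics), then a fixed-window rate limit."""
    if not state.set_if_absent("fp:" + fingerprint(alert), "1", DEDUP_TTL_S):
        return False  # duplicate within the TTL: drop
    count = state.incr("rate:" + alert.get("instance", "unknown"))
    return count <= RATE_LIMIT
```

With Dapr in the path, `set_if_absent` would map roughly onto the state-store API's first-write-wins concurrency rather than a raw Redis `SET NX EX`.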
diff --git a/docs/c2-chart.md b/docs/c2-chart.md
new file mode 100644
index 0000000..a710c85
--- /dev/null
+++ b/docs/c2-chart.md
@@ -0,0 +1,83 @@
+```mermaid
+graph LR
+ %% ── External Systems
+ Prom["Prometheus
AlertManager
[External Monitor]"]
+ Platforms["Chat Platforms
Slack · Teams · GChat
[Inbound Sources]"]
+
+ %% ── System Boundary
+ subgraph Boundary["System Boundary — Alert Routing Infrastructure"]
+ direction TB
+
+ %% Extension Tool
+ subgraph ET["Extension Tool"]
+ direction TB
+
+ GW["Message Gateway
Platform Adapter"]
+ VP["Validation Pipeline
Schema · Dedup · Rate Limiter"]
+ DaprET["Dapr Sidecar
Pub: alerts-inbound · Sub: alerts-outbound
State: fingerprints · counters"]
+
+ GW --> VP --> DaprET
+ end
+
+ %% Dapr State Store
+ subgraph State["Dapr State Store · Redis 7"]
+ Redis[("Redis 7
Fingerprints · Rate Counters")]
+ end
+
+ %% Dapr Pub/Sub Broker
+ subgraph MsgBroker["Dapr Pub/Sub · RabbitMQ"]
+ direction TB
+ CH1["alerts-inbound"]
+ CH2["alerts-outbound"]
+ end
+
+ %% L0 Agent
+ subgraph L0["L0 Agent"]
+ direction TB
+ QM["Queue Manager
Alert Consumer · Payload Parser"]
+ A2A["A2A Server
auto → L1 dispatch
hitl → HITL approval"]
+ RH["Response Handler
Formatter · Router"]
+ DaprL0["Dapr Sidecar
Sub: alerts-inbound · Pub: alerts-outbound"]
+
+ DaprL0 --> QM --> A2A --> RH --> DaprL0
+ end
+
+ PlatOut["Outbound Platform APIs
Slack Web API · Teams Graph API · GChat REST
[Block Kit · Adaptive Card · Card JSON]"]
+ end
+
+ %% L1 Agents
+ L1["L1 Agents
[A2A JSON-RPC · HTTPS]"]
+
+ %% ── Flows
+ Prom -->|"webhook · HTTPS"| Platforms
+ Platforms -->|"[Inbound] Event JSON · Sync"| GW
+ DaprET <-->|"[State] Redis RESP3 · get/set/expire"| Redis
+ DaprET -->|"[Pub/Sub] AMQP 0-9-1 publish · alerts-inbound"| CH1
+ CH1 -->|"[Pub/Sub] AMQP 0-9-1 deliver · alerts-inbound"| DaprL0
+ DaprL0 -.->|"[Pub/Sub] AMQP 0-9-1 publish · alerts-outbound"| CH2
+ CH2 -.->|"[Pub/Sub] AMQP 0-9-1 deliver · alerts-outbound"| DaprET
+ DaprET -.->|"[Outbound] ResponsePayload · HTTP callback"| GW
+ GW -.->|"[Outbound] formatted reply"| PlatOut
+ A2A <-.->|"A2A JSON-RPC · HTTPS · Async"| L1
+
+ %% ── Styling
+ classDef external fill:#1a1a2a,stroke:#00C9FF,stroke-width:1.5px,color:#00C9FF
+ classDef gateway fill:#0d1a2a,stroke:#00C9FF,stroke-width:1px,color:#00C9FF
+ classDef validate fill:#1a0d2a,stroke:#7B61FF,stroke-width:1px,color:#7B61FF
+ classDef dapr fill:#1e0d2a,stroke:#7B61FF,stroke-width:1.5px,color:#C084FC
+ classDef broker fill:#2a1a00,stroke:#FF9900,stroke-width:1px,color:#FF9900
+ classDef redis fill:#1a0d1a,stroke:#7B61FF,stroke-width:1.5px,color:#7B61FF
+ classDef l0core fill:#0d2a1a,stroke:#00E5A0,stroke-width:1px,color:#00E5A0
+ classDef outbound fill:#2a0d14,stroke:#FF4D6D,stroke-width:1px,color:#FF4D6D
+ classDef agents fill:#1e1a2a,stroke:#C084FC,stroke-width:1.5px,color:#C084FC
+
+ class Prom,Platforms external
+ class GW gateway
+ class VP validate
+ class DaprET,DaprL0 dapr
+ class CH1,CH2 broker
+ class Redis redis
+ class QM,A2A,RH l0core
+ class PlatOut outbound
+ class L1 agents
+```
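The chart has both sidecars subscribing and publishing on `alerts-inbound`/`alerts-outbound`. With Dapr's programmatic pub/sub, that contract usually means the app serves `GET /dapr/subscribe` and receives CloudEvents on the declared route; a minimal, framework-free sketch of those pieces (the component name and routes here are assumptions, not taken from this PR):

```python
import json

PUBSUB_NAME = "alert-pubsub"  # assumed Dapr pub/sub component name

def dapr_subscriptions() -> list[dict]:
    """Body served at GET /dapr/subscribe so the sidecar knows which
    topics to deliver (Dapr's programmatic-subscription contract)."""
    return [{
        "pubsubname": PUBSUB_NAME,
        "topic": "alerts-inbound",
        "route": "/alerts-inbound",
    }]

def handle_alert(cloud_event: dict) -> dict:
    """Handler for POST /alerts-inbound. Dapr wraps the published payload
    in a CloudEvent; the alert itself sits under the 'data' key."""
    alert = cloud_event.get("data", {})
    if not alert:
        return {"status": "DROP"}  # malformed: tell Dapr not to redeliver
    # ... queue-manager parse/normalise would happen here ...
    return {"status": "SUCCESS"}

def publish_response(payload: dict) -> tuple[str, bytes]:
    """URL + body for sending the routed reply back through the sidecar:
    POST http://localhost:3500/v1.0/publish/<pubsubname>/<topic>."""
    url = f"http://localhost:3500/v1.0/publish/{PUBSUB_NAME}/alerts-outbound"
    return url, json.dumps(payload).encode()
```

In the real services these functions would be wired to HTTP routes (the docs elsewhere in this PR suggest FastAPI); the SUCCESS/RETRY/DROP statuses are Dapr's standard acknowledgement values.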
diff --git a/docs/c3-l0-agent.md b/docs/c3-l0-agent.md
new file mode 100644
index 0000000..2dc39ff
--- /dev/null
+++ b/docs/c3-l0-agent.md
@@ -0,0 +1,53 @@
+```mermaid
+graph LR
+ subgraph L0[" L0 Agent — Component Detail "]
+ direction LR
+
+ DaprL0["Dapr Sidecar\n[Sub: alerts-inbound\nPub: alerts-outbound]"]
+
+ subgraph QM[" Queue Manager "]
+ direction TB
+ AC["Alert Consumer\n[Dapr /subscribe callback\n→ RawAlert]"]
+ PP["Payload Parser\n[Normalise · Enrich\nScore severity\n→ NormalisedAlert\n{routingHint: auto|hitl}]"]
+ AC --> PP
+ end
+
+ subgraph Core[" Core "]
+ A2A["A2A Server\n[Central core · A2A JSON-RPC · HTTPS\nauto → dispatches to L1 Agents\nhitl → HITL operator approval\n→ AgentResponse · HumanResponse]"]
+ end
+
+ subgraph RH[" Response Handler "]
+ direction TB
+ FM["Formatter\n[Slack → Block Kit\nTeams → Adaptive Card\nGChat → Card JSON]"]
+ RT["Router\n[Publishes to Dapr\n→ RoutedResponse]"]
+ FM --> RT
+ end
+
+ DaprL0 -->|"RawAlert"| AC
+ PP -->|"NormalisedAlert"| A2A
+ A2A -.->|"AgentResponse / HumanResponse"| FM
+ RT -.->|"RoutedResponse"| DaprL0
+ end
+
+ InboundBroker["Dapr Pub/Sub\n[alerts-inbound]"]
+ OutboundBroker["Dapr Pub/Sub\n[alerts-outbound]"]
+ L1["L1 Agents\n[Specialist Automation]"]
+
+ InboundBroker -->|"[Pub/Sub] AMQP 0-9-1 deliver · alerts-inbound"| DaprL0
+ DaprL0 -.->|"[Pub/Sub] AMQP 0-9-1 publish · alerts-outbound"| OutboundBroker
+ A2A <-.->|"A2A JSON-RPC · HTTPS · Async"| L1
+
+ classDef dapr fill:#1e0d2a,stroke:#C084FC,stroke-width:2px,color:#C084FC
+ classDef queuemgr fill:#0d1a2a,stroke:#00C9FF,stroke-width:1.5px,color:#00C9FF
+ classDef a2a fill:#0d2a1a,stroke:#00E5A0,stroke-width:2.5px,color:#00E5A0
+ classDef response fill:#2a0d14,stroke:#FF4D6D,stroke-width:1.5px,color:#FF4D6D
+ classDef broker fill:#2a1a00,stroke:#FF9900,stroke-width:1px,color:#FF9900
+ classDef agents fill:#1e1a2a,stroke:#C084FC,stroke-width:1.5px,color:#C084FC
+
+ class DaprL0 dapr
+ class AC,PP queuemgr
+ class A2A a2a
+ class FM,RT response
+ class InboundBroker,OutboundBroker broker
+ class L1 agents
+```
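The Payload Parser's `NormalisedAlert {routingHint: auto|hitl}` output can be pictured as a small normalise-score-route step. In this sketch the severity scale and the HITL threshold are illustrative assumptions; the diagram only says that the hint is either `auto` or `hitl`:

```python
from dataclasses import dataclass

SEVERITY_SCORES = {"info": 1, "warning": 2, "critical": 3}  # assumed scale
HITL_THRESHOLD = 3  # assumed: critical alerts require human approval

@dataclass
class NormalisedAlert:
    alertname: str
    severity: str
    score: int
    routing_hint: str  # "auto" | "hitl"

def parse_payload(raw: dict) -> NormalisedAlert:
    """Payload Parser sketch: normalise, score severity, pick a routing hint."""
    labels = raw.get("labels", {})
    severity = labels.get("severity", "info").lower()
    score = SEVERITY_SCORES.get(severity, 1)
    return NormalisedAlert(
        alertname=labels.get("alertname", "unknown"),
        severity=severity,
        score=score,
        routing_hint="hitl" if score >= HITL_THRESHOLD else "auto",
    )
```

The A2A Server then only branches on `routing_hint`: `auto` dispatches straight to the L1 agents, `hitl` parks the alert for operator approval.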
diff --git a/docs/c3-message-gateway.md b/docs/c3-message-gateway.md
new file mode 100644
index 0000000..e1782cb
--- /dev/null
+++ b/docs/c3-message-gateway.md
@@ -0,0 +1,42 @@
+```mermaid
+flowchart TB
+ subgraph Platforms
+ SL[Slack webhook]
+ TM[Teams webhook]
+ DC[Discord websocket]
+ end
+
+ subgraph Adapters["Adapters — BaseAdapter: verify · normalize · monitor() · send()"]
+
+ AD["SlackAdapter · TeamsAdapter · DiscordAdapter"]
+ end
+
+ subgraph NL["Normalization layer"]
+ IM["InboundMessage
platform-agnostic canonical schema"]
+ end
+
+ subgraph GW["Gateway — hub-and-spoke"]
+ SUP["_supervise()
auto-restart with backoff"]
+ DISP["_dispatch()
fan-out to handlers"]
+ ROUTE["send()
route OutboundMessage"]
+ HEALTH["health
per-platform status"]
+ end
+
+ subgraph Core["Core / Dispatcher — main.py"]
+ DAPR["publish_to_dapr()
Dapr broker publish"]
+ REPLY["OutboundMessage
platform-agnostic reply"]
+ end
+
+ SL --> AD
+ TM --> AD
+ DC --> AD
+
+ AD --> IM
+
+ IM --> SUP
+ SUP --> DISP
+ DISP --> DAPR
+ DAPR --> REPLY
+ REPLY --> ROUTE
+ ROUTE --> AD
+```
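The `BaseAdapter: verify · normalize · monitor() · send()` contract in the Adapters box might be expressed as an abstract base class like the following. The method signatures, the `InboundMessage` fields, and the toy `EchoAdapter` are assumptions for illustration; the real `SlackAdapter`/`TeamsAdapter`/`DiscordAdapter` bodies are not shown in this PR:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class InboundMessage:
    """Platform-agnostic canonical schema (field names are assumptions)."""
    platform: str
    channel: str
    author: str
    text: str

class BaseAdapter(ABC):
    """Contract from the chart: verify() · normalize() · monitor() · send()."""
    platform: str = "base"

    @abstractmethod
    def verify(self, headers: dict, body: bytes) -> bool:
        """Check the inbound request's authenticity (e.g. a signing secret)."""

    @abstractmethod
    def normalize(self, payload: dict) -> InboundMessage:
        """Translate a platform payload into the canonical schema."""

    @abstractmethod
    async def monitor(self) -> None:
        """Long-running listener; the gateway's _supervise() restarts it."""

    @abstractmethod
    async def send(self, channel: str, text: str) -> None:
        """Deliver an OutboundMessage back to the platform."""

class EchoAdapter(BaseAdapter):
    """Toy concrete adapter, for illustration only."""
    platform = "echo"

    def verify(self, headers: dict, body: bytes) -> bool:
        return headers.get("x-signature") == "valid"  # placeholder check

    def normalize(self, payload: dict) -> InboundMessage:
        return InboundMessage("echo", payload.get("channel", ""),
                              payload.get("user", ""), payload.get("text", ""))

    async def monitor(self) -> None: ...
    async def send(self, channel: str, text: str) -> None: ...
```

The hub-and-spoke shape then falls out naturally: the gateway holds one adapter per platform, supervises each `monitor()` task, and routes every `OutboundMessage` back through the owning adapter's `send()`.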
diff --git a/helm-chart/values.yaml b/helm-chart/values.yaml
index ce15451..d4b64a7 100644
--- a/helm-chart/values.yaml
+++ b/helm-chart/values.yaml
@@ -31,7 +31,7 @@ pinecone:
agents:
- name: l0
enabled: true
- image: 01community/agent-l0:v1
+ image: 01community/agent-l0:v1.1
containerPort: 3000
env:
# Application settings
diff --git a/k8s-agent/message-gateway/.env.example b/k8s-agent/message-gateway/.env.example
new file mode 100644
index 0000000..9c83660
--- /dev/null
+++ b/k8s-agent/message-gateway/.env.example
@@ -0,0 +1,21 @@
+# Message Gateway - Slack Adapter Environment Configuration
+# Copy this file to .env and fill in your credentials
+
+# ── Slack Configuration (Required) ─────────────────────
+# Create a Slack app at https://api.slack.com/apps
+# 1. Add Bot Token Scopes: channels:history, chat:write
+# 2. Enable Event Subscriptions and get Signing Secret
+# 3. Subscribe to message.channels event
+SLACK_BOT_TOKEN=xoxb-your-bot-token-here
+SLACK_SIGNING_SECRET=your-signing-secret-here
+
+# ── Server Configuration ───────────────────────────────
+HOST=0.0.0.0
+PORT=8000
+LOG_LEVEL=INFO # DEBUG, INFO, WARNING, ERROR
+
+# ── CORS Configuration ─────────────────────────────────
+# CORS_ORIGINS=* # Default: allow all origins
+
+# ── Development Settings ───────────────────────────────
+# DEBUG=false # Set to true for development
diff --git a/k8s-agent/message-gateway/.gitignore b/k8s-agent/message-gateway/.gitignore
new file mode 100644
index 0000000..444c947
--- /dev/null
+++ b/k8s-agent/message-gateway/.gitignore
@@ -0,0 +1,293 @@
+# ============================================
+# 01cloud-agents Git Ignore Configuration
+# Guardrail files for L0, L1, L2 agents
+# ============================================
+
+# ---------------------------------------------------------------------
+# OS and Editor Files
+# ---------------------------------------------------------------------
+
+# macOS
+.DS_Store
+.DS_Store?
+._*
+.Spotlight-V100
+.Trashes
+ehthumbs.db
+
+# Windows
+Thumbs.db
+Desktop.ini
+
+# Linux
+*~
+
+# Editor directories
+*.swp
+*.swo
+
+# ---------------------------------------------------------------------
+# Python
+# ---------------------------------------------------------------------
+
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+
+# pipenv
+Pipfile.lock
+
+# poetry
+poetry.lock
+
+# pdm
+.pdm.toml
+
+# ---------------------------------------------------------------------
+# Node.js / JavaScript
+# ---------------------------------------------------------------------
+
+# Dependencies
+node_modules/
+
+# Logs
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+lerna-debug.log*
+
+# Optional npm cache directory
+.npm
+
+# Optional eslint cache
+.eslintcache
+
+# Optional stylelint cache
+.stylelintcache
+
+# Optional REPL history
+.node_repl_history
+
+# ---------------------------------------------------------------------
+# Environment and Secrets
+# ---------------------------------------------------------------------
+
+# Environment variables
+.env
+.env.local
+.env.*.local
+.env.development.local
+.env.test.local
+.env.production.local
+
+# Keep .env.example (template)
+!.env.example
+
+# Secrets
+secrets.yaml
+secrets.yml
+*.key
+*.pem
+*.cert
+*.crt
+
+# ---------------------------------------------------------------------
+# Logs and Outputs
+# ---------------------------------------------------------------------
+
+# Application logs
+*.log
+logs/
+agent.log
+
+# Output directories (generated files)
+outputs/
+**/outputs/*
+!outputs/.gitkeep
+
+# Temporary files
+tmp/
+temp/
+*.tmp
+*.temp
+
+# ---------------------------------------------------------------------
+# Virtual Environments
+# ---------------------------------------------------------------------
+
+# Python virtual environments
+venv/
+env/
+.venv/
+ENV/
+
+# Conda environments
+.env/
+
+# ---------------------------------------------------------------------
+# Documentation
+# ---------------------------------------------------------------------
+
+# Sphinx
+docs/_build/
+
+# MkDocs
+site/
+
+# ---------------------------------------------------------------------
+# Project Specific
+# ---------------------------------------------------------------------
+
+# Agent-specific generated files
+**/agent.log
+**/*.pyc
+**/__pycache__/
+
+# Egg-info directories (Python package metadata)
+**/*.egg-info/
+
+# Kubernetes agent outputs
+k8s-agent/**/outputs/*
+!k8s-agent/**/outputs/.gitkeep
+
+
+# ---------------------------------------------------------------------
+# Configuration Files to KEEP (should be committed)
+# ---------------------------------------------------------------------
+# .pre-commit-config.yaml - Keep
+# .prettierrc - Keep
+# .stylelintrc.json - Keep
+# commitlint.config.cjs - Keep
+# pyproject.toml - Keep
+# requirements.txt - Keep
+# README.md - Keep
+
+# ---------------------------------------------------------------------
+# Miscellaneous
+# ---------------------------------------------------------------------
+
+# Custom
+.cache
+.project
+.settings
+.classpath
+.factorypath
+
+# JetBrains IDEs
+.idea/
+*.iml
+
+# VS Code
+.vscode/*
+!.vscode/settings.json
+!.vscode/tasks.json
+!.vscode/launch.json
+!.vscode/extensions.json
+
+# End of .gitignore
diff --git a/k8s-agent/message-gateway/.kilocode/rules/specify-rules.md b/k8s-agent/message-gateway/.kilocode/rules/specify-rules.md
new file mode 100644
index 0000000..56f79b3
--- /dev/null
+++ b/k8s-agent/message-gateway/.kilocode/rules/specify-rules.md
@@ -0,0 +1,29 @@
+# message-gateway Development Guidelines
+
+Auto-generated from all feature plans. Last updated: 2026-03-17
+
+## Active Technologies
+
+- Python 3.11+ (based on requirements.txt dependencies) + FastAPI (>=0.110.0), uvicorn[standard] (>=0.29.0), slack-sdk (>=3.27.0), google-auth (>=2.28.0), python-dotenv (>=1.0.0) (001-slack-adapter)
+
+## Project Structure
+
+```text
+src/
+tests/
+```
+
+## Commands
+
+```bash
+cd src
+pytest
+ruff check .
+```
+
+## Code Style
+
+Python 3.11+ (based on requirements.txt dependencies): Follow standard conventions
+
+## Recent Changes
+
+- 001-slack-adapter: Added Python 3.11+ (based on requirements.txt dependencies) + FastAPI (>=0.110.0), uvicorn[standard] (>=0.29.0), slack-sdk (>=3.27.0), google-auth (>=2.28.0), python-dotenv (>=1.0.0)
+
+
+
diff --git a/k8s-agent/message-gateway/.kilocode/workflows/speckit.analyze.md b/k8s-agent/message-gateway/.kilocode/workflows/speckit.analyze.md
new file mode 100644
index 0000000..98b04b0
--- /dev/null
+++ b/k8s-agent/message-gateway/.kilocode/workflows/speckit.analyze.md
@@ -0,0 +1,184 @@
+---
+description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
+---
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Goal
+
+Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.
+
+## Operating Constraints
+
+**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
+
+**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.
+
+## Execution Steps
+
+### 1. Initialize Analysis Context
+
+Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
+
+- SPEC = FEATURE_DIR/spec.md
+- PLAN = FEATURE_DIR/plan.md
+- TASKS = FEATURE_DIR/tasks.md
+
+Abort with an error message if any required file is missing (instruct the user to run missing prerequisite command).
+For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+
+### 2. Load Artifacts (Progressive Disclosure)
+
+Load only the minimal necessary context from each artifact:
+
+**From spec.md:**
+
+- Overview/Context
+- Functional Requirements
+- Non-Functional Requirements
+- User Stories
+- Edge Cases (if present)
+
+**From plan.md:**
+
+- Architecture/stack choices
+- Data Model references
+- Phases
+- Technical constraints
+
+**From tasks.md:**
+
+- Task IDs
+- Descriptions
+- Phase grouping
+- Parallel markers [P]
+- Referenced file paths
+
+**From constitution:**
+
+- Load `.specify/memory/constitution.md` for principle validation
+
+### 3. Build Semantic Models
+
+Create internal representations (do not include raw artifacts in output):
+
+- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
+- **User story/action inventory**: Discrete user actions with acceptance criteria
+- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
+- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
+
+### 4. Detection Passes (Token-Efficient Analysis)
+
+Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
+
+#### A. Duplication Detection
+
+- Identify near-duplicate requirements
+- Mark lower-quality phrasing for consolidation
+
+#### B. Ambiguity Detection
+
+- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
+- Flag unresolved placeholders (TODO, TKTK, ???, ``, etc.)
+
+#### C. Underspecification
+
+- Requirements with verbs but missing object or measurable outcome
+- User stories missing acceptance criteria alignment
+- Tasks referencing files or components not defined in spec/plan
+
+#### D. Constitution Alignment
+
+- Any requirement or plan element conflicting with a MUST principle
+- Missing mandated sections or quality gates from constitution
+
+#### E. Coverage Gaps
+
+- Requirements with zero associated tasks
+- Tasks with no mapped requirement/story
+- Non-functional requirements not reflected in tasks (e.g., performance, security)
+
+#### F. Inconsistency
+
+- Terminology drift (same concept named differently across files)
+- Data entities referenced in plan but absent in spec (or vice versa)
+- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
+- Conflicting requirements (e.g., one requires Next.js while another specifies Vue)
+
+### 5. Severity Assignment
+
+Use this heuristic to prioritize findings:
+
+- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
+- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
+- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
+- **LOW**: Style/wording improvements, minor redundancy not affecting execution order
+
+### 6. Produce Compact Analysis Report
+
+Output a Markdown report (no file writes) with the following structure:
+
+## Specification Analysis Report
+
+| ID | Category | Severity | Location(s) | Summary | Recommendation |
+|----|----------|----------|-------------|---------|----------------|
+| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
+
+(Add one row per finding; generate stable IDs prefixed by category initial.)
+
+**Coverage Summary Table:**
+
+| Requirement Key | Has Task? | Task IDs | Notes |
+|-----------------|-----------|----------|-------|
+
+**Constitution Alignment Issues:** (if any)
+
+**Unmapped Tasks:** (if any)
+
+**Metrics:**
+
+- Total Requirements
+- Total Tasks
+- Coverage % (requirements with >=1 task)
+- Ambiguity Count
+- Duplication Count
+- Critical Issues Count
+
+### 7. Provide Next Actions
+
+At end of report, output a concise Next Actions block:
+
+- If CRITICAL issues exist: Recommend resolving before `/speckit.implement`
+- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
+- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"
+
+### 8. Offer Remediation
+
+Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
+
+## Operating Principles
+
+### Context Efficiency
+
+- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
+- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
+- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
+- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts
+
+### Analysis Guidelines
+
+- **NEVER modify files** (this is read-only analysis)
+- **NEVER hallucinate missing sections** (if absent, report them accurately)
+- **Prioritize constitution violations** (these are always CRITICAL)
+- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
+- **Report zero issues gracefully** (emit success report with coverage statistics)
+
+## Context
+
+$ARGUMENTS
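Step 3's slug derivation and step 6's coverage metric are mechanical enough to sketch. The keyword-overlap matcher below is a crude stand-in for the explicit-reference matching the step mentions, offered only to make the intended behaviour concrete:

```python
import re

def requirement_key(text: str) -> str:
    """Derive a stable slug from a requirement's imperative phrase,
    per step 3: 'User can upload file' -> 'user-can-upload-file'."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def map_tasks(req_keys: list[str], tasks: dict[str, str]) -> dict[str, list[str]]:
    """Map requirement keys to task IDs by keyword overlap (a rough
    stand-in for explicit ID / key-phrase references)."""
    mapping = {}
    for key in req_keys:
        words = set(key.split("-"))
        mapping[key] = sorted(tid for tid, desc in tasks.items()
                              if words & set(requirement_key(desc).split("-")))
    return mapping

def coverage_pct(mapping: dict[str, list[str]]) -> int:
    """Coverage %% metric from step 6: requirements with >= 1 task."""
    covered = sum(1 for tids in mapping.values() if tids)
    return round(100 * covered / max(len(mapping), 1))
```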
diff --git a/k8s-agent/message-gateway/.kilocode/workflows/speckit.checklist.md b/k8s-agent/message-gateway/.kilocode/workflows/speckit.checklist.md
new file mode 100644
index 0000000..b7624e2
--- /dev/null
+++ b/k8s-agent/message-gateway/.kilocode/workflows/speckit.checklist.md
@@ -0,0 +1,295 @@
+---
+description: Generate a custom checklist for the current feature based on user requirements.
+---
+
+## Checklist Purpose: "Unit Tests for English"
+
+**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.
+
+**NOT for verification/testing**:
+
+- ❌ NOT "Verify the button clicks correctly"
+- ❌ NOT "Test error handling works"
+- ❌ NOT "Confirm the API returns 200"
+- ❌ NOT checking if code/implementation matches the spec
+
+**FOR requirements quality validation**:
+
+- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
+- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
+- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
+- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
+- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases)
+
+**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Execution Steps
+
+1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
+ - All file paths must be absolute.
+ - For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+
+2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
+ - Be generated from the user's phrasing + extracted signals from spec/plan/tasks
+ - Only ask about information that materially changes checklist content
+ - Be skipped individually if already unambiguous in `$ARGUMENTS`
+ - Prefer precision over breadth
+
+ Generation algorithm:
+ 1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
+ 2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
+ 3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit.
+ 4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
+ 5. Formulate questions chosen from these archetypes:
+ - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
+ - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
+ - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
+ - Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
+ - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
+ - Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")
+
+ Question formatting rules:
+ - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
+ - Limit to A–E options maximum; omit table if a free-form answer is clearer
+ - Never ask the user to restate what they already said
+ - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."
+
+ Defaults when interaction impossible:
+ - Depth: Standard
+ - Audience: Reviewer (PR) if code-related; Author otherwise
+ - Focus: Top 2 relevance clusters
+
+ Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow‑ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if user explicitly declines more.
+
+3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
+ - Derive checklist theme (e.g., security, review, deploy, ux)
+ - Consolidate explicit must-have items mentioned by user
+ - Map focus selections to category scaffolding
+ - Infer any missing context from spec/plan/tasks (do NOT hallucinate)
+
+4. **Load feature context**: Read from FEATURE_DIR:
+ - spec.md: Feature requirements and scope
+ - plan.md (if exists): Technical details, dependencies
+ - tasks.md (if exists): Implementation tasks
+
+ **Context Loading Strategy**:
+ - Load only necessary portions relevant to active focus areas (avoid full-file dumping)
+ - Prefer summarizing long sections into concise scenario/requirement bullets
+ - Use progressive disclosure: add follow-on retrieval only if gaps detected
+ - If source docs are large, generate interim summary items instead of embedding raw text
+
+5. **Generate checklist** - Create "Unit Tests for Requirements":
+ - Create `FEATURE_DIR/checklists/` directory if it doesn't exist
+ - Generate unique checklist filename:
+ - Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
+ - Format: `[domain].md`
+ - File handling behavior:
+ - If file does NOT exist: Create new file and number items starting from CHK001
+ - If file exists: Append new items to existing file, continuing from the last CHK ID (e.g., if last item is CHK015, start new items at CHK016)
+ - Never delete or replace existing checklist content - always preserve and append
+
+ **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
+ Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
+ - **Completeness**: Are all necessary requirements present?
+ - **Clarity**: Are requirements unambiguous and specific?
+ - **Consistency**: Do requirements align with each other?
+ - **Measurability**: Can requirements be objectively verified?
+ - **Coverage**: Are all scenarios/edge cases addressed?
+
+ **Category Structure** - Group items by requirement quality dimensions:
+ - **Requirement Completeness** (Are all necessary requirements documented?)
+ - **Requirement Clarity** (Are requirements specific and unambiguous?)
+ - **Requirement Consistency** (Do requirements align without conflicts?)
+ - **Acceptance Criteria Quality** (Are success criteria measurable?)
+ - **Scenario Coverage** (Are all flows/cases addressed?)
+ - **Edge Case Coverage** (Are boundary conditions defined?)
+ - **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
+ - **Dependencies & Assumptions** (Are they documented and validated?)
+ - **Ambiguities & Conflicts** (What needs clarification?)
+
+ **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:
+
+ ❌ **WRONG** (Testing implementation):
+ - "Verify landing page displays 3 episode cards"
+ - "Test hover states work on desktop"
+ - "Confirm logo click navigates home"
+
+ ✅ **CORRECT** (Testing requirements quality):
+ - "Are the exact number and layout of featured episodes specified?" [Completeness]
+ - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
+ - "Are hover state requirements consistent across all interactive elements?" [Consistency]
+ - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
+ - "Is the fallback behavior specified when logo image fails to load?" [Edge Cases]
+ - "Are loading states defined for asynchronous episode data?" [Completeness]
+ - "Does the spec define visual hierarchy for competing UI elements?" [Clarity]
+
+ **ITEM STRUCTURE**:
+ Each item should follow this pattern:
+ - Question format asking about requirement quality
+ - Focus on what's WRITTEN (or not written) in the spec/plan
+ - Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
+ - Reference spec section `[Spec §X.Y]` when checking existing requirements
+ - Use `[Gap]` marker when checking for missing requirements
+
+ **EXAMPLES BY QUALITY DIMENSION**:
+
+ Completeness:
+ - "Are error handling requirements defined for all API failure modes? [Gap]"
+ - "Are accessibility requirements specified for all interactive elements? [Completeness]"
+ - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"
+
+ Clarity:
+ - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
+ - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
+ - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"
+
+ Consistency:
+ - "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
+ - "Are card component requirements consistent between landing and detail pages? [Consistency]"
+
+ Coverage:
+ - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
+ - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
+ - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"
+
+ Measurability:
+ - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
+ - "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"
+
+ **Scenario Classification & Coverage** (Requirements Quality Focus):
+ - Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios
+ - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
+ - If scenario class missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
+ - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"
+
+ **Traceability Requirements**:
+ - MINIMUM: ≥80% of items MUST include at least one traceability reference
+ - Each item should reference: spec section `[Spec §X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
+ - If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
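+
+ The 80% floor can be checked mechanically before emitting the file. A minimal sketch (the marker regex and sample items are illustrative assumptions, not a prescribed implementation):
+
+ ```python
+ import re
+
+ # Matches a spec-section reference or one of the standalone markers above.
+ REF = re.compile(r"Spec §|\[(?:Gap|Ambiguity|Conflict|Assumption)[\],]")
+
+ items = [
+     "- [ ] CHK001 - Are error formats specified for all failure modes? [Completeness, Spec §FR-2]",
+     "- [ ] CHK002 - Is 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]",
+     "- [ ] CHK003 - Are edge cases addressed?",  # no traceability reference
+ ]
+ with_ref = sum(1 for line in items if REF.search(line))
+ coverage = with_ref / len(items)  # 2/3 here, below the 0.80 minimum
+ ```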
+
+ **Surface & Resolve Issues** (Requirements Quality Problems):
+ Ask questions about the requirements themselves:
+ - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
+ - Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
+ - Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]"
+ - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
+ - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"
+
+ **Content Consolidation**:
+ - Soft cap: If raw candidate items > 40, prioritize by risk/impact
+ - Merge near-duplicates checking the same requirement aspect
+ - If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"
+
+ **🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
+ - ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
+ - ❌ References to code execution, user actions, system behavior
+ - ❌ "Displays correctly", "works properly", "functions as expected"
+ - ❌ "Click", "navigate", "render", "load", "execute"
+ - ❌ Test cases, test plans, QA procedures
+ - ❌ Implementation details (frameworks, APIs, algorithms)
+
+ **✅ REQUIRED PATTERNS** - These test requirements quality:
+ - ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
+ - ✅ "Is [vague term] quantified/clarified with specific criteria?"
+ - ✅ "Are requirements consistent between [section A] and [section B]?"
+ - ✅ "Can [requirement] be objectively measured/verified?"
+ - ✅ "Are [edge cases/scenarios] addressed in requirements?"
+ - ✅ "Does the spec define [missing aspect]?"
+
+6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### ` lines with globally incrementing IDs starting at CHK001.
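+
+ A minimal fallback skeleton (illustrative; the title, categories, and items are placeholders, not prescribed content):
+
+ ```markdown
+ # UX Requirements Quality Checklist
+
+ **Purpose**: Validate requirement quality before planning
+ **Created**: [DATE]
+
+ ## Requirement Clarity
+
+ - [ ] CHK001 - Is "prominent display" quantified with specific sizing? [Clarity, Spec §FR-4]
+
+ ## Requirement Completeness
+
+ - [ ] CHK002 - Are loading states defined for asynchronous data? [Gap]
+ ```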
+
+7. **Report**: Output the full path to the checklist file and the item count, and state whether the run created a new file or appended to an existing one. Also summarize:
+ - Focus areas selected
+ - Depth level
+ - Actor/timing
+ - Any explicit user-specified must-have items incorporated
+
+**Important**: Each `/speckit.checklist` command invocation uses a short, descriptive checklist filename and either creates a new file or appends to an existing one. This allows:
+
+- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
+- Simple, memorable filenames that indicate checklist purpose
+- Easy identification and navigation in the `checklists/` folder
+
+To avoid clutter, use descriptive types and clean up obsolete checklists when done.
+
+## Example Checklist Types & Sample Items
+
+**UX Requirements Quality:** `ux.md`
+
+Sample items (testing the requirements, NOT the implementation):
+
+- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
+- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
+- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
+- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
+- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
+- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"
+
+**API Requirements Quality:** `api.md`
+
+Sample items:
+
+- "Are error response formats specified for all failure scenarios? [Completeness]"
+- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
+- "Are authentication requirements consistent across all endpoints? [Consistency]"
+- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
+- "Is versioning strategy documented in requirements? [Gap]"
+
+**Performance Requirements Quality:** `performance.md`
+
+Sample items:
+
+- "Are performance requirements quantified with specific metrics? [Clarity]"
+- "Are performance targets defined for all critical user journeys? [Coverage]"
+- "Are performance requirements under different load conditions specified? [Completeness]"
+- "Can performance requirements be objectively measured? [Measurability]"
+- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"
+
+**Security Requirements Quality:** `security.md`
+
+Sample items:
+
+- "Are authentication requirements specified for all protected resources? [Coverage]"
+- "Are data protection requirements defined for sensitive information? [Completeness]"
+- "Is the threat model documented and requirements aligned to it? [Traceability]"
+- "Are security requirements consistent with compliance obligations? [Consistency]"
+- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"
+
+## Anti-Examples: What NOT To Do
+
+**❌ WRONG - These test implementation, not requirements:**
+
+```markdown
+- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
+- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
+- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
+- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
+```
+
+**✅ CORRECT - These test requirements quality:**
+
+```markdown
+- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
+- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
+- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
+- [ ] CHK004 - Are the selection criteria for related episodes documented? [Gap, Spec §FR-005]
+- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
+- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
+```
+
+**Key Differences:**
+
+- Wrong: Tests if the system works correctly
+- Correct: Tests if the requirements are written correctly
+- Wrong: Verification of behavior
+- Correct: Validation of requirement quality
+- Wrong: "Does it do X?"
+- Correct: "Is X clearly specified?"
diff --git a/k8s-agent/message-gateway/.kilocode/workflows/speckit.clarify.md b/k8s-agent/message-gateway/.kilocode/workflows/speckit.clarify.md
new file mode 100644
index 0000000..f2a9696
--- /dev/null
+++ b/k8s-agent/message-gateway/.kilocode/workflows/speckit.clarify.md
@@ -0,0 +1,181 @@
+---
+description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
+handoffs:
+ - label: Build Technical Plan
+ agent: speckit.plan
+ prompt: Create a plan for the spec. I am building with...
+---
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Outline
+
+Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
+
+Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
+
+Execution steps:
+
+1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode; `-Json -PathsOnly` for the PowerShell variant). Parse minimal JSON payload fields:
+ - `FEATURE_DIR`
+ - `FEATURE_SPEC`
+ - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
+ - If JSON parsing fails, abort and instruct user to re-run `/speckit.specify` or verify feature branch environment.
+ - For arguments containing single quotes, such as "I'm Groot", use the escape syntax 'I'\''m Groot' (or double-quote when possible: "I'm Groot").
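+
+ The parse-and-abort behavior above can be sketched as follows (the payload literal is hypothetical; the real values come from the script's stdout):
+
+ ```python
+ import json
+ import sys
+
+ # Hypothetical stdout from check-prerequisites.sh --json --paths-only
+ raw = '{"FEATURE_DIR": "specs/001-demo", "FEATURE_SPEC": "specs/001-demo/spec.md"}'
+
+ try:
+     paths = json.loads(raw)
+     feature_dir = paths["FEATURE_DIR"]
+     feature_spec = paths["FEATURE_SPEC"]
+ except (json.JSONDecodeError, KeyError):
+     # Missing or malformed payload: abort and point the user at recovery steps.
+     sys.exit("Re-run /speckit.specify or verify the feature branch environment.")
+ ```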
+
+2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
+
+ Functional Scope & Behavior:
+ - Core user goals & success criteria
+ - Explicit out-of-scope declarations
+ - User roles / personas differentiation
+
+ Domain & Data Model:
+ - Entities, attributes, relationships
+ - Identity & uniqueness rules
+ - Lifecycle/state transitions
+ - Data volume / scale assumptions
+
+ Interaction & UX Flow:
+ - Critical user journeys / sequences
+ - Error/empty/loading states
+ - Accessibility or localization notes
+
+ Non-Functional Quality Attributes:
+ - Performance (latency, throughput targets)
+ - Scalability (horizontal/vertical, limits)
+ - Reliability & availability (uptime, recovery expectations)
+ - Observability (logging, metrics, tracing signals)
+ - Security & privacy (authN/Z, data protection, threat assumptions)
+ - Compliance / regulatory constraints (if any)
+
+ Integration & External Dependencies:
+ - External services/APIs and failure modes
+ - Data import/export formats
+ - Protocol/versioning assumptions
+
+ Edge Cases & Failure Handling:
+ - Negative scenarios
+ - Rate limiting / throttling
+ - Conflict resolution (e.g., concurrent edits)
+
+ Constraints & Tradeoffs:
+ - Technical constraints (language, storage, hosting)
+ - Explicit tradeoffs or rejected alternatives
+
+ Terminology & Consistency:
+ - Canonical glossary terms
+ - Avoided synonyms / deprecated terms
+
+ Completion Signals:
+ - Acceptance criteria testability
+ - Measurable Definition of Done style indicators
+
+ Misc / Placeholders:
+ - TODO markers / unresolved decisions
+ - Ambiguous adjectives ("robust", "intuitive") lacking quantification
+
+ For each category with Partial or Missing status, add a candidate question opportunity unless:
+ - Clarification would not materially change implementation or validation strategy
+ - Information is better deferred to planning phase (note internally)
+
+3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
+ - Maximum of 5 total questions across the whole session.
+ - Each question must be answerable with EITHER:
+ - A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
+ - A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
+ - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
+ - Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
+ - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
+ - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
+ - If more than 5 categories remain unresolved, select the top 5 by an (Impact * Uncertainty) heuristic.
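+
+ The selection heuristic in the final bullet can be sketched as follows (category names and the 1-5 scoring scale are illustrative assumptions):
+
+ ```python
+ # (category, impact, uncertainty) with assumed 1-5 scores
+ candidates = [
+     ("Security posture", 5, 4),
+     ("Data lifecycle", 4, 4),
+     ("Performance targets", 4, 3),
+     ("Error handling", 3, 4),
+     ("Localization", 2, 2),
+     ("Glossary terms", 1, 3),
+ ]
+ # Rank by Impact * Uncertainty; cap at the 5-question session limit.
+ top5 = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)[:5]
+ ```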
+
+4. Sequential questioning loop (interactive):
+ - Present EXACTLY ONE question at a time.
+ - For multiple‑choice questions:
+ - **Analyze all options** and determine the **most suitable option** based on:
+ - Best practices for the project type
+ - Common patterns in similar implementations
+ - Risk reduction (security, performance, maintainability)
+ - Alignment with any explicit project goals or constraints visible in the spec
+ - Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
+ - Format as: `**Recommended:** Option [X] - `
+ - Then render all options as a Markdown table:
+
+ | Option | Description |
+ |--------|-------------|
+ | A |