The Constitutional AI Operating System. Orchestrate Claude · GPT · Gemini · Codex as one team — shared memory, shared skills, shared agents. Routes through your existing IDE subscriptions (Antigravity, ChatGPT) so multi-model collaboration costs $0 in API spend.
Today's AI tools — Claude Code, Codex CLI, Cursor, Continue, Antigravity, raw API — each live in their own silo. The same user, the same machine, the same project, but every tool starts from zero each session:
- The preference you taught Claude doesn't carry over to Codex
- The skill you wrote in Cursor isn't visible to Antigravity
- The bug Codex fixed yesterday gets hit again by Claude tomorrow
- Every assistant rebuilds knowledge nobody chose to lose
Nova Kernel is the missing layer: a single source of truth for memory + skills + agents, with automatic projection to all the AI tools you use. It's the OS your AIs share.
Most "agent frameworks" assume you'll pay per-token to a single API. Nova flips that: it routes every task through whichever AI tool you already have a subscription with, and treats them as a coordinated team.
| Worker | What it's good at | How Nova reaches it | Your cost |
|---|---|---|---|
| Claude Sonnet 4.6 | Code reasoning, structured extraction | `antigravity-claude-sonnet-4-6` via ag-bridge `:11435` | $0 (Antigravity IDE subscription) |
| Claude Opus 4.6 Thinking | Deep planning, multi-step decisions | `antigravity-claude-opus-4-6-thinking` | $0 (Antigravity) |
| Gemini 3.1 Pro High | Long-context analysis, multimodal | `antigravity-gemini-3.1-pro-high` (or direct Gemini API free tier) | $0 (Antigravity / free tier) |
| Gemini 3 Flash | Fast classification, tagging | `gemini-flash` direct | $0 (free tier) |
| GPT-5 / Codex | Code review, sandboxed execution | `codex` CLI (npm `@openai/codex`) | $0 (ChatGPT subscription) |
| Local bge-m3 / Ollama | Embeddings, vector search | `http://127.0.0.1:11434` | $0 (local) |
Driver Claude orchestrates the team — same conversation, but each task automatically routed to the cheapest model good enough for the job:
You: "fix the auth bug, then verify with tests"
│
▼
Driver Claude (you're talking to)
├─ writes the patch → Sonnet 4.6 (via ag-bridge, free)
├─ deep-thinks edge cases → Opus 4.6 Thinking (via ag-bridge, free)
├─ runs npm test → Codex CLI (via ChatGPT sub, free)
├─ summarizes outcome → Gemini Flash (free tier)
└─ writes lesson learned → memory (local, free)
│
▼
Net cost: $0
When you don't have a subscription for a tier, Nova gracefully falls back to the next best option — set ANTHROPIC_API_KEY / OPENAI_API_KEY and Nova uses them as a last resort.
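A minimal sketch of that routing-plus-fallback idea, assuming a Node 18+ environment. The real logic lives in `kernel/workers/ai-executor.mjs` and `providers.mjs`; the task-type names, the `bridgeAvailable` probe, and the `codex-cli` / `anthropic-api` placeholders and their env vars are illustrative assumptions, not the actual identifiers:

```js
// Sketch only: prefer subscription-backed workers, fall back to API keys last.
const FALLBACK_CHAINS = {
  'code-patch':          ['antigravity-claude-sonnet-4-6', 'codex-cli', 'anthropic-api'],
  'deep-planning':       ['antigravity-claude-opus-4-6-thinking', 'anthropic-api'],
  'fast-classification': ['gemini-flash', 'antigravity-gemini-3.1-pro-high'],
};

async function bridgeAvailable() {
  // Hypothetical probe: is the ag-bridge answering on :11435?
  try {
    await fetch('http://127.0.0.1:11435', { signal: AbortSignal.timeout(1000) });
    return true;
  } catch {
    return false;
  }
}

async function pickWorker(taskType) {
  const hasBridge = await bridgeAvailable();
  for (const worker of FALLBACK_CHAINS[taskType] ?? []) {
    if (worker.startsWith('antigravity-') && hasBridge) return worker;              // $0 (IDE subscription)
    if (worker.startsWith('gemini') && process.env.GEMINI_API_KEY) return worker;   // $0 (free tier)
    if (worker === 'codex-cli' && process.env.CODEX_CLI) return worker;             // $0 (ChatGPT subscription)
    if (worker === 'anthropic-api' && process.env.ANTHROPIC_API_KEY) return worker; // paid API, last resort
  }
  throw new Error(`no available worker for: ${taskType}`);
}
```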
┌──────────────────────────────────┐
│ Your AI tools │
│ Claude · Codex · Cursor · ... │
└──────────────┬───────────────────┘
│ all read the same
▼
╔══════════════════════════════════╗
║ Nova Kernel — single source ║
║ of truth (append-only jsonl) ║
╚══════════════════════════════════╝
│
┌──────────┬───────────────┼──────────────────┬──────────────┐
▼ ▼ ▼ ▼ ▼
Memory Skills Agents Pipelines Connectors
(4 types) (proposed → (registry + (debate / code / (8 external
voted → invoke) codex) tools)
promoted)
| # | Loop | Trigger | What it does |
|---|---|---|---|
| ① | Task Identification | Every new task | `nova_task_plan(intent)` — keyword bigram match → relevant skills/agents/warnings (see sketch below) |
| ② | Execution Telemetry | Every agent call | Failure 100% / success 12.5% sampling → auto feedback memory |
| ③ | Skill Distillation | 6h cron | Cluster recent feedback → LLM proposes new skill → write to proposals/ |
| ④ | External Discovery | 24h cron | npm version compare for connectors + LLM freshness check for skills → upgrade proposals |
| ⑤ | Constitutional Council | On proposal | 3 AI voters (Opus + Gemini Pro + Sonnet) vote → user final approval |
| ⑥ | 4-Way Projection | <10ms after write | Sync to ~/.claude/, ~/.codex/, ./AGENTS.md, ./GEMINI.md |
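Loop ① relies on keyword bigram matching. The sketch below shows the general idea only, with made-up skill records; the real scoring lives in `kernel/task/task-planner.mjs` and may differ:

```js
// Illustrative sketch of keyword-bigram matching between an intent and the skill library.
function bigrams(text) {
  const words = text.toLowerCase().match(/[a-z0-9]+/g) ?? [];
  const pairs = [];
  for (let i = 0; i < words.length - 1; i++) pairs.push(`${words[i]} ${words[i + 1]}`);
  return new Set(pairs);
}

function matchCapabilities(intent, skills) {
  const intentBigrams = bigrams(intent);
  return skills
    .map((skill) => {
      const overlap = [...bigrams(`${skill.name} ${skill.summary}`)]
        .filter((b) => intentBigrams.has(b)).length;
      return { skill: skill.name, score: overlap };
    })
    .filter((m) => m.score > 0)
    .sort((a, b) => b.score - a.score);
}

// Example with invented skill records:
matchCapabilities('implement an atomic file write helper', [
  { name: 'atomic file write', summary: 'write temp file then rename for atomic file updates' },
  { name: 'jsonl memory append', summary: 'append records to memory jsonl' },
]);
// → [{ skill: 'atomic file write', score: 2 }]
```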
Background agents run on their own schedules: arch-snapshot 30m · gap-detector 60m · daily-digest 24h · connectors 60m+live · kernel-watch live · skill-miner 6h · memory-hygiene 12h · external-scout 24h
```bash
git clone https://github.com/<your-org>/nova-kernel.git
cd nova-kernel
npm install
# Configure
cp .env.example .env
# Edit .env: at minimum set GEMINI_API_KEY (free tier works)
# Run
node --env-file=.env start-ecosystem.mjs --kernel
# Server now listening on http://127.0.0.1:3700

# Identify capabilities for a task
curl -X POST http://127.0.0.1:3700/task/plan \
-H "Authorization: Bearer $NOVA_INTERNAL_TOKEN" \
-H "Content-Type: application/json" \
-d '{"intent": "implement an atomic file write helper"}'
# → returns matching skills, agents, warnings
# Scan memory hygiene
curl -X POST http://127.0.0.1:3700/memory/hygiene \
-H "Authorization: Bearer $NOVA_INTERNAL_TOKEN" \
-d '{}'
# Check what's in the skill library
ls evolution/skills/
```

Add to your MCP client config:

```json
{
"mcpServers": {
"nova": {
"command": "node",
"args": ["D:/path/to/nova-kernel/bin/nova-mcp.mjs"]
}
}
}
```

41 MCP tools become available: `nova_health`, `nova_task_plan`, `nova_memory_write`, `nova_council_submit`, `nova_scout_external`, ...
| Level | Scope | Behavior |
|---|---|---|
| L0 | `constitutional.json`, `audit.db`, `l3-gate.mjs` | Hard-locked. Any AI write → rejected. |
| L1 | Internal generation (text, reports) | Auto-execute if confidence ≥ 0.85 |
| L2 | Predictions, internal mutations | Execute + 24h human veto window |
| L3 | External actions (publish, charge, message) | Mandatory council vote + user approval |
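A rough sketch of how a gate along these lines can classify actions. The real classifier is `kernel/utils/l3-gate.mjs`; the `action` shape and the ask-first branch for low-confidence L1 output are assumptions:

```js
// Sketch of the L0-L3 gating idea; the real rules live in
// kernel/utils/l3-gate.mjs and kernel/constitutional.json.
const L0_LOCKED = ['constitutional.json', 'audit.db', 'l3-gate.mjs'];

function gate(action) {
  // action: { type: 'write'|'generate'|'mutate'|'external', target?, confidence? }
  if (action.target && L0_LOCKED.some((f) => action.target.endsWith(f))) {
    return { level: 'L0', decision: 'reject' };                      // hard-locked files
  }
  if (action.type === 'external') {
    return { level: 'L3', decision: 'council-vote+user-approval' };  // publish / charge / message
  }
  if (action.type === 'mutate') {
    return { level: 'L2', decision: 'execute+24h-veto-window' };
  }
  // L1: internal generation
  return action.confidence >= 0.85
    ? { level: 'L1', decision: 'auto-execute' }
    : { level: 'L1', decision: 'ask-first' };
}

gate({ type: 'write', target: 'kernel/constitutional.json' });
// → { level: 'L0', decision: 'reject' }
```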
Append-only JSONL with status evolution: active → superseded → deleted. Reads use last-write-wins by ID. No row is ever physically destroyed (full audit trail). Snapshot-type memories use upsertSnapshot for constant file size.
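The read path over such a log can stay tiny, as in this sketch. Record fields beyond `id` and `status` are illustrative; the real reader/writer is `kernel/memory/memory-writer.mjs`:

```js
// Sketch: last-write-wins read over an append-only JSONL memory file.
// Nothing is rewritten in place; later lines supersede earlier ones with the same id.
import { readFileSync } from 'node:fs';

function loadMemory(path) {
  const byId = new Map();
  for (const line of readFileSync(path, 'utf8').split('\n')) {
    if (!line.trim()) continue;
    const record = JSON.parse(line);   // e.g. { id, type, status, text, ts } (illustrative shape)
    byId.set(record.id, record);       // last write wins
  }
  // Superseded/deleted rows stay on disk as the audit trail; readers only see active ones.
  return [...byId.values()].filter((r) => r.status === 'active');
}
```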
Four memory types:
- `user` — identity, preferences, hardware
- `feedback` — corrections, lessons learned
- `project` — current work context
- `reference` — external resource pointers
feedback memories
│ (skill-miner 6h, name-prefix bigram clustering)
▼
evolution/proposals/skill-*.md
│ (council 3-vote → awaiting_human)
▼
user approve
│
▼
evolution/skills/*.md ← 4-way projection → All AI tools see it
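As a sketch of what the 4-way projection step amounts to: one promoted skill, written to every tool's own location. The real implementation is `kernel/memory/memory-sync.mjs`; the per-tool `skills/` subdirectories and the copy-everything approach are assumptions made for brevity:

```js
// Sketch of 4-way projection (real logic: kernel/memory/memory-sync.mjs).
import { copyFileSync, mkdirSync } from 'node:fs';
import { homedir } from 'node:os';
import { join, basename } from 'node:path';

const TARGETS = [
  join(homedir(), '.claude', 'skills'),  // Claude Code
  join(homedir(), '.codex', 'skills'),   // Codex CLI
];
// The project-level files ./AGENTS.md and ./GEMINI.md are regenerated as markdown
// rather than copied 1:1; that step is omitted here.

function projectSkill(skillPath) {
  for (const dir of TARGETS) {
    mkdirSync(dir, { recursive: true });
    copyFileSync(skillPath, join(dir, basename(skillPath)));
  }
}
```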
kernel/utils/llm.mjs provides one calling surface for every model:
```js
import { callLlmJson } from './kernel/utils/llm.mjs';
const result = await callLlmJson(prompt, {
model: 'antigravity-claude-sonnet-4-6', // or 'gemini-flash', 'gpt-4o', etc.
task_type: 'structured-extract',
timeout_ms: 60_000,
});
// → { ok, json, model, latency_ms } — same shape regardless of provider
```

The `ai-executor` resolves model role → actual model ID via `model-discovery.mjs`. Switch providers without touching caller code.
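That indirection can be as small as a lookup table, sketched below. The role names and entries are illustrative; the real mapping comes from `model-discovery.mjs`:

```js
// Illustrative role → model-ID table; callers name a role, never a provider.
const ROLE_TO_MODEL = {
  'structured-extract': 'antigravity-claude-sonnet-4-6',
  'deep-planning':      'antigravity-claude-opus-4-6-thinking',
  'fast-classify':      'gemini-flash',
};

export function resolveModel(role) {
  const model = ROLE_TO_MODEL[role];
  if (!model) throw new Error(`unknown model role: ${role}`);
  return model;
}
```

Switching a role to a different provider then means editing one table entry; callers keep asking for the role.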
kernel/
server.js # HTTP API on :3700
constitutional.json # The framework spec (L0-L3)
utils/
l3-gate.mjs # Risk classifier + write blocker
llm.mjs # Unified LLM call + JSON extraction
redact.mjs # Auto-strip secrets from logs
memory/
memory-writer.mjs # Append-only writes + supersede + upsertSnapshot
memory-sync.mjs # 4-way projection + orphan cleanup
hygiene.mjs # Cleanup agent (test residue / module backfill)
architecture-snapshot.mjs
task/
task-planner.mjs # Identify needed skills/agents/warnings for intent
evolution/
skill-miner.mjs # 6h: cluster feedback → skill proposals (LLM-distilled)
external-scout.mjs # 24h: npm version check + skill freshness LLM
gap-detector.js # 60m: structural anti-pattern detection
proposal-engine.mjs # Generic AI-proposes-change pipeline
council/
async-council.mjs # 3-vote async council + retry mechanism
agents/
registry.json # Agent declaration (internal/python/etc.)
invoke.mjs # Universal agent dispatcher
workers/
ai-executor.mjs # Task-type → model routing
providers.mjs # Anthropic / Gemini / OpenAI / Antigravity bridge
worker-guard.mjs # Anti-pollution check (worker ≠ driver)
connectors/
discovery.mjs # External tool detection (8 manifest-driven)
manifests/*.json # Declarative tool specs
kb/ # KB v2 — vector search + intel pool + curator tiers
pipeline/
pipeline.mjs # debate / code / codex pipelines
router/
intent-router.mjs # Natural language → action routing
audit/
audit.js # SQLite tamper-evident log
notify/ # Pluggable notify (Lark / WeChat Work / DingTalk)
evolution/
skills/ # Promoted (council-approved) skills
bin/
nova-mcp.mjs # MCP server (41 tools exposed to AI clients)
The system enforces one rule above all: the AI cannot rewrite its own rules. `kernel/constitutional.json` and `kernel/utils/l3-gate.mjs` are L0 hard-locked — any AI write attempt is rejected at the kernel level. To change them, an AI must submit a proposal → 3-vote council → human final approval.
This isn't theater. The same gate that prevents AI from "deciding it doesn't need the gate anymore" is the foundation of trust. Self-evolving, but not self-emancipating.
See CONTRIBUTING.md. TL;DR:
- New skill? Write `evolution/proposals/skill-<name>.md` and submit via `nova_council_submit` — the council votes, then a maintainer approves.
- New agent? Add to `kernel/agents/registry.json` and PR.
- New connector? Add a manifest to `kernel/connectors/manifests/<tool>.json` (illustrative example below).
- Bug? Open an issue with the `nova_health` output and reproduction steps.
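For orientation only, a connector manifest might look roughly like the JSON below; every field name here is invented for illustration, since the actual schema is whatever `kernel/connectors/discovery.mjs` expects:

```json
{
  "name": "github-cli",
  "bin": "gh",
  "detect": "gh --version",
  "npm_package": null,
  "capabilities": ["create-pr", "list-issues"]
}
```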
- `docs/ARCHITECTURE.md` — full system design
- `docs/MEMORY.md` — append-only model + 4-way projection
- `docs/EVOLUTION.md` — skill lifecycle + council mechanics
- `docs/MCP.md` — all 41 MCP tools reference
- Web UI for memory browsing + council voting (currently CLI-only)
- Kubernetes deployment chart
- Postgres backend (alternative to JSONL for >100k entries)
- More connector manifests (community-driven)
- Multi-user / team mode (currently single-user)
Apache 2.0 — see LICENSE.
Built with Driver Claude (Sonnet 4.6) on a 2× RTX 5080 + 64GB Windows workstation. Memory persists. Skills compound. Agents specialize. The AI gets better at being your AI.