Have you ever finished a build session and wondered — did I miss something today that would have changed how I'm doing this?
You're juggling three, four, maybe five projects. Each one has a different stack, different blockers, different risk surface. And every single day, the AI landscape shifts — new model releases, new frameworks, deprecations, traction surges on GitHub, a tool that didn't exist last week that now does exactly what you've been building from scratch.
You can't read everything. But you also can't afford to miss the thing that matters.
Most builders end up doing one of two things: spending way too much time trying to keep up with the flood, or tuning it out entirely and hoping nothing important slips by. Both feel bad. Neither works.
What you actually need is a system that already knows your projects.
A system that reads the incoming signal, understands where each of your projects stands today, and can tell you clearly: this matters for Project A and here's what to do about it, this is noise for everything else, and here's the one thing you should work on next if you have 45 minutes. Not a newsletter. Not a dashboard. A system that thinks with you — and remembers everything between sessions.
That's what the AI Innovation Radar is.
The AI Innovation Radar is a personal intelligence layer that runs inside Claude or whatever AI system you choose. It keeps a living knowledge base of your active projects — their current state, blockers, stack decisions, and relevant innovations — and evaluates every incoming AI signal against that context using a consistent judgment framework.
You paste in a batch of findings from your daily scan (Perplexity, newsletters, whatever you use), say "drop," and the Radar evaluates each one against your specific portfolio. It files what matters, ignores what doesn't, tells you why, and keeps a prioritized action queue updated so you always know the highest-leverage thing you could work on right now.
It's not a passive archive. It's an active system. The more you use it, the smarter it gets about your projects.
```text
Daily scan (Perplexity, newsletters, GitHub trending, wherever you look)
        ↓
Paste the batch → say "drop"
        ↓
Radar evaluates each finding against your live project context
        ↓
Files what matters, filters noise, explains every decision
        ↓
Updates your project files and priority queue automatically
        ↓
Next time you ask "what should I work on?" — it already knows
```
The whole loop takes 5–10 minutes. The filing and memory happen automatically. Nothing gets lost between sessions.
Cuts noise ruthlessly. Most daily AI news is irrelevant to what you're building. The Radar filters it against your actual stack and project state — not generically, but specifically. A new TTS API update only matters if you have a project that uses audio. A new agent framework is only urgent if you're in the design phase of something it would affect.
Keeps your project context current. Every project has a live status file — current state, active blockers, next session focus, and a running log of relevant innovations. You never start a session trying to remember where you left off.
Gives you a prioritized action queue. One file, always current, ranked by impact-to-effort ratio. When you have 30 minutes, you ask what to work on and get a direct answer — not a wall of notes to re-read.
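The ranking idea can be sketched in a few lines. This is a toy illustration only: the field names and scores are invented, and the real queue is a markdown file the Radar maintains, not a script.

```python
# Hypothetical sketch of impact-to-effort ranking.
# "impact" and "effort" scores are invented for illustration.

def rank_queue(items):
    """Sort action items by impact-to-effort ratio, highest leverage first."""
    return sorted(items, key=lambda item: item["impact"] / item["effort"], reverse=True)

queue = [
    {"task": "Swap TTS provider", "impact": 8, "effort": 2},
    {"task": "Refactor auth flow", "impact": 5, "effort": 5},
    {"task": "Try new agent framework", "impact": 6, "effort": 8},
]

for item in rank_queue(queue):
    print(f'{item["task"]}: ratio {item["impact"] / item["effort"]:.2f}')
```

A quick win with low effort naturally floats to the top, which is exactly the "I have 30 minutes, what now?" answer the queue exists to give.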
Applies consistent judgment. The Disruption-to-Value test (see below) is the same framework applied to every finding, every session. No hype. No FOMO. Just: does this materially help a project right now, and what's the honest switching cost?
Builds institutional memory. Every evaluated batch is logged in a master outcomes file. The system never re-evaluates the same thing twice and can track what was tried, decided, and what actually happened.
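One way the "never re-evaluates the same thing twice" guarantee could work is by fingerprinting each finding before logging it. This is a hypothetical sketch, not the Radar's actual mechanism; in practice the dedup happens against the entries in OUTCOMES.md.

```python
# Hypothetical dedup sketch: fingerprint each finding, skip ones already seen.
import hashlib

def fingerprint(finding: str) -> str:
    """Stable ID for a finding: hash of its whitespace- and case-normalized text."""
    normalized = " ".join(finding.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()[:12]

seen = set()  # in a real system, rebuilt from the master outcomes log

def is_new(finding: str) -> bool:
    """True the first time a finding appears; False on any repeat."""
    fp = fingerprint(finding)
    if fp in seen:
        return False
    seen.add(fp)
    return True
```

Normalizing before hashing means the same headline pasted twice with different spacing or capitalization still counts as one finding.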
Trigger any mode with natural language in Claude:
| Trigger | What happens |
|---|---|
| `Drop` + paste findings | Evaluates a batch against your full project portfolio |
| `Survey [project name]` | Full landscape scan before starting a new build phase |
| `Briefing on [project name]` | Deep intelligence review for one project — where it stands, what the landscape looks like relative to your blockers |
| `Working on [project name] today` | Focus mode — surfaces only what helps the current session, ends with a status update |
| `Horizon update` | Macro AI trend scan across your full portfolio |
Every finding gets evaluated through two layers:
Layer 1 — First Principles Check
Does this tool actually do what it claims? Is it solving the actual problem, or a similar one? If the three biggest claims were wrong, is there still value?
Layer 2 — Switching Cost vs. Payoff
| Verdict | What it means |
|---|---|
| Keep building | Real value, but switching cost exceeds payoff right now |
| Integrate when convenient | Helpful, low friction — fold in at next natural pause |
| Pause and adopt | Materially changes your timeline or capability — worth stopping for |
| Foundational shift | Changes your entire approach — rare, requires full justification |
"Pause and adopt" verdicts are rare by design. The default answer to almost everything is "keep building."
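For intuition, the Layer 2 verdicts can be caricatured as a threshold function. Everything below is invented for illustration: the inputs and cutoffs are assumptions, and the real framework is qualitative judgment, not arithmetic.

```python
# Illustrative caricature of the Layer 2 verdicts. Thresholds are invented;
# the actual framework is a judgment call applied in conversation.

def verdict(payoff: float, switching_cost: float, changes_approach: bool = False) -> str:
    if changes_approach:
        return "Foundational shift"        # rare; requires full justification
    if payoff > switching_cost * 3:
        return "Pause and adopt"           # materially changes timeline or capability
    if payoff > switching_cost:
        return "Integrate when convenient" # helpful, fold in at the next pause
    return "Keep building"                 # the default for almost everything
```

Note how the structure bakes in the bias the text describes: "Keep building" is the fall-through case, and the stronger verdicts require clearing progressively higher bars.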
- Claude — Cowork mode (desktop app) is what this was built on, but it works in Claude Code or any Claude interface that supports file access and persistent context. You can adapt it to whatever AI assistant you prefer — the system is a set of files and prompts, not a locked-in platform.
- A daily scan source — This setup uses Perplexity with scheduled search prompts. You could use any source: newsletters, RSS feeds, GitHub trending, manual browsing, or even OpenClaw and other open-source intelligence tools. The Radar evaluates whatever you paste in.
- About 30 minutes to set up — Initialize your project files, customize the CLAUDE.md with your projects, set up your scan prompts. That's it.
Layer 1: CLAUDE.md — ambient context, loaded every session automatically
Layer 2: SKILL.md — the full operating system, loaded when Radar triggers
Layer 3: File system — living knowledge base, read and written each session
Claude reads CLAUDE.md automatically in every session — it knows your projects, priorities, and file locations without you having to explain anything. When you trigger a Radar mode, it loads the full skill file and your project files, runs the evaluation, and writes results back. The knowledge base grows every session.
```text
Your Workspace/
├── CLAUDE.md                              ← edit this first
├── .claude/
│   └── skills/
│       └── ai-innovation-radar/
│           └── SKILL.md                   ← the operating system
└── Projects/
    ├── AI Innovation Radar/
    │   ├── OUTCOMES.md                    ← master log of all Drop sessions
    │   ├── PRIORITY_QUEUE.md              ← your ranked action list
    │   ├── HORIZON_log.md                 ← macro trend entries
    │   └── WORKFLOW_Perplexity_Scouts.md  ← your scan prompt library
    ├── [Your Project 1]/
    │   ├── PROJECT_[Name].md              ← full project profile
    │   └── STATUS_[Name].md               ← live state + progress journal
    └── [Your Project 2]/
        ├── PROJECT_[Name].md
        └── STATUS_[Name].md
```
Step 1 — Copy this repo into your workspace folder.
Place the contents where your AI assistant can read and write files. The .claude/ folder must be at the root level.
Step 2 — Edit CLAUDE.md.
Replace all [PLACEHOLDER] values: your name, your active projects, your GitHub repos, your weekly hours. This is the most important step — it's what makes the Radar aware of your specific portfolio.
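A filled-in entry might look something like this. The structure below is a guess for illustration — follow the headings your actual CLAUDE.md template uses, and the project details here are invented:

```markdown
## Active Projects
- Voicenotes App — deployed web app; current blocker: audio latency on mobile
- Sensor Hub — IoT hardware project, design phase

## Capacity
- ~10 hours/week across all projects
```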
Step 3 — Initialize your project files.
For each active project, copy Templates/PROJECT_template.md and Templates/STATUS_template.md into Projects/[Your Project Name]/. Fill in the project profile. The STATUS file starts nearly empty and grows as you work.
Step 4 — Set up your scan prompts.
See Templates/WORKFLOW_Perplexity_Scouts_template.md for the prompt library. Adapt the domains to match your stack and projects. Set them up as scheduled searches in Perplexity (or wherever you scan). Daily for your primary tech stack, weekly for project-specific domains.
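A single scout prompt, adapted to a hypothetical stack, might read like this. The wording is illustrative, not the template's exact text:

```text
In the last 24 hours, what new releases, breaking changes, or notable
GitHub traction have appeared for: Next.js, TTS APIs, and agent
frameworks? For each item, give the source, the date, and a one-line
summary of what changed. Skip opinion pieces and roundups.
```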
Step 5 — Run your first Drop. Paste your first batch of findings and say "drop." The Radar loads, evaluates, logs, and updates your queue. You're live.
The Example/ folder shows the complete system running against a fictional portfolio — a deployed web app, an IoT hardware project, and an early-stage API concept — across about two weeks of Drop sessions. It's the fastest way to understand how the pieces connect before you customize your own.
Look at Example/Projects/AI Innovation Radar/PRIORITY_QUEUE.md first. That's the output you're building toward — a ranked, always-current answer to "what's the most important thing I can work on right now?"
This system was built and is actively used by @MrPickl3s, a novice AI developer. It started as a personal workflow and got good enough to share.
Fork it, adapt it to your stack, and submit PRs. Contributions are especially welcome in: alternative scan prompt designs, adaptations for team use, integrations with other intelligence sources, and improvements to the Disruption-to-Value framework. Issues and discussions welcome.
If you build something on top of this or adapt it in an interesting way — open an issue and share it. The system gets better when more people stress-test it against different project types.
MIT — use it, fork it, build on it.