# Source Repo Analysis
John Williams edited this page Mar 16, 2026 · 1 revision
Last Mile 360 was built by studying 16+ open-source projects in the AI agent and code analysis space. For each project, we evaluated what patterns and concepts were worth adopting — and explicitly rejected importing any code or dependencies. This page documents every evaluation.
Key principle: Patterns are portable. Dependencies are liabilities.
### OpenClaw
- Stars: 247k+ | Language: TypeScript | Risk Level: 🔴 CRITICAL
- What it is: The dominant open-source LLM inference engine
- What was taken: Nothing — no code, no dependency, no import
- What was rejected: The entire runtime. Self-hosted inference means managing GPU servers, CUDA drivers, model weights, and OOM kills
- Why rejected: Last Mile uses Claude API + Workers AI instead. Zero infrastructure to manage, predictable costs, automatic scaling. For a security product, running your own inference cluster is an unnecessary attack surface
- Replacement: Tier 1 Claude API (complex analysis) + Tier 2 Workers AI (edge inference)
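The two-tier split above can be sketched as a simple router. The task kinds and the 32 KB cutoff below are assumptions for illustration, not the actual implementation:

```typescript
type Tier = "claude-api" | "workers-ai";

interface AnalysisTask {
  kind: "deep-analysis" | "classification" | "summarize"; // illustrative task kinds
  inputBytes: number;
}

// Route complex, context-heavy work to the Claude API (Tier 1)
// and lightweight edge-friendly work to Workers AI (Tier 2).
function routeTask(task: AnalysisTask): Tier {
  if (task.kind === "deep-analysis") return "claude-api";
  // Large inputs also escalate to Tier 1 (assumed 32 KB cutoff).
  if (task.inputBytes > 32 * 1024) return "claude-api";
  return "workers-ai";
}
```

Keeping the routing decision in one pure function means neither tier's client code needs to know why a task landed on it.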
### NanoClaw
- Stars: 11k+ | Language: TypeScript | Risk Level: 🔴 CRITICAL
- What it is: Lightweight variant of OpenClaw for smaller models
- What was taken: Nothing
- What was rejected: Same reasoning as OpenClaw — self-hosted inference is operational overhead with no benefit for our use case
- Why rejected: Workers AI already provides lightweight models at the edge with zero cold start
- Replacement: Cloudflare Workers AI
### PicoClaw
- Stars: 12k+ | Language: Go | Risk Level: 🟠 HIGH
- What it is: Minimal Go implementation for model serving
- What was taken: Concept only — the idea that inference can be decomposed into small, focused units
- What was rejected: All code. Go runtime adds another language to the stack
- Why rejected: Cloudflare's platform is JavaScript/TypeScript-native; adding Go increases complexity without adding capability
### ZeroClaw
- Language: Rust | Risk Level: 🟠 HIGH
- What it is: Rust-based inference with WASM compilation target
- What was taken: Concept only — WASM as a portable execution target is interesting for future Workers integration
- What was rejected: All code. Rust WASM compilation toolchain is complex
- Why rejected: Premature optimization. Workers AI handles inference today; custom WASM inference is a Phase 6+ consideration
### MimiClaw
- Language: Pure C | Target: ESP32 | Risk Level: 🔴 CRITICAL
- What it is: Runs tiny models on microcontrollers
- What was taken: Nothing — zero overlap with cloud-based code analysis
- What was rejected: Everything. Embedded C on ESP32 has no relevance to web security scanning
- Why rejected: Completely different domain. Listed for completeness as part of the Claw ecosystem evaluation
### OpenFang
- Stars: 14k+ | Language: Rust | Risk Level: 🟡 MEDIUM
- What it is: Agent operating system with strong isolation primitives
- What was taken: Agent isolation pattern — the concept that each agent should run in its own sandboxed environment with no shared memory or filesystem
- What was rejected: Rust codebase, custom runtime, process management
- Why relevant: Directly influenced the decision to run each Last Mile agent as a separate Cloudflare Worker (V8 isolate = natural sandbox)
- Implementation: Each agent is a separate Worker deployment with its own Queues binding
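The one-Worker-per-agent pattern above could look roughly like this: each agent consumes only its own queue, and the V8 isolate gives it no shared memory or filesystem with other agents. The types below mirror the shape of Cloudflare's Queues consumer API but are defined inline; all names are illustrative:

```typescript
// Minimal inline stand-ins for the Queues consumer types (illustrative).
interface QueueMessage<T> { body: T; ack(): void; }
interface MessageBatch<T> { messages: QueueMessage<T>[]; }

interface ScanJob { repo: string; ruleIds: string[]; }

// Build one agent's queue handler from its rule runner. In a deployed
// Worker this would be wired into the `queue` export of the agent.
export function makeQueueHandler(
  runRule: (repo: string, ruleId: string) => string[],
) {
  return async (batch: MessageBatch<ScanJob>): Promise<string[]> => {
    const findings: string[] = [];
    for (const msg of batch.messages) {
      for (const ruleId of msg.body.ruleIds) {
        findings.push(...runRule(msg.body.repo, ruleId));
      }
      msg.ack(); // acknowledge only after the job's rules have run
    }
    return findings;
  };
}
```

A real consumer would persist findings rather than return them; returning here keeps the sketch self-contained.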
### CrewAI
- Language: Python | Risk Level: 🟡 MEDIUM
- What it is: Framework for orchestrating role-based AI agent teams
- What was taken: Role-based crew pattern — the idea that agents should have explicit roles, goals, and backstories that constrain their behavior
- What was rejected: Python runtime, LangChain dependency, custom agent loop
- Why relevant: Influenced agent design: each Last Mile agent has a defined role (Security, Database, Infra, Observability, Quality) with specific rules it's responsible for
- Implementation: Agent specialization via rule ownership, not general-purpose reasoning
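The rule-ownership form of specialization might be sketched like this; the rule IDs are illustrative, not the real rule set:

```typescript
// Each agent role owns a fixed set of rules (IDs are illustrative).
const RULE_OWNERSHIP: Record<string, string[]> = {
  security: ["no-exposed-secrets", "auth-required"],
  database: ["rls-enabled", "no-raw-sql"],
  infra: ["https-only"],
};

// An agent only ever evaluates the rules it owns — specialization
// comes from rule ownership, not general-purpose reasoning.
function rulesFor(agent: string): string[] {
  return RULE_OWNERSHIP[agent] ?? [];
}
```

The lookup table is the whole "role": adding a rule to an agent is a data change, not a prompt or code change.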
### AutoGen
- Language: Python | Risk Level: 🟢 LOW
- What it is: Multi-agent conversation framework
- What was taken: Conversational agent pattern — agents can discuss findings and cross-reference results
- What was rejected: Python runtime, complex conversation management, Azure dependencies
- Why relevant: Informed the results aggregation phase where findings from different agents are correlated (e.g., a missing RLS policy found by Database Agent affects the Security score)
- Implementation: Cross-agent correlation in the scoring aggregator
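A minimal sketch of that correlation, assuming hypothetical rule IDs and penalty weights:

```typescript
interface Finding { agent: string; ruleId: string; }

// Cross-agent correlation: a missing RLS policy reported by the
// Database Agent also penalizes the Security score, because the
// issue crosses both domains. Weights here are illustrative.
function securityScore(findings: Finding[], base = 100): number {
  let score = base;
  for (const f of findings) {
    if (f.agent === "security") score -= 10;
    if (f.agent === "database" && f.ruleId === "rls-missing") score -= 15;
  }
  return Math.max(0, score);
}
```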
### SuperAGI
- Language: Python | Risk Level: 🟠 HIGH
- What it is: Autonomous AI agent framework with tool use
- What was taken: Concept only — the autonomous execution loop is interesting but dangerous for security tooling
- What was rejected: Everything. Autonomous agents making security decisions without human review is antithetical to Last Mile's philosophy
- Why rejected: Security findings must be human-reviewed. Auto-fix is suggested, never auto-applied. The "super autonomous" model is inappropriate for a trust product
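The "suggested, never auto-applied" policy can be expressed as a simple gate; the types are illustrative:

```typescript
interface Fix { file: string; patch: string; approvedByHuman: boolean; }

// Fixes without explicit human approval are surfaced as suggestions
// but never written — the write callback is only reached after review.
function applyFix(fix: Fix, write: (file: string, patch: string) => void): boolean {
  if (!fix.approvedByHuman) return false;
  write(fix.file, fix.patch);
  return true;
}
```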
### LangChain
- Language: Python/JavaScript | Risk Level: 🟡 MEDIUM
- What it is: The dominant LLM application framework
- What was taken: Patterns only — chain-of-thought prompting, structured output parsing, tool-use patterns
- What was rejected: All code. Zero imports. No `langchain` in package.json
- Why rejected: LangChain is a massive dependency tree with frequent breaking changes. For a security product, every dependency is attack surface. The patterns are simple enough to implement directly
- Implementation: Direct API calls to Claude/OpenAI with structured JSON output schemas
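The defensive-parsing half of that pattern might look like this sketch: models sometimes wrap JSON in prose or code fences, so extract and validate the first JSON array rather than trusting raw output. The schema and helper name are assumptions, not the actual code:

```typescript
interface Finding { ruleId: string; severity: "low" | "medium" | "high"; }

// Pull the first JSON array out of a model response and keep only
// the entries that match the expected finding schema.
function parseFindings(modelOutput: string): Finding[] {
  const start = modelOutput.indexOf("[");
  const end = modelOutput.lastIndexOf("]");
  if (start === -1 || end === -1) throw new Error("no JSON array in output");
  const parsed = JSON.parse(modelOutput.slice(start, end + 1));
  return parsed.filter(
    (f: any) =>
      typeof f.ruleId === "string" &&
      ["low", "medium", "high"].includes(f.severity),
  );
}
```

The validation step is the part LangChain's output parsers would otherwise provide; inlined, it is a dozen lines with no dependency.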
### memU
- Risk Level: 🟡 MEDIUM
- What it is: Memory-first agent architecture with tiered storage
- What was taken: Three-tier memory pattern — hot (KV), warm (D1), cold (Vectorize) storage tiers for different access patterns
- What was rejected: Custom memory management code, Python runtime
- Why relevant: Directly shaped the storage architecture. Scan results go to D1 (queryable), hot config to KV (fast), semantic patterns to Vectorize (searchable)
- Implementation: KV for session/config cache, D1 for structured results, Vectorize for pattern similarity
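The tier routing described above can be sketched as a single dispatch function; the item kinds are illustrative:

```typescript
type StorageTier = "kv" | "d1" | "vectorize";

interface DataItem {
  kind: "session-config" | "scan-result" | "semantic-pattern"; // illustrative
}

// Map each access pattern to its tier: hot key lookups to KV,
// structured queryable results to D1, similarity search to Vectorize.
function tierFor(item: DataItem): StorageTier {
  switch (item.kind) {
    case "session-config": return "kv";
    case "scan-result": return "d1";
    case "semantic-pattern": return "vectorize";
  }
}
```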
### memU Bot
- Risk Level: 🟠 HIGH
- What it is: Chatbot built on memU memory system
- What was taken: Nothing
- What was rejected: Everything — chatbot interaction model doesn't apply to a scanner
- Why rejected: Last Mile is a CLI/CI tool, not a conversational agent
### Agent S3
- Risk Level: 🟡 MEDIUM
- What it is: Computer-use agent for browser/desktop automation
- What was taken: Concept deferred to Phase 5 — browser-based security testing (OWASP ZAP-style) via Cloudflare Browser Rendering
- What was rejected: Current implementation — desktop automation is out of scope
- Why deferred: Browser rendering API integration for runtime security testing is a future capability, not a current priority
### Nanobot
- Size: ~4k lines | Language: Python | Risk Level: 🟢 LOW
- What it is: Minimal agent loop in under 4,000 lines of Python
- What was taken: Agent loop pattern — the reason → act → observe → repeat cycle that structures how each agent processes findings
- What was rejected: Python code, specific implementations
- Why relevant: Proved that a useful agent loop can be implemented in very few lines. Influenced the design of Last Mile's agent execution cycle
- Implementation: Each agent follows: `reason` (what to check) → `act` (run rules) → `observe` (collect findings) → `repeat` (next rule)
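That loop is small enough to sketch in a few lines; the names are illustrative, in the spirit of Nanobot's minimal design:

```typescript
interface Rule { id: string; }

// reason → act → observe → repeat, driven by the agent's rule list.
function runAgentLoop(
  rules: Rule[],                    // reason: the ordered checks to make
  act: (rule: Rule) => string[],    // act: run one rule against the repo
): string[] {
  const findings: string[] = [];
  for (const rule of rules) {       // repeat: advance to the next rule
    const observed = act(rule);     // observe: collect this rule's findings
    findings.push(...observed);
  }
  return findings;
}
```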
### OpenClaw Handbook
- Risk Level: 🟢 LOW
- What it is: Documentation and operational guides for the Claw ecosystem
- What was taken: Documentation patterns — how to structure operational documentation for AI systems
- What was rejected: Content (specific to Claw infrastructure)
- Why relevant: Informed this wiki's structure
### NoClaw
- Risk Level: ⚪ NONE
- What it is: The meme entry — "what if we just... didn't?"
- What was taken: The philosophy — sometimes the right amount of infrastructure is zero
- What was rejected: N/A (it's a meme)
- Why relevant: Reinforced the decision to use managed APIs instead of self-hosted inference. NoClaw is unironically the correct answer for most teams
### Summary
| Repo | Risk | Taken | Category |
|---|---|---|---|
| OpenClaw | 🔴 CRITICAL | Nothing (replaced by Claude + Workers AI) | Inference |
| NanoClaw | 🔴 CRITICAL | Nothing (replaced by Workers AI) | Inference |
| PicoClaw | 🟠 HIGH | Concept only | Inference |
| ZeroClaw | 🟠 HIGH | Concept only | Inference |
| MimiClaw | 🔴 CRITICAL | Nothing | Inference |
| OpenFang | 🟡 MEDIUM | Agent isolation pattern | Multi-Agent |
| CrewAI | 🟡 MEDIUM | Role-based crew pattern | Multi-Agent |
| AutoGen | 🟢 LOW | Conversational pattern | Multi-Agent |
| SuperAGI | 🟠 HIGH | Concept only (autonomous = dangerous) | Multi-Agent |
| LangChain | 🟡 MEDIUM | Patterns only (zero imports) | Multi-Agent |
| memU | 🟡 MEDIUM | Three-tier memory pattern | Memory |
| memU Bot | 🟠 HIGH | Nothing | Memory |
| Agent S3 | 🟡 MEDIUM | Deferred to Phase 5 | Computer Use |
| Nanobot | 🟢 LOW | Agent loop pattern | Lightweight |
| OpenClaw Handbook | 🟢 LOW | Documentation patterns | Docs |
| NoClaw | ⚪ NONE | The philosophy | Meme |