John Williams edited this page Mar 16, 2026 · 1 revision

FAQ

Why not just use Snyk / SonarQube / Dependabot?

Those are excellent tools — but they're point solutions:

| Tool | What it covers | What it misses |
| --- | --- | --- |
| Snyk | Dependencies, container scanning | Database rules, infra config, observability, code quality |
| SonarQube | SAST, code smells, complexity | Secrets, dependencies, infrastructure, observability |
| Dependabot | Dependency updates | Everything else |
| ESLint | Code style, some patterns | Security, database, infrastructure, observability |

Last Mile 360 covers all five dimensions in a single scan with a single score. You don't need to configure 4 tools, read 4 dashboards, and mentally aggregate 4 sets of findings.

That said, Last Mile doesn't replace deep specialized tools. If you need Snyk's container scanning depth or SonarQube's 5,000+ rules, use them alongside Last Mile. We focus on breadth + actionability over depth.


Why Cloudflare over AWS / Vercel?

Three reasons:

1. Security model

Workers run in V8 isolates — no filesystem, no network sockets, no shared memory. This is inherently more secure than containers (EC2, ECS, Lambda), which share a kernel. For a security product, the runtime security model matters.

2. Zero origin servers

AWS architectures require origin servers (EC2, ECS, Fargate). Even "serverless" Lambda runs on managed servers you're implicitly trusting. Cloudflare Workers have no origin — code runs at the edge, period. This eliminates an entire class of infrastructure vulnerabilities.

3. Integrated stack

D1 + R2 + KV + Queues + Workers AI + AI Gateway + Vectorize are all first-party services with native bindings. On AWS, you'd be wiring together Lambda + DynamoDB + S3 + SQS + SageMaker + custom proxy with IAM policies, VPCs, and security groups. Cloudflare's integration is tighter and simpler.
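The native-binding model can be sketched in a wrangler.toml fragment. All names and IDs below are illustrative placeholders, not Last Mile's actual configuration:

```toml
# Hypothetical wrangler.toml fragment — names and IDs are placeholders.
name = "scanner-worker"
main = "src/index.ts"

[[d1_databases]]
binding = "DB"            # SQL database, available in code as env.DB
database_name = "scans"
database_id = "<uuid>"

[[r2_buckets]]
binding = "UPLOADS"       # object storage, env.UPLOADS
bucket_name = "scan-uploads"

[[kv_namespaces]]
binding = "CACHE"         # key-value store, env.CACHE
id = "<namespace-id>"

[[queues.producers]]
binding = "SCAN_QUEUE"    # async job queue, env.SCAN_QUEUE
queue = "scan-jobs"

[ai]
binding = "AI"            # Workers AI inference, env.AI
```

Each service becomes a typed property on the Worker's `env` object — no IAM policies, VPCs, or security groups to wire up.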

What about Vercel?

Vercel is great for frontend deployment. But it doesn't have: Durable Objects, Queues, D1, Vectorize, Workers AI, AI Gateway, or Zero Trust. Last Mile needs a full-stack platform, not a frontend host.


Why no self-hosted models?

See Inference Strategy for the full reasoning. The short version:

  1. GPUs are attack surface. A server running CUDA drivers is a server that needs patching.
  2. Operational burden. Managing OOM kills, model weight downloads, and VRAM allocation is not our value proposition.
  3. API inference is better for our use case. Structured input → structured output, no streaming, cacheable.
  4. Cost is predictable. Per-token pricing beats per-hour GPU billing when usage is bursty.
  5. Redundancy. A 4-tier fallback chain is more reliable than a single GPU server.
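The per-token vs per-hour point (reason 4) can be made concrete with a back-of-the-envelope comparison. All numbers below are hypothetical, purely to illustrate the bursty-usage economics:

```typescript
// Hypothetical cost model: a dedicated GPU server bills for every hour
// of the month, while per-token API pricing only bills for scans run.

function gpuMonthlyCost(hourlyRate: number, hoursInMonth = 730): number {
  return hourlyRate * hoursInMonth; // billed whether or not scans run
}

function apiMonthlyCost(
  scansPerMonth: number,
  tokensPerScan: number,
  pricePerMillionTokens: number,
): number {
  return (scansPerMonth * tokensPerScan * pricePerMillionTokens) / 1_000_000;
}

// e.g. 500 scans/month at ~50k tokens each and $3 per million tokens
// costs $75, versus ~$730 for a $1/hour GPU that sits mostly idle.
```

With bursty workloads the GPU's idle hours dominate, which is why the per-token model wins here.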

What if Claude / OpenAI goes down?

Last Mile has a 4-tier fallback chain:

Claude API → Workers AI → OpenAI → Gemini → Rule-only mode

If every LLM provider is down simultaneously (unlikely but possible), the scanner falls back to rule-only mode: the 14 SAST rules, dependency auditing, and all agent checks still run without AI. You get a complete scan — just without the contextual severity assessment that LLMs provide.

The scanner degrades gracefully. LLM analysis is a quality enhancement, not a hard dependency.
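The fallback behavior can be sketched as follows. This is an illustrative model, not Last Mile's actual implementation; the types and function names are assumptions:

```typescript
// Sketch of a provider fallback chain: each LLM tier is tried in order,
// and if every tier fails, the scan proceeds in rule-only mode.

type Finding = { rule: string; severity: string };
type Provider = (code: string) => Promise<Finding[]>;

async function analyzeWithFallback(
  code: string,
  llmTiers: Provider[],                      // e.g. Claude, Workers AI, OpenAI, Gemini
  ruleOnlyScan: (code: string) => Finding[], // deterministic SAST rules
): Promise<{ findings: Finding[]; mode: string }> {
  for (const [i, tier] of llmTiers.entries()) {
    try {
      return { findings: await tier(code), mode: `llm-tier-${i + 1}` };
    } catch {
      // Provider down or rate-limited: fall through to the next tier.
    }
  }
  // Every LLM provider failed: the deterministic rules still run.
  return { findings: ruleOnlyScan(code), mode: "rule-only" };
}
```

The key property is that the rule-only path is synchronous and local, so it cannot fail for the same reasons the LLM tiers can.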


How is my code protected?

Local CLI mode (last-mile scan .)

Your code never leaves your machine. All analysis runs locally. Zero network requests in --no-ai mode.

Cloud mode (API)

  1. Encrypted upload: AES-256-GCM encryption, TLS 1.3 in transit
  2. Presigned URLs: Single-use, 15-minute expiry, scoped to your scan
  3. Auto-delete: Source code deleted immediately after scan completes
  4. Maximum retention: 1 hour (Cron Trigger sweep catches orphaned objects)
  5. No persistent storage: Only findings and scores are retained, never source code
  6. No training: No LLM provider trains on API-submitted data
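The single-use, time-boxed upload policy (items 2–4) can be modeled in a few lines. This is an illustrative sketch of the policy's logic, not the actual R2 presigned-URL implementation:

```typescript
// Sketch of the upload-grant policy: valid only within a 15-minute
// window, scoped to one scan, and redeemable exactly once.

interface UploadGrant {
  scanId: string;
  issuedAt: number; // epoch milliseconds
  used: boolean;
}

const FIFTEEN_MINUTES_MS = 15 * 60 * 1000;

function redeemGrant(grant: UploadGrant, scanId: string, now: number): boolean {
  if (grant.used) return false;                                // single-use
  if (now - grant.issuedAt > FIFTEEN_MINUTES_MS) return false; // expired
  if (grant.scanId !== scanId) return false;                   // scoped to one scan
  grant.used = true;
  return true;
}
```

In production the expiry and scoping live inside the signed URL itself, but the invariants are the same: a leaked URL is useless after 15 minutes, after first use, or against any other scan.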

See Security Posture for the full security architecture.


Can I use this for enterprise / HIPAA / SOC 2?

Current state (Phase 5)

  • Local CLI mode is suitable for regulated environments — code never leaves your network
  • Cloud mode encrypts data at rest and in transit, auto-deletes source code, and uses no-training API agreements
  • Self-scan CI workflow demonstrates internal security practices

Phase 6 plans

  • SOC 2 Type II compliance documentation
  • Enterprise SSO via Cloudflare Zero Trust
  • Data Processing Agreement (DPA) for cloud mode
  • HIPAA BAA availability (via Cloudflare's enterprise plan)
  • Audit logging for all scan operations

Recommendation

For HIPAA/SOC 2 today: use local CLI mode with the --no-ai flag. All analysis runs on your infrastructure with zero network calls. Phase 6 will bring cloud mode up to enterprise compliance standards.


What's the production readiness guarantee?

There isn't one yet. Last Mile 360 is in active development (Phase 6: Scale). The tool is functional, produces accurate findings, and is used in production by its creator — but it has not been independently audited, does not have published accuracy metrics, and is not covered by an SLA.

What you can rely on today:

  • ✅ The scanner runs and produces findings
  • ✅ SAST rules are deterministic (same input → same output)
  • ✅ CWE mappings follow published standards
  • ✅ The project self-scans and maintains a 95+ score
  • ✅ Local mode has zero network dependencies

What's coming:

  • Published precision/recall metrics per rule
  • Community validation against open-source projects
  • Comparison benchmarks with Snyk/SonarQube
  • Independent security audit of the scanner itself

The honest answer:

Use Last Mile as a complement to your existing tools, not a replacement. As accuracy metrics are published and community trust is established, it can take on a larger role. Shipping a security tool without published accuracy data and claiming it's production-ready would be irresponsible.


Is this free / open source?


How do I contribute?

See CONTRIBUTING.md in the repository. The most impactful contributions:

  1. New SAST rules with test fixtures and CWE mappings
  2. False positive reports with reproduction steps
  3. Framework-specific rule requests with example vulnerable patterns
  4. Documentation improvements to this wiki
  5. Real-world scan results (anonymized) to validate accuracy
