# FAQ
## How is Last Mile different from Snyk, SonarQube, Dependabot, or ESLint?

Those are excellent tools, but they're point solutions:
| Tool | What it covers | What it misses |
|---|---|---|
| Snyk | Dependencies, container scanning | Database rules, infra config, observability, code quality |
| SonarQube | SAST, code smells, complexity | Secrets, dependencies, infrastructure, observability |
| Dependabot | Dependency updates | Everything else |
| ESLint | Code style, some patterns | Security, database, infrastructure, observability |
Last Mile 360 covers all five dimensions in a single scan with a single score. You don't need to configure 4 tools, read 4 dashboards, and mentally aggregate 4 sets of findings.
That said, Last Mile doesn't replace deep specialized tools. If you need Snyk's container scanning depth or SonarQube's 5,000+ rules, use them alongside Last Mile. We focus on breadth + actionability over depth.
## Why build on Cloudflare Workers instead of AWS?

Three reasons:
1. **Runtime security.** Workers run in V8 isolates: no filesystem, no network sockets, no shared memory. That is a smaller attack surface than containers or VMs (EC2, ECS, Lambda), which share more of the underlying host. For a security product, the runtime security model matters.
2. **No origin servers.** AWS architectures require origin servers (EC2, ECS, Fargate), and even "serverless" Lambda runs on managed servers you implicitly trust. Cloudflare Workers have no origin: code runs at the edge, period. This eliminates an entire class of infrastructure vulnerabilities.
3. **First-party integration.** D1, R2, KV, Queues, Workers AI, AI Gateway, and Vectorize are all first-party services with native bindings. On AWS, you'd be wiring together Lambda, DynamoDB, S3, SQS, SageMaker, and a custom proxy with IAM policies, VPCs, and security groups. Cloudflare's integration is tighter and simpler.
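To make "native bindings" concrete, here is a minimal sketch of the pattern: services arrive as properties on the Worker's `env` object, with no SDK clients or IAM wiring. The interface shape, binding names, and `recordScan` function are illustrative assumptions, not Last Mile's actual code.

```typescript
// Hypothetical sketch: first-party services surface as bindings on `env`.
// Interfaces are trimmed to the methods used; names are assumptions.
interface Env {
  ARTIFACTS: { put(key: string, value: string): Promise<void> }; // R2 bucket
  SCAN_QUEUE: { send(msg: object): Promise<void> };              // Queue producer
}

export async function recordScan(env: Env, scanId: string, score: number): Promise<void> {
  // Each binding is called directly; the platform handles auth and routing.
  await env.ARTIFACTS.put(`scans/${scanId}.json`, JSON.stringify({ score }));
  await env.SCAN_QUEUE.send({ scanId, score });
}
```

The equivalent AWS code would construct S3 and SQS clients, resolve credentials, and depend on IAM policies being correct at deploy time.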
## Why not Vercel?

Vercel is great for frontend deployment, but it doesn't have Durable Objects, Queues, D1, Vectorize, Workers AI, AI Gateway, or Zero Trust. Last Mile needs a full-stack platform, not a frontend host.
## Why not self-hosted GPUs for inference?

See Inference Strategy for the full reasoning. The short version:
- GPUs are attack surface. A server running CUDA drivers is a server that needs patching.
- Operational burden. Managing OOM kills, model weight downloads, and VRAM allocation is not our value proposition.
- API inference is better for our use case. Structured input → structured output, no streaming, cacheable.
- Cost is predictable. Per-token > per-hour when usage is bursty.
- Redundancy. A 4-tier fallback chain is more reliable than a single GPU server.
## What happens if an LLM provider goes down?

Last Mile has a 4-tier fallback chain:
Claude API → Workers AI → OpenAI → Gemini → Rule-only mode
If every LLM provider is down simultaneously (unlikely but possible), the scanner falls back to rule-only mode: the 14 SAST rules, dependency auditing, and all agent checks still run without AI. You get a complete scan — just without the contextual severity assessment that LLMs provide.
The scanner degrades gracefully. LLM analysis is a quality enhancement, not a hard dependency.
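The tiered degradation described above can be sketched as a loop over providers with a deterministic floor. The `Provider` shape, provider names, and `analyzeWithFallback` function are illustrative assumptions, not the scanner's actual internals.

```typescript
// Hypothetical sketch of a tiered LLM fallback chain: try each provider in
// order; if every tier fails, fall back to deterministic rule-only analysis.
type Provider = { name: string; analyze(code: string): Promise<string> };

export async function analyzeWithFallback(
  code: string,
  providers: Provider[],
  ruleOnly: (code: string) => string,
): Promise<{ tier: string; result: string }> {
  for (const p of providers) {
    try {
      return { tier: p.name, result: await p.analyze(code) };
    } catch {
      // Provider unavailable or errored: degrade to the next tier.
    }
  }
  // All LLM tiers failed: rules still produce a complete scan.
  return { tier: "rule-only", result: ruleOnly(code) };
}
```

The key property is that the last tier cannot fail: it is local and deterministic, which is what makes the LLM layer an enhancement rather than a hard dependency.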
## What happens to my source code?

In local CLI mode, your code never leaves your machine: all analysis runs locally, and with the `--no-ai` flag the CLI makes zero network requests. Cloud mode adds these protections:
- Encrypted upload: AES-256-GCM encryption, TLS 1.3 in transit
- Presigned URLs: Single-use, 15-minute expiry, scoped to your scan
- Auto-delete: Source code deleted immediately after scan completes
- Maximum retention: 1 hour (Cron Trigger sweep catches orphaned objects)
- No persistent storage: Only findings and scores are retained, never source code
- No training: No LLM provider trains on API-submitted data
See Security Posture for the full security architecture.
## What about regulated environments (HIPAA, SOC 2)?

Available today:

- Local CLI mode is suitable for regulated environments: code never leaves your network
- Cloud mode encrypts data at rest and in transit, auto-deletes source code, and uses no-training API agreements
- Self-scan CI workflow demonstrates internal security practices
Planned for Phase 6:

- SOC 2 Type II compliance documentation
- Enterprise SSO via Cloudflare Zero Trust
- Data Processing Agreement (DPA) for cloud mode
- HIPAA BAA availability (via Cloudflare's enterprise plan)
- Audit logging for all scan operations
For HIPAA/SOC 2 today: use local CLI mode with the `--no-ai` flag. All analysis runs on your infrastructure with zero network calls. Phase 6 will bring cloud mode up to enterprise compliance standards.
## Is there an accuracy benchmark?

There isn't one yet. Last Mile 360 is in active development (Phase 6: Scale). The tool is functional, produces accurate findings, and is used in production by its creator, but it has not been independently audited, does not have published accuracy metrics, and is not covered by an SLA.
What you can verify today:

- ✅ The scanner runs and produces findings
- ✅ SAST rules are deterministic (same input → same output)
- ✅ CWE mappings follow published standards
- ✅ The project self-scans and maintains a 95+ score
- ✅ Local mode has zero network dependencies
Not yet available:

- Published precision/recall metrics per rule
- Community validation against open-source projects
- Comparison benchmarks with Snyk/SonarQube
- Independent security audit of the scanner itself
Use Last Mile as a complement to your existing tools, not a replacement. As accuracy metrics are published and community trust is established, it can take on a larger role. Shipping a security tool without published accuracy data and claiming it's production-ready would be irresponsible.
## What does it cost, and is it open source?

- CLI: Open source, free for local scanning
- Cloud API: Free tier planned, paid tiers for teams and enterprises
- License: See the repository LICENSE file
- Source: github.com/itallstartedwithaidea/last-mile
## How can I contribute?

See CONTRIBUTING.md in the repository. The most impactful contributions:
- New SAST rules with test fixtures and CWE mappings
- False positive reports with reproduction steps
- Framework-specific rule requests with example vulnerable patterns
- Documentation improvements to this wiki
- Real-world scan results (anonymized) to validate accuracy
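To give contributors a feel for the first item, here is a hypothetical sketch of what a SAST rule with test fixtures and a CWE mapping might look like. The `Rule` shape, the rule id, and `runRule` are illustrative assumptions, not the project's actual plugin API; consult CONTRIBUTING.md for the real contract.

```typescript
// Hypothetical sketch of a contributed SAST rule: a deterministic pattern
// check with a CWE mapping plus vulnerable/safe test fixtures.
interface Finding { ruleId: string; cwe: string; line: number }
interface Rule {
  id: string;
  cwe: string;
  pattern: RegExp;
  fixtures: { vulnerable: string; safe: string };
}

export const hardcodedSecretRule: Rule = {
  id: "LM-EXAMPLE-001", // placeholder id
  cwe: "CWE-798",       // Use of Hard-coded Credentials
  pattern: /(?:api[_-]?key|secret)\s*[:=]\s*["'][A-Za-z0-9_\-]{16,}["']/i,
  fixtures: {
    vulnerable: `const apiKey = "sk_live_0123456789abcdef";`,
    safe: `const apiKey = process.env.API_KEY;`,
  },
};

// Deterministic: the same source always yields the same findings.
export function runRule(rule: Rule, source: string): Finding[] {
  return source.split("\n").flatMap((text, i) =>
    rule.pattern.test(text) ? [{ ruleId: rule.id, cwe: rule.cwe, line: i + 1 }] : [],
  );
}
```

A rule submission would pair the pattern with fixtures like these, so reviewers can confirm it fires on the vulnerable sample and stays silent on the safe one.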