Context
Through v0.7, sheaf's focus has been capabilities: 20+ model types, Ray Serve + Modal + offline batch paths, batching/caching/streaming/Feast, logging/metrics/tracing. The remaining gap is friction — installing, deploying, and evaluating sheaf is harder than it needs to be for new users. v0.8 shifts from capabilities to adoption.
Scope
1. Prebuilt Docker images
   - Dockerfile (core + per-extra variants), installed via `uv sync`
   - Published to `ghcr.io/korbonits/sheaf` on tag: `:0.8.0`, `:0.8.0-time-series`, `:0.8.0-audio`, `:0.8.0-vision`, `:0.8.0-all`
2. Docs site (mkdocs-material → GitHub Pages)
   - Draws on `examples/`
3. Benchmarks (`bench/`)
4. Helm chart (`charts/sheaf/`)
5. CLI DX (stretch)
   - `sheaf serve <backend> --model <id>`: run a ModelServer without writing a spec file
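The "core + per-extra variants" item above can be served by a single parameterized Dockerfile rather than one file per extra. This is a sketch only, assuming a `pyproject.toml`-based layout with named extras matching the image tags; the base image, extra names, and entrypoint are illustrative, not sheaf's actual build.

```dockerfile
# Sketch: one Dockerfile, parameterized by optional-dependency group.
# Assumptions: python:3.11-slim base, extras named after the image tags,
# a `sheaf` console entrypoint.
FROM python:3.11-slim

# Which extra to install; empty builds the core image.
# `uv sync --all-extras` would cover the :0.8.0-all variant.
ARG SHEAF_EXTRA=""

WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN pip install uv && \
    if [ -n "$SHEAF_EXTRA" ]; then uv sync --extra "$SHEAF_EXTRA"; \
    else uv sync; fi

COPY . .
ENTRYPOINT ["sheaf"]
```

Each variant then comes from the same file, e.g. `docker build --build-arg SHEAF_EXTRA=time-series -t ghcr.io/korbonits/sheaf:0.8.0-time-series .`, which keeps the variants from drifting apart.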
Non-goals
- New model backends (capabilities track)
- New feature store integrations (separate issue)
- ONNX / Triton interop
Risks
- CI bandwidth — GPU-extra Docker builds are large; consider nightly rather than per-PR
- Docs drift — auto-generation covers API reference; prose sections will rot without discipline
- Benchmark controversy — framework comparisons invite pushback; lean on reproducibility + honesty about where sheaf loses
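The "nightly rather than per-PR" mitigation for the large GPU-extra builds could look like the following GitHub Actions fragment. Workflow, job, and tag names are illustrative assumptions, not sheaf's actual CI.

```yaml
# Sketch: move the heavy GPU-extra image build off the PR critical path.
name: nightly-gpu-images
on:
  schedule:
    - cron: "0 6 * * *"   # once a day instead of per-PR
  workflow_dispatch: {}    # still allow manual runs when debugging
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build all-extras image
        run: docker build --build-arg SHEAF_EXTRA=all -t ghcr.io/korbonits/sheaf:nightly-all .
```

Per-PR CI can keep building only the small core image, so contributors still get fast feedback.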
Success criteria
- New user gets Chronos2 running end-to-end in <5 minutes from `docker run`
- Docs site live at a stable URL
- ≥1 published benchmark comparing against ≥2 alternatives
- Helm chart installable via `helm install`
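The benchmark criterion above, combined with the "lean on reproducibility" risk note, suggests the `bench/` harness should report percentiles plus environment provenance rather than a single average. A minimal stdlib-only sketch, deliberately sheaf-agnostic (`predict` stands in for whatever callable a benchmark wraps, whether a sheaf ModelServer or a competing framework):

```python
"""Sketch of a reproducible latency harness for bench/ (names are illustrative)."""
import platform
import statistics
import time


def bench(predict, payload, warmup=10, iters=100):
    """Time `predict(payload)` and report p50/p95 latency in milliseconds."""
    for _ in range(warmup):              # exclude cold-start effects
        predict(payload)
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        predict(payload)
        samples.append((time.perf_counter() - t0) * 1e3)
    q = statistics.quantiles(samples, n=100)  # 99 cut points
    return {
        "p50_ms": q[49],
        "p95_ms": q[94],
        "iters": iters,
        "python": platform.python_version(),  # provenance, for reproducibility
    }


if __name__ == "__main__":
    print(bench(lambda x: sum(range(x)), 10_000))
```

Publishing the raw samples and the environment fields alongside the percentiles is what makes "honesty about where sheaf loses" cheap to verify.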