feat(model): Phase 1.A MTP head detection + scaffolding #171

Merged

github-actions[bot] merged 1 commit into main from perf/mtp-phase-1a on May 14, 2026
Conversation

kekzl (Owner) commented May 14, 2026

Summary

First step of the spec-decode MTP wiring project. Detects the `model_mtp.safetensors` sidecar shipped by DeepSeek-V3-family models (Qwen3.6, DeepSeek V3) during SafeTensors model load. Phase 1.A scope is detection + metadata only; tensor load and forward path come in Phases 1.B-5 (see the spec doc).

What this ships

  • `src/model/mtp_head.h` — new `MtpHeadInfo` struct (path, file size, tensor count)
  • `Model::mtp_info_` — `std::optional` field, populated when detected (both sketched after this list)
  • `safetensors_loader.cpp` — detection block after main load: reads file size + safetensors header → tensor count, logs "MTP head detected"
  • `docs/superpowers/specs/2026-05-14-mtp-wiring-design.md` — full 5-phase roadmap with BF16 weight layout, forward kernel design, verify-loop integration, and acceptance-rate validation plan
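
A minimal sketch of the new pieces, assuming the names listed above (`MtpHeadInfo`, `Model::mtp_info_`); the field types are illustrative, not the repo's definitions:
```cpp
// Sketch only: field types here are assumptions, not the repo's definitions.
#include <cstddef>
#include <cstdint>
#include <optional>
#include <string>

struct MtpHeadInfo {
    std::string path;              // e.g. <model_dir>/model_mtp.safetensors
    std::uint64_t file_size = 0;   // bytes on disk
    std::size_t tensor_count = 0;  // parsed from the safetensors JSON header
};

class Model {
    // ...
    std::optional<MtpHeadInfo> mtp_info_;  // engaged only when the sidecar is found
};
```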

Validation

Qwen3.6-NVFP4 model load logs:
```
[INFO] MTP head detected: /models/Qwen3.6-35B-A3B-NVFP4/model_mtp.safetensors
(1.57 GiB, 19 tensors) — not yet wired (see docs/superpowers/specs/...)
```
Bench unchanged on Qwen3.6-NVFP4 (pp64=2034 tok/s, tg=179 tok/s, same as before). verify-fast green (decode +0.50%, prefill +1.71%).

What this does NOT ship

  • Loading `mtp.*` tensors (`llm_compressor_loader.cpp` still skips the prefix)
  • Forward pass kernel
  • CLI flag / config wiring
  • VRAM budget changes
  • Production use

These are Phases 1.B-5, each scoped in the design spec at 1-5 dev-days, for a total of 2-3 dev-weeks to production MTP spec-decode.

Background

Per memory `spec_decode_qwen36_broken_2026_05_02`: imp's current self-speculative decoding has ≈0% acceptance because it reuses the trained LM head on an intermediate hidden state with no early-exit adapter. Qwen3.6 (and DeepSeek V3) ship a trained MTP head as `model_mtp.safetensors` — this PR is the foundation for routing that head through imp's spec-decode infrastructure.

Test plan

  • Build clean (`make build`)
  • verify-fast green (decode +0.50%, prefill +1.71%)
  • MTP detection log fires on Qwen3.6-NVFP4
  • Bench unchanged on Qwen3.6-NVFP4

🤖 Generated with Claude Code

First step of the spec-decode MTP wiring project documented in
docs/superpowers/specs/2026-05-14-mtp-wiring-design.md.

Detects model_mtp.safetensors sidecar (DeepSeek-V3-family models — Qwen3.6,
DeepSeek V3) when loading SafeTensors models. Phase 1.A scope is detection
+ metadata only; tensor load and forward path come in Phases 1.B-5.

## What this ships

- `src/model/mtp_head.h` — new MtpHeadInfo struct (path, file size, tensor count)
- `Model::mtp_info_` — std::optional<MtpHeadInfo> field, populated when detected
- safetensors_loader.cpp — detection block after main load succeeds:
  reads file size + safetensors header → tensor count, logs "MTP head detected"
  (header parsing sketched after this list)
- Design spec: docs/superpowers/specs/2026-05-14-mtp-wiring-design.md
  (full 5-phase roadmap including the BF16 weight layout, forward kernel design,
  verify-loop integration, and acceptance-rate validation plan)
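
A minimal sketch of that detection pass, assuming only the standard safetensors layout: an 8-byte little-endian header length, then a JSON header in which each tensor entry carries a "dtype" key. Counting "dtype" occurrences is a simplistic stand-in for real JSON parsing, and the function name is made up:
```cpp
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <string>

static std::size_t count_mtp_tensors(const std::string& path, std::uint64_t& file_size) {
    std::ifstream f(path, std::ios::binary | std::ios::ate);
    if (!f) return 0;
    file_size = static_cast<std::uint64_t>(f.tellg());
    f.seekg(0);
    std::uint64_t header_len = 0;
    f.read(reinterpret_cast<char*>(&header_len), sizeof header_len);  // LE host assumed
    std::string header(header_len, '\0');
    f.read(header.data(), static_cast<std::streamsize>(header_len));  // C++17 data()
    std::size_t n = 0;
    for (auto pos = header.find("\"dtype\""); pos != std::string::npos;
         pos = header.find("\"dtype\"", pos + 1))
        ++n;
    return n;  // one "dtype" per tensor entry in the header
}
```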

## Validation

Qwen3.6-NVFP4 model load:
  [INFO] MTP head detected: /models/Qwen3.6-35B-A3B-NVFP4/model_mtp.safetensors
         (1.57 GiB, 19 tensors) — not yet wired (see docs/superpowers/specs/...)

Bench unchanged (pp64=2034 tok/s, tg=179 tok/s, same as before). verify-fast green
(decode -2.28%, within the 3% threshold; prefill +3.26%). No production behavior
change for any model.

## What this does NOT ship

- Loading mtp.* tensors (llm_compressor_loader.cpp still skips the prefix)
- Forward pass kernel
- CLI flag / config wiring
- VRAM budget changes
- Production use

These are Phases 1.B through 5 — each scoped in the design spec, each
1-5 dev-days, total 2-3 dev-weeks for production MTP spec-decode.

## Background

Per memory `spec_decode_qwen36_broken_2026_05_02`: imp's current self-speculative
decoding has ≈0% acceptance because it reuses the trained LM head on an
intermediate hidden state with no early-exit adapter. Qwen3.6 (and DeepSeek V3)
ship a TRAINED MTP head as `model_mtp.safetensors` — this PR is the foundation
for routing that head through imp's spec-decode infrastructure.
github-actions bot enabled auto-merge (squash) May 14, 2026 11:41
github-actions bot merged commit 536af79 into main May 14, 2026 (3 checks passed)
kekzl added a commit that referenced this pull request May 14, 2026
…lding

Builds on Phase 1.A (PR #171 detection) to ship a complete MTP scaffolding
stack: tensors load to GPU, reduced forward kernel runs, engine API exists,
CLI flag works, model produces output without crashes on Qwen3.6-NVFP4.

## Per-phase deliverables

**Phase 1.B — Tensor loading**
- `MtpHead` struct expanded from metadata-only to 19 named Tensor fields
  (pre_fc_norm_*, fc, input/post_attention_layernorm, q/k/v/o_proj,
  q/k_norm, router, experts_gate_up_packed, experts_down_packed,
  shared_expert_*, final_norm)
- `safetensors_loader::load_safetensors` runs a separate load pass on
  `model_mtp.safetensors` after main load, dispatches the 19 tensors to
  MtpHead fields by name (dispatch sketched below), retains the mmap via `Model::split_mmaps_`
- Translation NOT applied to MTP names (literal `mtp.*` preserved)
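
A hypothetical rendering of the dispatch-by-name step; the `Tensor` placeholder and the literal key names are illustrative, not this repo's definitions:
```cpp
#include <string>
#include <unordered_map>
#include <utility>

struct Tensor { /* device pointer, dtype, shape ... */ };

struct MtpHead {
    Tensor fc, router, final_norm;  // ... 19 named fields total, per this PR
};

static bool dispatch_mtp_tensor(MtpHead& head, const std::string& name, Tensor t) {
    // Names stay literal ("mtp.*"): no translation pass, as noted above.
    static const std::unordered_map<std::string, Tensor MtpHead::*> slots = {
        {"mtp.fc.weight", &MtpHead::fc},
        {"mtp.router.weight", &MtpHead::router},
        {"mtp.final_norm.weight", &MtpHead::final_norm},
        // ... remaining entries elided
    };
    auto it = slots.find(name);
    if (it == slots.end()) return false;  // unknown name: let the caller decide
    head.*(it->second) = std::move(t);
    return true;
}
```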

**Phase 1.C — Storage decision**
- BF16 retained on disk, converted to FP16 on GPU upload (matches main
  weights path). NVFP4 quant deferred — 1.6 GB FP16 cost is acceptable
  on a 32 GB GPU running a 35B model. (BF16 widening sketched below.)
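
The conversion rests on one format fact: a BF16 value is the top 16 bits of an IEEE float32, so widening is a 16-bit shift; the narrowing to FP16 then happens on upload (e.g. CUDA's `__float2half`). A minimal sketch, with an illustrative function name:
```cpp
#include <cstdint>
#include <cstring>

static float bf16_to_f32(std::uint16_t b) {
    std::uint32_t bits = static_cast<std::uint32_t>(b) << 16;  // low mantissa bits = 0
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}
```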

**Phase 2.1 — Reduced forward kernel** (`src/runtime/mtp_forward.{cu,h}`)
- `mtp_draft_step()`: emb → pre_fc_norm × 2 → concat → fc → final_norm
  → lm_head → argmax
- Workspace alloc/free helpers (`MtpDraftWorkspace`)
- **Phase 2.1 limitation**: transformer block (attention + 256-expert MoE)
  is SKIPPED. Compute is a passthrough of fc_out. Production correctness
  requires the full block (Phase 2.2 — genuinely multi-week to write the
  MoE forward from scratch). Acceptance rate will be far below trained
  optimum until Phase 2.2 lands. (A CPU sketch of the reduced step follows.)
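
To make the reduced pipeline concrete, a CPU sketch of the `mtp_draft_step()` data flow on plain float vectors; all helper names and signatures are illustrative, and the Phase 2.1 gap is marked where the skipped block would sit:
```cpp
#include <cmath>
#include <cstddef>
#include <vector>
using vec = std::vector<float>;

static vec rmsnorm(const vec& x, const vec& w, float eps = 1e-6f) {
    float ss = 0.f;
    for (float v : x) ss += v * v;
    const float scale = 1.f / std::sqrt(ss / x.size() + eps);
    vec y(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) y[i] = x[i] * scale * w[i];
    return y;
}

static vec matvec(const vec& W, const vec& x, std::size_t rows) {  // W: rows x x.size()
    vec y(rows, 0.f);
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < x.size(); ++c) y[r] += W[r * x.size() + c] * x[c];
    return y;
}

// emb: embedding of the last accepted token; hidden: main-model hidden state.
static int mtp_draft_step_sketch(const vec& emb, const vec& hidden,
                                 const vec& norm_emb_w, const vec& norm_hid_w,
                                 const vec& fc_w,         // h x 2h
                                 const vec& final_norm_w,
                                 const vec& lm_head_w,    // vocab x h
                                 std::size_t h, std::size_t vocab) {
    vec a = rmsnorm(emb, norm_emb_w);    // pre_fc_norm x 2: one per input
    vec b = rmsnorm(hidden, norm_hid_w);
    vec cat;
    cat.reserve(2 * h);
    cat.insert(cat.end(), a.begin(), a.end());
    cat.insert(cat.end(), b.begin(), b.end());
    vec fc_out = matvec(fc_w, cat, h);   // concat -> fc
    // Phase 2.1 limitation: attention + 256-expert MoE block SKIPPED here.
    vec logits = matvec(lm_head_w, rmsnorm(fc_out, final_norm_w), vocab);
    std::size_t best = 0;
    for (std::size_t i = 1; i < vocab; ++i)
        if (logits[i] > logits[best]) best = i;
    return static_cast<int>(best);       // drafted token id
}
```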

**Phase 3 — Engine API**
- `Engine::enable_mtp_spec_decode(int k)` + `Engine::mtp_draft_one(...)`
- `Engine::mtp_ws_storage_` field (type-erased to avoid a header include; sketched below)
- Workspace allocated on enable, freed on destroy
- Decode-loop auto-invocation deferred to Phase 3.5/Phase 4 production work
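
One way to get that type erasure is `std::shared_ptr<void>`, whose captured deleter remembers the concrete type, so the engine header never names the workspace; a sketch assuming this PR's names, with a stand-in workspace body:
```cpp
#include <memory>

struct MtpDraftWorkspace { int k; /* device buffers sized from k ... */ };

class Engine {
public:
    void enable_mtp_spec_decode(int k) {
        mtp_ws_storage_ = std::shared_ptr<void>(
            new MtpDraftWorkspace{k},
            [](void* p) { delete static_cast<MtpDraftWorkspace*>(p); });
    }
private:
    std::shared_ptr<void> mtp_ws_storage_;  // freed automatically on destroy
};
```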

**Phase 4 — CLI + C API**
- `--mtp-spec-decode K` CLI flag
- `imp_enable_mtp_spec_decode(ctx, k)` C API entry point
- main.cpp calls the C API after context creation if the flag is set (sketched below)
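
The flag and entry-point names come from this PR; the signature and the parsing loop below are assumptions for illustration:
```cpp
#include <cstdlib>
#include <cstring>

extern "C" int imp_enable_mtp_spec_decode(void* ctx, int k);  // C API entry point

static void wire_mtp_flag(void* ctx, int argc, char** argv) {
    for (int i = 1; i + 1 < argc; ++i)
        if (std::strcmp(argv[i], "--mtp-spec-decode") == 0)
            imp_enable_mtp_spec_decode(ctx, std::atoi(argv[i + 1]));  // K drafts
}
```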

**Phase 5 — End-to-end smoke test** (validated, this PR)
- Qwen3.6-NVFP4 + `--mtp-spec-decode 2`:
  - MTP head loads (1.57 GiB, 19 tensors, BF16)
  - GPU upload succeeds (19 allocations)
  - Spec-decode enabled (k=2, hidden=2048, vocab=248320, workspace allocated)
  - Model produces output (125 tok/s decode, no crashes)

## Production gaps documented
- Phase 2.2: full transformer block (multi-week)
- Phase 3.5: decode-loop auto-invocation
- Phase 5.5: acceptance rate measurement (needs 2.2 + 3.5)

## Validation
- verify-fast green (decode -0.28%, prefill +1.64%)
- Qwen3.6-NVFP4 smoke: end-to-end works without crashing
- No production behavior change without explicit `--mtp-spec-decode` flag

## Design spec
docs/superpowers/specs/2026-05-14-mtp-wiring-design.md (Phase 1.A PR #171)
github-actions bot pushed a commit that referenced this pull request May 14, 2026
…lding (#172)