diff --git a/.gitignore b/.gitignore
index 82f4340..98da21e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -8,6 +8,17 @@ out/
build/
*.local
+# Portfolio screenshots under docs/screenshots/renders/ — the contact
+# sheet HTML is a build intermediate (capture.mjs uses it to render the
+# contact-sheet.png), so it stays out of the repo.
+docs/screenshots/renders/contact-sheet.html
+
+# Portfolio deck — the editable PPTX is too large for the 2MB large-files
+# guard. Rebuild via `cd docs/deck && npm run build`. The PDF preview
+# fits under the guard and is committed.
+docs/deck/AssistSupport-LinkedIn-Live.pptx
+docs/deck/thumbs/
+
# Tauri / Rust
src-tauri/target/
**/target/
diff --git a/docs/case-study.md b/docs/case-study.md
new file mode 100644
index 0000000..91e0007
--- /dev/null
+++ b/docs/case-study.md
@@ -0,0 +1,228 @@
+# AssistSupport · A Local-First IT Support Agent on a Mac
+
+A case study in building an AI support assistant that's fast,
+auditable, and private — by choosing the _unfashionable_ tool at every
+decision point.
+
+AssistSupport is a Tauri 2 + React + Rust desktop app that drafts
+KB-grounded responses to IT support tickets. The ML stack — intent
+classifier, hybrid retrieval, cross-encoder reranker, local LLM — runs
+end-to-end on the operator's laptop. No cloud round trip, no per-seat
+pricing, no tenant data leaking across a wire.
+
+This case study walks through three architectural decisions that cut
+against the industry default, and explains why each one was
+load-bearing for the product.
+
+## The problem
+
+Tier-1 IT support is the same conversation, replayed. "Can I use a
+flash drive?" "Why does Outlook keep crashing?" "How do I get Snowflake
+access?" About a quarter of tickets are policy or how-to questions
+already answered somewhere in the knowledge base. The operator's job is
+not to invent an answer — it's to _find the right KB article, write a
+human response that cites it, and paste it into Jira._
+
+Cloud AI support tools promise to automate this. In practice they add
+three problems the original workflow didn't have:
+
+1. **Data residency.** Every ticket — including anything the user
+ accidentally pastes — goes to a third-party tenant.
+2. **Confident hallucinations.** Large models will cheerfully invent
+ policy that doesn't exist. When the operator trusts the draft, IT
+ Security gets paged.
+3. **Per-seat pricing.** The economics only work if you eliminate
+ operators. But operators are also the reviewers — you can't.
+
+The target, then, is a tool that _sits next to the operator_, drafts
+an answer they can actually paste, cites real files, and stays quiet
+when it doesn't know. Running locally, because that's the only way to
+close the first two problems.
+
+## Decision 1 · Logistic regression over embeddings for intent
+
+Every ticket needs to be routed: **policy · howto · access · incident
+· runbook.** The routing decides which KB lane is searched, what
+clarifying questions the draft asks, and whether a human needs to
+approve the response before it ships.
+
+The industry default for intent classification in 2026 is a small
+sentence-transformer — `all-MiniLM-L6-v2` or similar — plus a dense
+vector cosine classifier. It's the path of least resistance: drop in
+a 22MB ONNX model, compute an embedding, nearest-neighbor against
+labeled examples. F1 in the low 0.90s, just works.
+
+AssistSupport ships **logistic regression over TF-IDF bigrams**.
+Here's why.
+
+**Latency.** The dense path takes 50–80ms for a single classification
+on CPU. That sounds fine until you realize the classifier runs
+_before retrieval_, and retrieval has its own 22ms budget, and the LLM
+hasn't even started yet. Every millisecond in the classifier pushes
+the time-to-first-token past the "feels instant" threshold. Logreg
+lands in 3ms, roughly 20× faster; the saved budget becomes headroom
+for the reranker.
+
+**Inspectability.** A dense vector classifier's decision is an
+inner-product between two tensors. When the routing is wrong, you
+can't explain _why_ to the operator or to IT Security. Logreg's
+decision is a sorted list of weighted tokens: `"flash drive" +2.41`,
+`"removable media" +1.96`, `"usb stick" +1.58`. When something routes
+oddly, you can see the reason on one screen — and you can fix it by
+adding training examples, not by retraining a model.
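
As a sketch of what that inspectability looks like in practice (the weights below are illustrative examples, not the shipped model's parameters), the per-token contributions can be surfaced directly:

```javascript
// Sketch: score one intent class and return a ranked audit trail of
// which tokens drove the decision. Weights are made-up examples.
function explainIntent(tokens, weights, bias = 0) {
  const contributions = tokens
    .map((t) => [t, weights[t] ?? 0])
    .filter(([, w]) => w !== 0)
    .sort((a, b) => Math.abs(b[1]) - Math.abs(a[1]));
  const logit = bias + contributions.reduce((sum, [, w]) => sum + w, 0);
  return { probability: 1 / (1 + Math.exp(-logit)), contributions };
}

const policyWeights = { "flash drive": 2.41, "removable media": 1.96, "usb stick": 1.58 };
const { contributions } = explainIntent(
  ["can", "i", "use", "a", "flash drive"],
  policyWeights,
);
// contributions: [["flash drive", 2.41]], a one-screen audit trail
```

The whole explanation is the weight table itself; there is no second model to interrogate.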
+
+**Calibration.** Softmax over logits is not a probability. At score
+0.80 in an uncalibrated classifier, the empirical hit rate is often
+0.60–0.70 — meaning 30–40% of "confident" routings are wrong. Logreg
+with Platt scaling ships a calibrated score: at 0.80 the empirical
+hit rate is 0.88. That matters because the _same score_ drives the
+trust-gate: low confidence → clarify mode. If the score lies, the
+gate lies.
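
A minimal version of that calibration step, assuming held-out pairs of raw score and routing-was-correct labels (the hyperparameters and data shape here are assumptions, not the shipped code), could look like:

```javascript
// Minimal Platt-scaling sketch: fit p = sigmoid(a*s + b) on held-out
// (score, correct) pairs with batch gradient descent on log loss.
function fitPlatt(scores, labels, lr = 0.5, iters = 2000) {
  let a = 1, b = 0;
  for (let it = 0; it < iters; it++) {
    let gradA = 0, gradB = 0;
    for (let i = 0; i < scores.length; i++) {
      const p = 1 / (1 + Math.exp(-(a * scores[i] + b)));
      const err = p - labels[i]; // dLoss/dLogit for log loss
      gradA += err * scores[i];
      gradB += err;
    }
    a -= (lr * gradA) / scores.length;
    b -= (lr * gradB) / scores.length;
  }
  return (s) => 1 / (1 + Math.exp(-(a * s + b)));
}
```

The calibrated score this returns is the number the trust gate compares against its 0.80 and 0.60 thresholds.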
+
+**Model size.** The dense transformer is 22MB quantized; the logreg
+is 4MB. Seems like nothing until you add it to the LLM footprint, the
+reranker, the TF-IDF index, SQLCipher data — and remember that the
+whole thing ships on a MacBook.
+
+The tradeoff is expressiveness. Bigrams don't capture "USB drive ≈
+flash drive" the way an embedding does. But the _next stage_ — hybrid
+retrieval — uses a cross-encoder that handles semantics, so the
+classifier doesn't need to. Pushing semantics to where the budget
+allows it is the architectural win.
+
+## Decision 2 · Hybrid TF-IDF + cross-encoder, not ANN
+
+After routing, the pipeline retrieves KB articles and reranks them
+for the LLM's context. The industry default is again dense vectors:
+embed every KB chunk with a sentence-transformer, build an ANN index
+(FAISS, hnswlib, or similar), cosine-search at query time.
+
+AssistSupport runs **TF-IDF candidate retrieval** (returns ~14
+candidates in 22ms) followed by **ms-marco-MiniLM-L-6-v2 cross-encoder
+rerank** (reduces to top-4 in 48ms on CPU). Same reason: latency
+budget + architectural trick.
+
+The dense-retrieval path forces you to either:
+
+- **Embed everything upfront at ingest.** Expensive one-time cost, but
+ also an ongoing cost — every KB article change re-embeds. With
+ 3,500+ articles and a nightly reindex budget of 46 seconds, this
+ doesn't fit.
+- **Embed on-the-fly.** ~100ms per embedding, times 10 queries per
+  draft session, times 40 drafts per operator per day: the added
+  latency hits every interaction, and the aggregate CPU compounds
+  across the team. Infeasible.
+
+The hybrid path embeds only the _query_ and _~14 candidates_ at draft
+time. TF-IDF's recall is high for IT support prose (specific
+technical terms dominate), and the cross-encoder restores semantic
+precision on a small candidate set. Net latency: 22ms + 48ms = 70ms.
+Net quality on the eval suite: NDCG@5 of 0.88, within 2 points of a
+full dense pipeline at 10× the cost.
+
+The architectural trick to internalize: **cross-encoders are slow
+_per document_, but cheap when you only give them 14 documents.** The
+cheap stage (TF-IDF) filters to a size where the expensive stage
+(cross-encoder) becomes affordable. Most dense-retrieval systems skip
+the filter and throw money at ANN hardware.
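
The shape of that filter-then-rerank pipeline, with toy stand-ins for both stages (the real system uses TF-IDF and the MiniLM cross-encoder; the scorers below are illustrative), is roughly:

```javascript
// Two-stage retrieval sketch. Stage 1 is a cheap lexical scorer that
// runs over the whole corpus; stage 2 is an expensive reranker that
// only ever sees k1 candidates. Both scorers are toy stand-ins.
function retrieve(query, corpus, rerank, k1 = 14, k2 = 4) {
  const qTerms = new Set(query.toLowerCase().split(/\s+/));
  const candidates = corpus
    .map((doc) => ({
      doc,
      lexical: doc.toLowerCase().split(/\s+/).filter((t) => qTerms.has(t)).length,
    }))
    .sort((a, b) => b.lexical - a.lexical)
    .slice(0, k1); // the expensive stage is bounded here
  return candidates
    .map(({ doc }) => ({ doc, score: rerank(query, doc) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k2)
    .map(({ doc }) => doc);
}
```

Whatever the reranker costs per document, total rerank cost is capped at `k1` calls per query; that is the budget argument in code.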
+
+## Decision 3 · Trust-gated drafts, not optimistic generation
+
+The third choice is less about technique and more about product.
+
+Most LLM-powered support tools **always answer.** The model is prompted
+to generate a response; if retrieval is weak, the response is just
+vaguer. The operator sees a draft; the operator copies the draft. By
+the time a hallucination is caught, it's in Jira.
+
+AssistSupport has three explicit modes — **answer, clarify, abstain**
+— gated on confidence and grounded-claim checks:
+
+- **Answer** (≥0.80 confidence, all claims grounded): the draft ships
+ with inline `[n]` citations. This is the common case.
+- **Clarify** (0.60–0.79 or partial grounding): the draft is a single
+ clarifying question back to the reporter. The operator can still
+ edit and send, but the default is to stop the conversation until
+ more data exists.
+- **Abstain** (below threshold or mostly unsupported claims): the draft
+ refuses, surfaces the ticket as a KB-gap candidate, and the operator
+ takes over. Abstain fires on ~8% of tickets — the operator never
+ sees a plausible-but-wrong draft.
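
As a sketch, the mode selection reduces to a small pure function. The 0.80 and 0.60 thresholds come from the bullets above; treating "partial grounding" as a 0.5 floor is an assumption for illustration:

```javascript
// Trust-gate sketch mapping (calibrated confidence, grounding) to a mode.
// The grounding floor of 0.5 is an assumed value, not the shipped one.
function gateMode(confidence, supported, total) {
  const grounding = total === 0 ? 0 : supported / total;
  if (confidence >= 0.8 && grounding === 1) return "answer";
  if (confidence >= 0.6 && grounding >= 0.5) return "clarify";
  return "abstain";
}
```

Because the gate is a pure function of two numbers, every routing decision is reproducible after the fact from the logged inputs.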
+
+Two technical pieces make this work:
+
+**Inline citations are generated _into_ the prompt, not post-hoc.**
+The LLM can't cite a document it didn't see; the retrieved chunks are
+numbered and handed to it, and the prompt template instructs it to
+include `[n]` markers. If the generated response lacks citations, it's
+not stripped — it's flagged as unsupported and the mode drops to
+abstain.
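
A sketch of how chunks might be numbered into the prompt so that `[n]` can only reference documents the model actually saw (the template wording below is illustrative, not the shipped prompt):

```javascript
// Prompt-assembly sketch: number the retrieved chunks inline so the
// model's [n] citations are constrained to documents it was shown.
function buildPrompt(ticket, chunks) {
  const sources = chunks
    .map((c, i) => `[${i + 1}] ${c.title}\n${c.text}`)
    .join("\n\n");
  return [
    "Answer the ticket using ONLY the numbered sources below.",
    "Mark every factual claim with its source, e.g. [1].",
    "",
    "SOURCES:",
    sources,
    "",
    "TICKET:",
    ticket,
  ].join("\n");
}
```
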
+
+**Grounded-claim checks run per-sentence.** The draft is split into
+sentences; each sentence is checked against the retrieved chunks for
+textual or semantic overlap. Sentences that match at least one chunk
+are "supported"; the ratio `supported / total` gates the mode. At
+6/7 supported claims the draft ships with a visible "6/7 claims
+supported" meter in the triage rail — the operator sees the gap
+before they paste.
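
A minimal version of the per-sentence check, using token overlap only (the 0.5 overlap threshold is an assumption, and the semantic-overlap half is omitted here):

```javascript
// Per-sentence grounding sketch: a sentence is "supported" when enough
// of its tokens appear in at least one retrieved chunk.
function groundingRatio(draft, chunks, threshold = 0.5) {
  const tokens = (s) => new Set(s.toLowerCase().match(/[a-z0-9]+/g) ?? []);
  const chunkTokens = chunks.map(tokens);
  const sentences = draft.split(/(?<=[.!?])\s+/).filter((s) => s.trim());
  const supported = sentences.filter((sent) => {
    const st = tokens(sent);
    if (st.size === 0) return false;
    return chunkTokens.some((ct) => {
      const hits = [...st].filter((t) => ct.has(t)).length;
      return hits / st.size >= threshold;
    });
  });
  return { supported: supported.length, total: sentences.length };
}
```

The returned pair is exactly what the triage rail renders as the "6/7 claims supported" meter.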
+
+This looks like over-engineering until you realize **the KB-gap
+dashboard (see [04-kb-gap.png](screenshots/renders/04-kb-gap.png))
+is powered by the abstain signal.** Every abstention is a lead on
+what to write next. The feedback loop that makes the product
+self-improving depends on the trust gate being honest about what it
+doesn't know. Optimistic generation would silently bury these signals
+as "low-confidence answers that shipped anyway."
+
+## The compound effect
+
+Each decision on its own looks unfashionable. Stacked together, they
+form the product's moat:
+
+- The **3ms logreg** buys latency headroom for the **48ms cross-encoder**,
+ which delivers **NDCG@5 = 0.88**, which feeds the **8B local LLM**
+ enough context to draft at **1.2s end-to-end**.
+- The **trust gate** blocks unsupported drafts, which makes the
+ operator **actually trust** the tool, which makes them **rate
+ drafts** instead of skipping the feedback surface.
+- Ratings feed the **KB-gap analyzer**, which **prioritizes articles
+ to write**, which fills the gaps that caused the low-confidence
+ abstentions, which shifts the confidence distribution _right_ over
+ time.
+
+The product gets better because each piece is a **cheap**, **honest**
+component that composes. No single component is doing magic. The
+sum _is_ magic.
+
+## What I'd do again — and what I wouldn't
+
+**Would:**
+
+- **Tauri 2 over Electron.** Apple notarization, bundle size, Rust
+ FFI to the ML sidecar — iteration was 2–3× faster.
+- **Ollama as the LLM runtime.** Zero-maintenance dependency. Swap
+ models by changing a string. Everything else in the stack is tuned
+ to the model's interface, not its identity.
+- **SQLCipher from day one.** Retrofitting encryption is brutal.
+
+**Wouldn't:**
+
+- I'd spend more time on the **feedback UX** earlier. The KB-gap
+ analyzer only works if operators actually click thumbs-up/down, and
+ _any_ UX friction collapses that signal. A single-click rating is
+ the lesson I'd start from.
+- I'd build the **eval harness before the product.** Shipping a
+ grounding regression in prod, then scrambling to measure it after
+ the fact, was a bad trade.
+
+## Links
+
+- **Repo:** [github.com/saagpatel/AssistSupport](https://github.com/saagpatel/AssistSupport) (MIT, 229 commits)
+- **One-pager:** [docs/one-pager/AssistSupport-one-pager.pdf](one-pager/AssistSupport-one-pager.pdf)
+- **Deck:** [docs/deck/AssistSupport-LinkedIn-Live.pptx](deck/AssistSupport-LinkedIn-Live.pptx)
+- **Screenshot set:** [docs/screenshots/](screenshots/)
+- **Redesign handoff:** [docs/redesign/](redesign/)
+
+---
+
+_If there's one thing to take away: local-first is a UX decision, not
+just a security one. Your operators will trust the tool more because
+they can literally turn Wi-Fi off and it still works._
diff --git a/docs/deck/AssistSupport-LinkedIn-Live.pdf b/docs/deck/AssistSupport-LinkedIn-Live.pdf
new file mode 100644
index 0000000..73c4af5
Binary files /dev/null and b/docs/deck/AssistSupport-LinkedIn-Live.pdf differ
diff --git a/docs/deck/DEMO-VIDEO.md b/docs/deck/DEMO-VIDEO.md
new file mode 100644
index 0000000..bf17f50
--- /dev/null
+++ b/docs/deck/DEMO-VIDEO.md
@@ -0,0 +1,186 @@
+# 90-second Demo Video · Storyboard + Script
+
+A tight, async-consumable demo reel intended to live next to the
+one-pager on a portfolio site and in LinkedIn / social posts. This
+doc is the shot list, timing, and verbatim narration. Record once,
+cut twice.
+
+## Specs
+
+- **Runtime:** 90 seconds (hard budget; clips under 2 min play through on autoplay)
+- **Aspect:** 16:9, 1920×1080, 60 fps capture
+- **Audio:** voiceover over gentle ambient bed; no music with vocals
+- **Captions:** on by default (most LinkedIn viewers watch muted)
+- **End card:** 8 seconds · `github.com/saagpatel/AssistSupport` · MIT
+
+## Pre-flight
+
+```bash
+VITE_E2E_MOCK_TAURI=1 pnpm dev -- --port 1422
+# In the browser:
+localStorage.setItem("assistsupport.flag.ASSISTSUPPORT_REVAMP_WORKSPACE_HERO", "1");
+localStorage.setItem("assistsupport.flag.ASSISTSUPPORT_ENABLE_ADMIN_TABS", "1");
+# Reload. The hero workspace renders.
+```
+
+Recording tool: QuickTime Screen Recording with system audio muted
+(voice recorded separately into a clean mic), exported and
+composited in whatever video tool you prefer (iMovie / ScreenFlow /
+DaVinci). Cursor highlights recommended on all click interactions.
+
+## Shot list
+
+### Shot 1 · Cold open (0:00 – 0:08) · 8s
+
+**Visual:** Close-up on the AssistSupport mark in the nav, zoom out
+slowly to reveal the full workspace shell. No UI interaction.
+
+**Narration:**
+
+> _"IT support is the same conversation, replayed. AssistSupport is the
+> second brain that sits next to the operator — and runs entirely on
+> their laptop."_
+
+**Caption:** `AssistSupport — a local-first IT support agent`
+
+---
+
+### Shot 2 · Ticket paste (0:08 – 0:18) · 10s
+
+**Visual:** Paste the ticket text into the composer at normal typing
+speed. Cursor drops into the textarea; text appears line-by-line.
+The "Policy" intent chip illuminates automatically as the ML trace
+fires.
+
+**Narration:**
+
+> _"Drop in a ticket. The ML classifier routes it in three
+> milliseconds — policy, how-to, access, incident, or runbook — so
+> the retrieval hits the right lane."_
+
+**Caption:** `Logistic regression · 0.914 macro-F1 · 3 ms on-device`
+
+---
+
+### Shot 3 · Generate + stream (0:18 – 0:38) · 20s
+
+**Visual:** Cursor moves to Generate (or hit ⌘↵). Brief flash on the
+composer — retrieval runs. The answer starts streaming into the hero
+column. Camera slowly zooms in on the confidence gauge as it fills to
+86%. Inline `[1]` and `[2]` citation pills appear in the prose.
+
+**Narration:**
+
+> _"Retrieval is hybrid — TF-IDF filters the KB to fourteen
+> candidates in twenty-two milliseconds, then a cross-encoder
+> reranks to the top four. The draft streams in from a local
+> Llama 3.1-8B at roughly 42 tokens per second. Inline `[n]`
+> citations are generated directly into the prompt — the model
+> can't cite a document it didn't actually see."_
+
+**Caption:** `Hybrid retrieval · 22ms p50 · 46ms p95 · 3,500+ articles`
+
+---
+
+### Shot 4 · Click a citation (0:38 – 0:48) · 10s
+
+**Visual:** Cursor hovers over `[1]` in the prose — the pill lights
+up with the accent color. Click. The cited source entry highlights in
+the Cited sources block beneath the draft. Smooth.
+
+**Narration:**
+
+> _"Every citation is clickable — it jumps to the exact KB article
+> the claim came from. No hallucinations, no invented URLs."_
+
+**Caption:** `0.93 grounded · 0.96 faithful · 6 / 7 claims supported`
+
+---
+
+### Shot 5 · Feedback → KB gap (0:48 – 1:08) · 20s
+
+**Visual:** Cursor moves to the triage rail, clicks thumbs-up on the
+Feedback card. Quick wipe. Cut to the **Analytics tab** opening —
+KB Gap Analysis panel comes into frame. Camera pans over the ranked
+gap clusters: VPN on office Wi-Fi · Outlook on macOS 14.5 · macOS 14
+permissions drift.
+
+**Narration:**
+
+> _"When the operator rates a draft — or when the model abstains
+> because the KB doesn't cover the question — that signal feeds a
+> self-improving loop. Low-confidence queries are clustered, ranked
+> by impact, and turned into a prioritized list of KB articles to
+> write. Every abstention is a lead on what's next."_
+
+**Caption:** `Self-improving · 14 gap clusters tracked · 87 tickets`
+
+---
+
+### Shot 6 · Privacy tell (1:08 – 1:22) · 14s
+
+**Visual:** Quick cut to macOS Settings → Network, Wi-Fi toggle off.
+Cut back to AssistSupport — the workspace still renders, user types a
+new ticket, Generate still works, draft streams in. A small "offline"
+badge could animate in for emphasis.
+
+**Narration:**
+
+> _"No cloud round trip. No tenant data leaving. No per-seat
+> pricing. Turn the Wi-Fi off — it still works. That's what
+> local-first means."_
+
+**Caption:** `SQLCipher AES-256 · 0 B data exfil · Tauri 2 + Rust`
+
+---
+
+### Shot 7 · End card (1:22 – 1:30) · 8s
+
+**Visual:** Solid dark (`#0B0D10`) background. Teal brand mark.
+Text stack:
+
+```
+AssistSupport
+Your support team's second brain
+github.com/saagpatel/AssistSupport · MIT
+```
+
+**Narration:** _(silent — let the repo URL breathe)_
+
+**Caption:** none — text is the message
+
+---
+
+## Timing summary
+
+| Shot | Runtime | Running total |
+| ---- | ------- | ------------- |
+| 1 | 0:08 | 0:08 |
+| 2 | 0:10 | 0:18 |
+| 3 | 0:20 | 0:38 |
+| 4 | 0:10 | 0:48 |
+| 5 | 0:20 | 1:08 |
+| 6 | 0:14 | 1:22 |
+| 7 | 0:08 | 1:30 |
+
+Running long? Cut shot 4 first (citation click) — the confidence
+gauge in shot 3 already carries the grounded-claims story.
+
+## Narration voice notes
+
+- **Pace:** slow. 90 seconds of narration is under 200 words. Don't fill silence.
+- **Tone:** engineering-professional, not salesy. No "unlock,"
+ "supercharge," "revolutionize."
+- **Accent word:** `local-first` (gentle emphasis, twice).
+- **Applause line:** the "Turn the Wi-Fi off — it still works"
+ sentence in shot 6. Half-second pause before it.
+
+## Distribution checklist
+
+- [ ] Upload to LinkedIn with 120-char caption:
+ _"AssistSupport — a local-first IT support agent. ML-routed,
+      KB-grounded drafts, sub-25ms retrieval. No cloud, no leaks. MIT."_
+- [ ] Pin the video on the AssistSupport README (above the fold)
+- [ ] Embed on portfolio site next to the [one-pager PDF](../one-pager/AssistSupport-one-pager.pdf)
+- [ ] Upload a 720p variant for Slack sharing (bandwidth-friendly)
+- [ ] Add YouTube mirror with chapter markers at shot boundaries
diff --git a/docs/deck/README.md b/docs/deck/README.md
new file mode 100644
index 0000000..2fa5059
--- /dev/null
+++ b/docs/deck/README.md
@@ -0,0 +1,99 @@
+# LinkedIn Live Deck — Running a local-first support agent on a Mac
+
+Session 4 of the AssistSupport portfolio pass. A 12-slide editable PPTX
+deck plus a rendered PDF for preview / distribution.
+
+## Output
+
+| File | Purpose |
+| -------------------------------------------------------------------- | ---------------------------------------------------- |
+| [AssistSupport-LinkedIn-Live.pptx](AssistSupport-LinkedIn-Live.pptx) | Editable deck for PowerPoint / Keynote / Slides. |
+| [AssistSupport-LinkedIn-Live.pdf](AssistSupport-LinkedIn-Live.pdf) | 12-page PDF render (via LibreOffice) for preview. |
+| [build.mjs](build.mjs) | pptxgenjs composer — re-run to regenerate the deck. |
+| [package.json](package.json) | Local deps (`pptxgenjs`) isolated from the app root. |
+
+## Slide outline
+
+| # | Title | Purpose |
+| --- | ------------------------------------------------------------------------- | --------------------------------------------------------------------------------- |
+| 01 | Running a local-first support agent on a Mac. | Title · speaker chip · positioning. |
+| 02 | IT support drowns in the same questions — and cloud AI isn't a clean fix. | The problem framing. |
+| 03 | A second brain — not a replacement. | Thesis: local-first · KB-grounded · trust-gated. |
+| 04 | The pipeline — five stages, all local. | Architecture: intent → retrieve → rerank → draft → learn, with latency per stage. |
+| 05 | The workspace — composer, answer, triage. | Demo pause — hero screenshot + 3 annotated callouts. |
+| 06 | Why logreg + TF-IDF beat embeddings here. | ML intent classifier: F1, latency, inspectability. |
+| 07 | Sub-25 ms retrieval over 3,500+ articles. | Hybrid search: TF-IDF + MiniLM-L-6 cross-encoder, latency budget diagram. |
+| 08 | The model is allowed to say "I don't know." | Trust gating: answer / clarify / abstain modes. |
+| 09 | Low-confidence queries become the KB backlog. | Self-improving feedback loop with KB gap dashboard. |
+| 10 | Yes, a desktop app needs a deploy story. | Ops surface + eval harness side-by-side. |
+| 11 | Five things I didn't expect. | Lessons — UX, prompt cache, logreg, Tauri, feedback. |
+| 12 | Questions? | Resources · repo · connect · tech-stack line. |
+
+Every slide includes **speaker notes** for the Live — shown in the
+Presenter View off-screen. The notes cover timing cues, pivot points,
+and the audience-specific call-outs the speaker should make.
+
+## Design-system continuity
+
+Same tokens as the Workspace redesign, the screenshot set, and the
+one-pager:
+
+- Background: `#0B0D10` (warm graphite)
+- Accent: teal `#4FD1C5` — the only decorative color
+- Status colors (good / warn / bad / info) used only on slide 08
+ (trust-gated modes) where the three borders actually carry meaning
+- Type: IBM Plex Sans + JetBrains Mono (named fonts — the deck will
+ fall back cleanly on machines without them installed)
+- Thin teal hairline across the top of every slide + subtle border
+ strip along the bottom — mirrors the active-nav indicator in the
+ app shell
+- Slide number chip (`01 / 12`) in the top-right of every slide, in
+ JetBrains Mono with teal accent on the current number
+
+The demo slides (05, 06, 09, 10) embed the portfolio screenshots
+directly from [`docs/screenshots/renders/`](../screenshots/renders/), so any
+re-run of session 2 flows through to the deck on the next `build.mjs`
+invocation.
+
+## Editable slides
+
+The deck is built with `pptxgenjs` using native text boxes + shapes +
+embedded PNGs — **not** image-background slides. A speaker can open
+the .pptx in PowerPoint, Keynote, or Google Slides and:
+
+- Edit any title or bullet text directly
+- Swap the speaker chip on slide 01
+- Replace individual screenshots without rebuilding
+- Re-time the talk by adding or removing slides
+
+This is deliberate — a LinkedIn Live rehearsal almost always surfaces
+wording tweaks, and painting slides onto images would block that.
+
+## Regenerating
+
+```bash
+cd docs/deck
+npm run build
+# optional: render a PDF preview
+soffice --headless --convert-to pdf AssistSupport-LinkedIn-Live.pptx
+```
+
+`pptxgenjs` is installed locally under `docs/deck/node_modules/` so it
+never pollutes the app's root `package.json`. The LibreOffice PDF step
+is optional — PowerPoint itself can export to PDF if preferred.
+
+## Portfolio pass — summary
+
+Session 4 closes the AssistSupport portfolio pass. The four artifacts
+read as a coherent product:
+
+1. **Session 1** — [Workspace redesign handoff bundle](../redesign/README.md): 3-region hero layout as React + CSS drop-in, behind a feature flag, zero new tokens.
+2. **Session 2** — [6-panel 2× screenshot set](../screenshots/README.md) + captions: portfolio-grade PNGs of the live product surfaces.
+3. **Session 3** — [Landscape-letter one-pager PDF](../one-pager/README.md): tagline, five feature pillars, three impact stats.
+4. **Session 4** — this deck: 12 slides for the LinkedIn Live walkthrough.
+
+One design system spans all four: warm-graphite surfaces, teal accent,
+IBM Plex Sans + JetBrains Mono, tokens sourced from
+[`src/styles/revamp/tokens.css`](../../src/styles/revamp/tokens.css).
+If that token file shifts, every artifact re-syncs from the same
+source.
diff --git a/docs/deck/REHEARSAL.md b/docs/deck/REHEARSAL.md
new file mode 100644
index 0000000..2351825
--- /dev/null
+++ b/docs/deck/REHEARSAL.md
@@ -0,0 +1,156 @@
+# LinkedIn Live Rehearsal Kit
+
+Companion to [AssistSupport-LinkedIn-Live.pptx](AssistSupport-LinkedIn-Live.pptx).
+Walks you through a **~30-minute talk** structure plus **10-minute
+Q&A**. Timing, pivots per audience, and anticipated questions per
+slide. Rehearse twice — once with video off reading through cues, once
+full-dress with the live demo.
+
+## Total budget — 40 min
+
+| Block | Target | Notes |
+| -------------- | ------ | -------------------------------------------------- |
+| Slides 01 – 03 | 5 min | Hook + framing. Do **not** spend more. |
+| Slide 04 | 4 min | Architecture — heaviest visual, slow down. |
+| Slide 05 | 5 min | **Live demo pause.** Budget extra for interaction. |
+| Slides 06 – 08 | 7 min | ML intent · hybrid search · trust gating. |
+| Slide 09 | 3 min | Feedback loop — the self-improving story. |
+| Slide 10 | 2 min | Ops + eval — quick pass for IT leaders. |
+| Slide 11 | 3 min | Lessons — pick 2 to dwell on by audience. |
+| Slide 12 + Q&A | 11 min | Open with one planted question if the room is quiet. |
+
+Running long? Cut slide 06 feature-weights and slide 08 "per-sentence
+match" details first — the thesis still lands.
+
+## Slide-by-slide cues
+
+### 01 · Title (40 sec)
+
+Open with a single sentence: _"AssistSupport is an IT support assistant
+that runs entirely on your laptop — no cloud, no tenant data leaking,
+sub-25ms retrieval, answers grounded in your own KB."_ Name-drop
+Tauri + Rust + local Ollama to anchor. Don't read the subtitle — let
+the slide do it.
+
+### 02 · The problem (60 sec)
+
+One bullet at a time. Land the **~25% of tickets** number verbally
+since it's the same number as the deflection stat on slide 12 —
+the audience should feel the callback. If the audience is IT
+leadership, linger on "per-seat pricing compounds." If engineering,
+linger on
+"can't debug why."
+
+### 03 · Thesis (90 sec)
+
+Read the three pillar heads out loud: _local-first · KB-grounded ·
+trust-gated_. The italic kicker at the bottom is the quote you'll
+revisit in Q&A — read it slowly: **"You don't need a foundation model
+on every desk. You need a pipeline that knows the KB cold, runs fast,
+and keeps the operator in the loop."**
+
+### 04 · Architecture (4 min)
+
+Slowest slide. Walk left-to-right through the 5 stages. At DRAFT
+(highlighted), pause — that's where the LLM lives. Call out the
+runtime line verbatim so the audience absorbs the dependency list. End on
+the stat row: **1.8s p95 · 22ms p50 · ~5GB · 0 B exfil**. The last stat
+(0 B) is the applause line — wait a beat.
+
+### 05 · Demo (5 min, interactive)
+
+Switch to screenshare / running app. Suggested script:
+
+1. Click into the composer, paste the real prompt from the deck
+ ("Can I use a flash drive...").
+2. Click a single intent chip so the ML trace lights up.
+3. Press **⌘↵**. Narrate the sub-25ms retrieval while it runs.
+4. When the draft streams in, hover a `[1]` citation — show the
+ source navigation.
+5. Thumbs-up, then click "Save template" to show the feedback loop.
+
+Fallback if something breaks: switch back to slide 05's annotated
+screenshot and walk the 3 callouts.
+
+### 06 · ML intent (90 sec)
+
+This is the **"why not embeddings?" slide**. Lead with: _"Logreg
+isn't a downgrade — it's a choice."_ Read the macro-F1 number. If
+someone asks during Q&A about BERT-level quality, point to the
+feature-weights visual and say _"every routing decision is
+inspectable — try getting that from a dense model."_
+
+### 07 · Hybrid search (2 min)
+
+Latency budget visual on the right is the anchor — point to each bar
+as you narrate. The key claim: _"Cross-encoder is slow but cheap
+here because it only sees 14 candidates."_ That's the architectural
+trick worth landing.
+
+### 08 · Trust gating (90 sec)
+
+Three mode cards — read the colored heads (ANSWER / CLARIFY / ABSTAIN)
+aloud. The landing line: _"The model is allowed to say 'I don't
+know.'"_ This is the IT-security applause line.
+
+### 09 · Feedback loop (3 min)
+
+The screenshot is the hero. Walk through the 5 bullets top-to-bottom.
+Emphasize: _"Every abstained query is a lead on what to write next."_
+That turns a negative (abstention) into a positive (KB backlog item).
+
+### 10 · Ops (2 min)
+
+Two screenshots side by side. Quick pass: _"Yes, a desktop app needs
+a deploy story."_ Name the **90-second rollback SLO** and the eval
+gate thresholds (grounding ≥ 0.90 · faithfulness ≥ 0.95).
+
+### 11 · Lessons (3 min)
+
+Pick **two** lessons to dwell on based on audience:
+
+- **IT leaders / security** → #1 (local-first UX), #3 (inspectable logreg)
+- **ML engineers** → #2 (prompt-cache), #3 (logreg vs embeddings)
+- **Platform / desktop devs** → #4 (Tauri), #5 (one-click rating)
+
+Read the rest aloud but keep moving.
+
+### 12 · Q&A (10 min)
+
+Leave it on screen. Repeat the repo URL verbally: _"github dot com
+slash saag patel slash AssistSupport."_ Don't fill silence — count 3
+seconds after the first hand.
+
+## Anticipated questions (by slide)
+
+| From slide | Q | A (short form) |
+| ---------- | ------------------------------------------------ | -------------------------------------------------------------------------------- |
+| 04 | Why llama3.1-8b and not a 70B? | 5GB memory fits on any M-series · 1.2s draft is the budget · good enough. |
+| 04 | Can you swap the model? | Ollama backend · any chat-tuned model works · settings UI handles download. |
+| 06         | Why not BERT / sentence-transformers for intent? | Latency (~20×) · model size (~5×) · logreg is inspectable. Comparable F1.        |
+| 07 | Won't TF-IDF miss semantic matches? | That's what the cross-encoder is for · it reranks on semantic similarity. |
+| 08         | How do you force inline citations?               | Citations are generated _into_ the prompt · flagged (not stripped) if missing → abstain. |
+| 09 | Who writes the KB articles? | Operators · the gap analyzer just prioritizes what to write. |
+| 10 | Does the eval harness run per-commit? | Yes · release gate blocks on grounding/faithfulness/intent thresholds. |
+| general | Why not cloud? | Data residency · zero per-seat · tenant isolation is a single laptop. |
+| general | How does it scale to 1000 operators? | It doesn't need to · each laptop is independent · shared KB via file sync. |
+| general | What's the privacy story? | SQLCipher AES-256 · no network calls during inference · audited outbound. |
+| general | Open source? | Yes · MIT · github.com/saagpatel/AssistSupport. |
+
+## Opening line options (pick one per run)
+
+1. _"I built an IT support assistant that runs entirely on my laptop. No cloud, no tenant data leaving, sub-25ms retrieval. This is how it works."_
+2. _"Most AI support tools are expensive, leaky, and opaque. Today I'll show you one that's none of those — because it lives on your Mac."_
+3. _"The last time I deployed an AI support system, three things went wrong. Today I'll show what we built so they can't go wrong again."_
+
+## Closing line options
+
+1. _"Everything you just saw is MIT-licensed, all 229 commits of it. Fork it, ship your own."_
+2. _"If there's one thing to take away: local-first is a UX decision, not just a security one. Your operators will trust the tool more."_
+3. _"Support will always be repetitive. The question is whether repetition is suffered by humans or compiled down into a pipeline. Thanks for watching."_
+
+## Dry-run checklist
+
+- [ ] Fonts — IBM Plex Sans + JetBrains Mono installed locally (PowerPoint will fall back otherwise)
+- [ ] Recording — Presenter View enabled so notes show on your laptop, slides on the stream
+- [ ] Demo — `pnpm dev` + `VITE_E2E_MOCK_TAURI=1` + `VITE_ASSISTSUPPORT_REVAMP_WORKSPACE_HERO=1` primed before you start
+- [ ] Fallback — [panels/01-workspace.html](../screenshots/panels/01-workspace.html) open in a spare browser tab in case the demo breaks
+- [ ] Network — confirm LinkedIn Live upload path works 10 minutes before going live
+- [ ] Camera — framed with the AssistSupport logo or a whiteboard in the background, not a messy desk
+- [ ] Water — within reach; 40 minutes is longer than you think
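+
+The demo-priming step in the checklist can be kept as a single paste-ready
+command. A minimal sketch, assuming the Vite dev server is started from the
+repo root of the pnpm workspace (adjust the working directory if yours
+differs):
+
+```shell
+# Launch the dev build with mocked Tauri IPC and the workspace hero flag —
+# the exact combination the checklist above calls for.
+VITE_E2E_MOCK_TAURI=1 \
+VITE_ASSISTSUPPORT_REVAMP_WORKSPACE_HERO=1 \
+pnpm dev
+```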
diff --git a/docs/deck/build.mjs b/docs/deck/build.mjs
new file mode 100644
index 0000000..9ecedcb
--- /dev/null
+++ b/docs/deck/build.mjs
@@ -0,0 +1,1264 @@
+/**
+ * build.mjs — compose the 12-slide LinkedIn Live deck.
+ *
+ * Run:
+ * cd docs/deck && npm run build
+ *
+ * Outputs:
+ * docs/deck/AssistSupport-LinkedIn-Live.pptx
+ *
+ * Design system:
+ * - Background: warm-graphite dark (#0b0d10 base with #141a22
+ *   surfaces; the gradient look is approximated with solid fills,
+ *   since pptxgenjs has no native gradient-fill support)
+ * - Accent: teal #4fd1c5
+ * - Type: IBM Plex Sans (fallback Calibri), JetBrains Mono (fallback Consolas)
+ * - Slides are native text boxes + shapes + embedded PNGs, so the
+ * speaker can edit titles/bullets in PowerPoint before the Live.
+ */
+
+import PptxGenJS from "pptxgenjs";
+import { dirname, join, resolve } from "node:path";
+import { fileURLToPath } from "node:url";
+
+const __dirname = dirname(fileURLToPath(import.meta.url));
+const ROOT = resolve(__dirname, "..", "..");
+const SHOT = (n) => join(ROOT, "docs", "screenshots", "renders", n);
+
+// =========================================================
+// TOKENS
+// =========================================================
+const C = {
+ bg: "0B0D10",
+ bg2: "141A22",
+ surface: "1B2330",
+ border: "262E3B",
+ text1: "F2F5F8",
+ text2: "B8C0CC",
+ text3: "7A8494",
+ accent: "4FD1C5",
+ accentDark: "2AA198",
+ good: "2DD4BF",
+ warn: "FBBF24",
+ bad: "FB7185",
+ info: "60A5FA",
+};
+const FONT = "IBM Plex Sans";
+const MONO = "JetBrains Mono";
+
+// =========================================================
+// DECK SETUP — 16:9 widescreen, 13.333 × 7.5 in
+// =========================================================
+const pptx = new PptxGenJS();
+pptx.layout = "LAYOUT_WIDE"; // 13.333 × 7.5 in
+pptx.title = "Running a local-first support agent on a Mac";
+pptx.author = "Saagar Patel";
+pptx.company = "AssistSupport";
+pptx.subject = "LinkedIn Live — portfolio-grade IT support assistant";
+
+const W = 13.333;
+const H = 7.5;
+
+// Shared master slide: dark background + thin teal accent line + footer
+pptx.defineSlideMaster({
+ title: "BASE",
+ background: { color: C.bg },
+ objects: [
+ // Top accent line
+ {
+ rect: {
+ x: 0,
+ y: 0,
+ w: W,
+ h: 0.04,
+ fill: { color: C.accent },
+ line: { color: C.accent, width: 0 },
+ },
+ },
+ // Bottom thin border strip
+ {
+ rect: {
+ x: 0,
+ y: H - 0.35,
+ w: W,
+ h: 0.02,
+ fill: { color: C.border },
+ line: { color: C.border, width: 0 },
+ },
+ },
+ // Footer left: brand
+ {
+ text: {
+ text: "AssistSupport",
+ options: {
+ x: 0.55,
+ y: H - 0.3,
+ w: 3,
+ h: 0.25,
+ fontFace: FONT,
+ fontSize: 9,
+ color: C.text3,
+ bold: true,
+ charSpacing: 2,
+ },
+ },
+ },
+ // Footer center: talk title
+ {
+ text: {
+ text: "Running a local-first support agent on a Mac",
+ options: {
+ x: 3.5,
+ y: H - 0.3,
+ w: 6.5,
+ h: 0.25,
+ fontFace: FONT,
+ fontSize: 9,
+ color: C.text3,
+ align: "center",
+ },
+ },
+ },
+ // Page numbering is handled per-slide via pageChip() — no
+ // master-level slideNumber to avoid double-numbering.
+ ],
+});
+
+// =========================================================
+// HELPERS
+// =========================================================
+
+/**
+ * Add a numbered page chip (e.g. "01 / 12") in the top-right corner.
+ */
+function pageChip(slide, n, total = 12) {
+ slide.addText(
+ [
+ {
+ text: `${String(n).padStart(2, "0")}`,
+ options: { color: C.accent, bold: true },
+ },
+ { text: ` / ${total}`, options: { color: C.text3 } },
+ ],
+ {
+ x: W - 1.2,
+ y: 0.25,
+ w: 0.9,
+ h: 0.3,
+ fontFace: MONO,
+ fontSize: 10,
+ align: "right",
+ charSpacing: 2,
+ },
+ );
+}
+
+/**
+ * Add the eyebrow label above a slide title, e.g. "CHAPTER 02".
+ */
+function eyebrow(slide, text, x = 0.55, y = 0.55) {
+ slide.addText(text, {
+ x,
+ y,
+ w: 10,
+ h: 0.3,
+ fontFace: MONO,
+ fontSize: 10,
+ color: C.accent,
+ bold: true,
+ charSpacing: 3,
+ });
+}
+
+/**
+ * Add the slide title (the big h1).
+ */
+function title(slide, text, opts = {}) {
+ slide.addText(text, {
+ x: opts.x ?? 0.55,
+ y: opts.y ?? 0.9,
+ w: opts.w ?? W - 1.6,
+ h: opts.h ?? 1.1,
+ fontFace: FONT,
+ fontSize: opts.fontSize ?? 34,
+ color: C.text1,
+ bold: true,
+ valign: "top",
+ charSpacing: -1,
+ });
+}
+
+/**
+ * Add a bulleted body list.
+ */
+function bullets(slide, items, opts = {}) {
+ slide.addText(
+ items.map((t) => ({ text: t, options: { bullet: { code: "2022" } } })),
+ {
+ x: opts.x ?? 0.55,
+ y: opts.y ?? 2.4,
+ w: opts.w ?? W - 1.6,
+ h: opts.h ?? 4,
+ fontFace: FONT,
+ fontSize: opts.fontSize ?? 16,
+ color: C.text2,
+ paraSpaceAfter: 10,
+ valign: "top",
+ },
+ );
+}
+
+/**
+ * Add a horizontal stat row — 2-4 stat cards. `h` defaults to 1.6;
+ * callers can pass a smaller height when vertical space is tight.
+ */
+function statRow(slide, stats, y = 5.3, h = 1.6) {
+ const n = stats.length;
+ const totalW = W - 1.1;
+ const gap = 0.2;
+ const w = (totalW - gap * (n - 1)) / n;
+ const valueFont = h >= 1.5 ? 36 : 30;
+ const valueH = h >= 1.5 ? 0.7 : 0.55;
+ const noteY = h >= 1.5 ? 1.15 : 0.95;
+ stats.forEach((s, i) => {
+ const x = 0.55 + i * (w + gap);
+ slide.addShape(pptx.shapes.ROUNDED_RECTANGLE, {
+ x,
+ y,
+ w,
+ h,
+ fill: { color: C.surface, transparency: 40 },
+ line: { color: C.border, width: 0.5 },
+ rectRadius: 0.1,
+ });
+ slide.addText(s.label, {
+ x: x + 0.2,
+ y: y + 0.12,
+ w: w - 0.4,
+ h: 0.28,
+ fontFace: MONO,
+ fontSize: 10,
+ color: C.accent,
+ bold: true,
+ charSpacing: 2,
+ });
+ slide.addText(s.value, {
+ x: x + 0.2,
+ y: y + 0.4,
+ w: w - 0.4,
+ h: valueH,
+ fontFace: MONO,
+ fontSize: valueFont,
+ color: C.text1,
+ bold: true,
+ charSpacing: -2,
+ });
+ slide.addText(s.note, {
+ x: x + 0.2,
+ y: y + noteY,
+ w: w - 0.4,
+ h: 0.4,
+ fontFace: FONT,
+ fontSize: 10.5,
+ color: C.text2,
+ valign: "top",
+ });
+ });
+}
+
+/**
+ * Add the speaker-notes text the presenter sees off-screen during the Live.
+ */
+function notes(slide, text) {
+ slide.addNotes(text);
+}
+
+// =========================================================
+// SLIDE 01 — TITLE
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 1);
+
+ s.addText("● LIVE · portfolio walkthrough", {
+ x: 0.55,
+ y: 2.9,
+ w: 6,
+ h: 0.4,
+ fontFace: MONO,
+ fontSize: 12,
+ color: C.accent,
+ charSpacing: 3,
+ });
+
+ s.addText("Running a local-first", {
+ x: 0.55,
+ y: 3.25,
+ w: 11,
+ h: 1.1,
+ fontFace: FONT,
+ fontSize: 64,
+ color: C.text1,
+ bold: true,
+ charSpacing: -2,
+ });
+ s.addText(
+ [
+ { text: "support agent ", options: { color: C.text1 } },
+ { text: "on a Mac.", options: { color: C.accent } },
+ ],
+ {
+ x: 0.55,
+ y: 4.25,
+ w: 11,
+ h: 1.1,
+ fontFace: FONT,
+ fontSize: 64,
+ bold: true,
+ charSpacing: -2,
+ },
+ );
+
+ s.addText(
+ "How AssistSupport drafts KB-grounded IT support responses in under 25 ms — without a single query leaving the laptop.",
+ {
+ x: 0.55,
+ y: 5.6,
+ w: 10,
+ h: 0.9,
+ fontFace: FONT,
+ fontSize: 15,
+ color: C.text2,
+ valign: "top",
+ },
+ );
+
+ // Speaker chip
+ s.addShape(pptx.shapes.ROUNDED_RECTANGLE, {
+ x: 0.55,
+ y: 6.55,
+ w: 5.2,
+ h: 0.5,
+ fill: { color: C.surface, transparency: 30 },
+ line: { color: C.border, width: 0.5 },
+ rectRadius: 0.08,
+ });
+ s.addText(
+ [
+ { text: "Saagar Patel", options: { color: C.text1, bold: true } },
+ { text: " · IT Platform Eng · Box", options: { color: C.text3 } },
+ ],
+ {
+ x: 0.75,
+ y: 6.55,
+ w: 5,
+ h: 0.5,
+ fontFace: FONT,
+ fontSize: 12,
+ valign: "middle",
+ },
+ );
+
+ notes(
+ s,
+ [
+ "Welcome — we're going to walk through AssistSupport, a Tauri + React + Rust IT support assistant that runs entirely on a laptop.",
+ "No cloud round trips. No queries leave the machine. The whole ML pipeline — classifier, retrieval, reranker, generation — is local.",
+ "Format for the next ~30 minutes: show the product, then the architecture, then what I learned shipping it.",
+ ].join("\n"),
+ );
+}
+
+// =========================================================
+// SLIDE 02 — THE PROBLEM
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 2);
+ eyebrow(s, "CHAPTER 01 · THE PROBLEM");
+ title(s, "IT support drowns in the same questions — and cloud AI isn't a clean fix.");
+
+ bullets(
+ s,
+ [
+ "Every IT team repeats itself: ~25% of tickets are policy / howto questions already answered in the KB.",
+ "Cloud LLMs promise automation but add three sharp costs: data leaves the tenant, hallucinations look confident, and per-seat pricing compounds.",
+ "Vendor assistants hide their routing and retrieval — when they answer wrong, you can't debug why.",
+ "The real bar: draft something a human would actually paste into Jira, cite where the claim came from, and be honest when the KB doesn't know.",
+ ],
+ { y: 2.3, fontSize: 17 },
+ );
+
+ notes(
+ s,
+ "Set the frame: this isn't 'replace IT with AI' — it's 'give IT a second brain that's cheap, auditable, and local'.",
+ );
+}
+
+// =========================================================
+// SLIDE 03 — THE THESIS
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 3);
+ eyebrow(s, "CHAPTER 02 · THESIS");
+ title(s, "A second brain — not a replacement.");
+
+ // Three-pillar grid
+ const pillars = [
+ {
+ head: "LOCAL-FIRST",
+ body: "App, sidecar, classifier, retrieval, reranker, and LLM all run on-device. SQLCipher AES-256 at rest. Zero data leaves the machine.",
+ },
+ {
+ head: "KB-GROUNDED",
+ body: "Every draft cites real KB articles. Hybrid retrieval over 3,500+ indexed docs; inline [n] markers you can click.",
+ },
+ {
+ head: "TRUST-GATED",
+ body: "Confidence modes (answer / clarify / abstain). The model is allowed to refuse when the KB doesn't cover the question.",
+ },
+ ];
+ pillars.forEach((p, i) => {
+ const x = 0.55 + i * 4.15;
+ s.addShape(pptx.shapes.ROUNDED_RECTANGLE, {
+ x,
+ y: 2.3,
+ w: 4,
+ h: 3.4,
+ fill: { color: C.surface, transparency: 40 },
+ line: { color: C.border, width: 0.5 },
+ rectRadius: 0.12,
+ });
+ s.addText(`0${i + 1}`, {
+ x: x + 0.25,
+ y: 2.45,
+ w: 1,
+ h: 0.4,
+ fontFace: MONO,
+ fontSize: 12,
+ color: C.accent,
+ bold: true,
+ });
+ s.addText(p.head, {
+ x: x + 0.25,
+ y: 2.9,
+ w: 3.6,
+ h: 0.5,
+ fontFace: FONT,
+ fontSize: 20,
+ color: C.text1,
+ bold: true,
+ charSpacing: -0.5,
+ });
+ s.addText(p.body, {
+ x: x + 0.25,
+ y: 3.55,
+ w: 3.6,
+ h: 2,
+ fontFace: FONT,
+ fontSize: 13,
+ color: C.text2,
+ valign: "top",
+ });
+ });
+
+ s.addText(
+ "You don't need a foundation model on every desk. You need a pipeline that knows the KB cold, runs fast, and keeps the operator in the loop.",
+ {
+ x: 0.55,
+ y: 6.0,
+ w: W - 1.1,
+ h: 0.8,
+ fontFace: FONT,
+ fontSize: 15,
+ color: C.text2,
+ italic: true,
+ valign: "top",
+ },
+ );
+
+ notes(
+ s,
+ "Frame the three pillars. They map 1:1 to the feature pillars on the one-pager.",
+ );
+}
+
+// =========================================================
+// SLIDE 04 — ARCHITECTURE
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 4);
+ eyebrow(s, "CHAPTER 03 · ARCHITECTURE");
+ title(s, "The pipeline — five stages, all local.");
+
+ // 5-stage pipeline diagram
+ const stages = [
+ { label: "INTENT", sub: "logreg", time: "3 ms" },
+ { label: "RETRIEVE", sub: "TF-IDF", time: "22 ms" },
+ { label: "RERANK", sub: "MiniLM", time: "48 ms" },
+ { label: "DRAFT", sub: "llama3.1-8b", time: "1.2 s" },
+ { label: "LEARN", sub: "feedback", time: "loop" },
+ ];
+ const boxW = 2.2;
+ const gap = 0.3;
+ const totalW = boxW * stages.length + gap * (stages.length - 1);
+ const startX = (W - totalW) / 2;
+ const stageY = 2.6;
+
+ stages.forEach((st, i) => {
+ const x = startX + i * (boxW + gap);
+ const isActive = i === 3;
+ s.addShape(pptx.shapes.ROUNDED_RECTANGLE, {
+ x,
+ y: stageY,
+ w: boxW,
+ h: 1.5,
+ fill: {
+ color: isActive ? C.accent : C.surface,
+ transparency: isActive ? 70 : 40,
+ },
+ line: {
+ color: isActive ? C.accent : C.border,
+ width: isActive ? 1.25 : 0.5,
+ },
+ rectRadius: 0.12,
+ });
+ s.addText(st.label, {
+ x,
+ y: stageY + 0.2,
+ w: boxW,
+ h: 0.35,
+ fontFace: MONO,
+ fontSize: 11,
+ color: isActive ? C.accent : C.text3,
+ bold: true,
+ align: "center",
+ charSpacing: 3,
+ });
+ s.addText(st.sub, {
+ x,
+ y: stageY + 0.6,
+ w: boxW,
+ h: 0.4,
+ fontFace: FONT,
+ fontSize: 17,
+ color: C.text1,
+ bold: true,
+ align: "center",
+ });
+ s.addText(st.time, {
+ x,
+ y: stageY + 1.05,
+ w: boxW,
+ h: 0.35,
+ fontFace: MONO,
+ fontSize: 12,
+ color: C.text2,
+ align: "center",
+ });
+ // Arrow between boxes
+ if (i < stages.length - 1) {
+ s.addShape(pptx.shapes.RIGHT_TRIANGLE, {
+ x: x + boxW + 0.06,
+ y: stageY + 0.65,
+ w: 0.2,
+ h: 0.2,
+ fill: { color: C.accent },
+ line: { color: C.accent, width: 0 },
+ rotate: 90,
+ });
+ }
+ });
+
+ // Context tray below pipeline
+ s.addText(
+ [
+ {
+ text: "Runtime: ",
+ options: { color: C.text3, bold: true, charSpacing: 2 },
+ },
+ { text: "Tauri 2 shell · Rust sidecar · React 19 frontend · Ollama (llama3.1-8b) · SQLCipher SQLite", options: { color: C.text2 } },
+ ],
+ {
+ x: 0.55,
+ y: 4.6,
+ w: W - 1.1,
+ h: 0.5,
+ fontFace: FONT,
+ fontSize: 13,
+ valign: "top",
+ },
+ );
+
+ statRow(
+ s,
+ [
+ { label: "END-TO-END P95", value: "1.8s", note: "full draft, hybrid search + 8B token gen" },
+ { label: "P50 HYBRID SEARCH", value: "22ms", note: "TF-IDF + MiniLM-L6 rerank" },
+ { label: "MEMORY FOOTPRINT", value: "~5GB", note: "llama3.1-8b q4 + app + indexes" },
+ { label: "DATA EXFIL", value: "0 B", note: "everything stays on the machine" },
+ ],
+ 5.4,
+ );
+
+ notes(
+ s,
+ [
+ "Walk through left to right. Emphasize the latency budget — each box is a specific choice (logreg over BERT for intent, MiniLM cross-encoder over dense ANN, llama3.1-8b over 70B).",
+ "Close with 'zero bytes leave the machine' — that's the tenant story.",
+ ].join("\n"),
+ );
+}
+
+// =========================================================
+// SLIDE 05 — DEMO: THE WORKSPACE
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 5);
+ eyebrow(s, "CHAPTER 04 · DEMO");
+ title(s, "The workspace — composer, answer, triage.");
+
+ // Hero screenshot
+  s.addImage({
+    path: SHOT("01-workspace.png"),
+    x: 0.55,
+    y: 2.25,
+    w: 8.3,
+    h: 5.19, // 16:10 ratio ≈ 8.3 × (1800/2880) = 5.19 — keeps the bottom edge on-slide
+    sizing: { type: "contain", w: 8.3, h: 5.19 },
+  });
+
+ // Annotation callouts on the right
+ const callouts = [
+ {
+ n: "01",
+ head: "Composer",
+ body: "Paste a ticket, pick the intent chip, set response length — ⌘↵ generates a draft.",
+ },
+ {
+ n: "02",
+ head: "Hero answer",
+ body: "16px / 1.65 prose at 70ch. Inline [n] pills click through to the cited source.",
+ },
+ {
+ n: "03",
+ head: "Triage rail",
+ body: "Workflow · signals · alternatives · feedback · context — all in one column.",
+ },
+ ];
+ callouts.forEach((c, i) => {
+    const y = 2.3 + i * 1.7; // 1.7-in pitch keeps the third card on the slide
+ s.addShape(pptx.shapes.ROUNDED_RECTANGLE, {
+ x: 9.4,
+ y,
+ w: 3.4,
+ h: 1.65,
+ fill: { color: C.surface, transparency: 40 },
+ line: { color: C.border, width: 0.5 },
+ rectRadius: 0.1,
+ });
+ s.addText(c.n, {
+ x: 9.55,
+ y: y + 0.1,
+ w: 0.6,
+ h: 0.3,
+ fontFace: MONO,
+ fontSize: 10,
+ color: C.accent,
+ bold: true,
+ charSpacing: 2,
+ });
+ s.addText(c.head, {
+ x: 9.55,
+ y: y + 0.4,
+ w: 3.1,
+ h: 0.35,
+ fontFace: FONT,
+ fontSize: 15,
+ color: C.text1,
+ bold: true,
+ });
+ s.addText(c.body, {
+ x: 9.55,
+ y: y + 0.78,
+ w: 3.1,
+ h: 0.85,
+ fontFace: FONT,
+ fontSize: 11,
+ color: C.text2,
+ valign: "top",
+ });
+ });
+
+ notes(
+ s,
+ "Live demo pause point. Scroll the draft, click a citation, show the hover state on a KB source.",
+ );
+}
+
+// =========================================================
+// SLIDE 06 — ML INTENT
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 6);
+ eyebrow(s, "CHAPTER 05 · ML INTENT");
+ title(s, "Why logreg + TF-IDF beat embeddings here.");
+
+ bullets(
+ s,
+ [
+ "Logistic regression over TF-IDF bigrams — 3 ms on-device, 0.914 macro-F1 across policy / howto / access / incident / runbook.",
+ "Calibrated with Platt scaling so the softmax score actually means what it claims — at ≥0.80 the hit rate is 0.88 empirically.",
+ "Feature weights are inspectable: every routing decision is a ranked list of tokens, not a dense vector. Easy to debug, retrain.",
+ "Dense embeddings would have matched F1 at ~50× the latency and ~500× the model size. Wrong tool for the budget.",
+ ],
+ { y: 2.3, w: 7.8, fontSize: 14 },
+ );
+
+ // Intent screenshot thumb
+ s.addImage({
+ path: SHOT("03-intent.png"),
+ x: 8.6,
+ y: 2.3,
+ w: 4.2,
+ h: 2.63,
+ sizing: { type: "contain", w: 4.2, h: 2.63 },
+ });
+ s.addText("Live classifier trace for AS-4218", {
+ x: 8.6,
+ y: 4.98,
+ w: 4.2,
+ h: 0.3,
+ fontFace: MONO,
+ fontSize: 9,
+ color: C.text3,
+ align: "center",
+ });
+
+ statRow(
+ s,
+ [
+ { label: "MACRO-F1", value: "0.914", note: "40-case eval suite #4812" },
+ { label: "LATENCY", value: "3 ms", note: "per ticket, on-device" },
+ { label: "MODEL SIZE", value: "4 MB", note: "vs 450MB+ for a small BERT" },
+ ],
+ 5.5,
+ );
+
+ notes(
+ s,
+ "Be ready for the 'why not embeddings' question. The answer: latency budget + auditability. Also note calibration matters more than raw F1 for routing.",
+ );
+}
+
+// =========================================================
+// SLIDE 07 — HYBRID SEARCH
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 7);
+ eyebrow(s, "CHAPTER 06 · HYBRID SEARCH");
+ title(s, "Sub-25 ms retrieval over 3,500+ articles.");
+
+ // Left column: explanation
+ bullets(
+ s,
+ [
+ "Stage 1 — TF-IDF returns ~14 candidates in 22 ms. Cheap, deterministic, no GPU.",
+ "Stage 2 — ms-marco-MiniLM-L-6-v2 cross-encoder reranks the candidates in 48 ms on CPU.",
+ "Top-4 survive into the draft as the LLM's context. Each one carries a citable title + heading path.",
+ "The reranker is the quality lever — TF-IDF alone would cite topically relevant but semantically wrong articles.",
+ ],
+ { y: 2.3, x: 0.55, w: 7, fontSize: 15 },
+ );
+
+ // Right: latency breakdown diagram
+ s.addShape(pptx.shapes.ROUNDED_RECTANGLE, {
+ x: 7.9,
+ y: 2.3,
+ w: 4.9,
+ h: 3.5,
+ fill: { color: C.surface, transparency: 40 },
+ line: { color: C.border, width: 0.5 },
+ rectRadius: 0.12,
+ });
+ s.addText("LATENCY BUDGET · p50", {
+ x: 8.1,
+ y: 2.45,
+ w: 4.5,
+ h: 0.3,
+ fontFace: MONO,
+ fontSize: 10,
+ color: C.accent,
+ bold: true,
+ charSpacing: 2,
+ });
+
+ const lat = [
+ { label: "Intent", v: 3, color: C.info },
+ { label: "TF-IDF retrieval", v: 22, color: C.accent },
+ { label: "MiniLM rerank", v: 48, color: C.good },
+ { label: "Context build", v: 4, color: C.warn },
+ ];
+  const totalMs = lat.reduce((sum, row) => sum + row.v, 0); // 77 ms at current numbers
+ lat.forEach((row, i) => {
+ const y = 2.85 + i * 0.65;
+ s.addText(row.label, {
+ x: 8.1,
+ y,
+ w: 1.8,
+ h: 0.4,
+ fontFace: FONT,
+ fontSize: 12,
+ color: C.text2,
+ valign: "middle",
+ });
+ // Bar
+ const barW = (row.v / totalMs) * 2.5;
+ s.addShape(pptx.shapes.ROUNDED_RECTANGLE, {
+ x: 9.9,
+ y: y + 0.1,
+ w: barW,
+ h: 0.2,
+ fill: { color: row.color },
+ line: { color: row.color, width: 0 },
+ rectRadius: 0.04,
+ });
+ s.addText(`${row.v} ms`, {
+ x: 12.1,
+ y,
+ w: 0.7,
+ h: 0.4,
+ fontFace: MONO,
+ fontSize: 11,
+ color: C.text1,
+ bold: true,
+ valign: "middle",
+ });
+ });
+
+ s.addText("End to retrieval: 77 ms · then LLM draft streams in 1.2 s", {
+ x: 8.1,
+ y: 5.45,
+ w: 4.5,
+ h: 0.3,
+ fontFace: MONO,
+ fontSize: 10,
+ color: C.text3,
+ });
+
+ statRow(
+ s,
+ [
+ { label: "P50 HYBRID SEARCH", value: "22ms", note: "TF-IDF candidate retrieval" },
+ { label: "P95 HYBRID SEARCH", value: "46ms", note: "measured on M3 MBP" },
+ { label: "KB ARTICLES", value: "3,500+", note: "local SQLite, 46s reindex" },
+ ],
+ 5.9,
+ 1.2,
+ );
+
+ notes(
+ s,
+ "Key message: cross-encoder is slow but cheap here because it only sees 14 candidates. That's the architectural trick.",
+ );
+}
+
+// =========================================================
+// SLIDE 08 — TRUST GATING
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 8);
+ eyebrow(s, "CHAPTER 07 · TRUST GATING");
+ title(s, "The model is allowed to say 'I don't know.'");
+
+ // Three mode cards
+ const modes = [
+ {
+ head: "ANSWER",
+ color: C.good,
+ body: "Confidence ≥ 0.80 and all claims grounded. Draft ships with inline [n] citations.",
+ },
+ {
+ head: "CLARIFY",
+ color: C.warn,
+ body: "0.60–0.79 or partial grounding. The draft asks one targeted clarifying question back.",
+ },
+ {
+ head: "ABSTAIN",
+ color: C.bad,
+ body: "Below threshold or unsupported. Flag the ticket as a KB gap candidate and surface to the operator.",
+ },
+ ];
+ modes.forEach((m, i) => {
+ const x = 0.55 + i * 4.15;
+ s.addShape(pptx.shapes.ROUNDED_RECTANGLE, {
+ x,
+ y: 2.3,
+ w: 4,
+ h: 2.4,
+ fill: { color: C.surface, transparency: 40 },
+ line: { color: m.color, width: 0.75 },
+ rectRadius: 0.12,
+ });
+ s.addText(m.head, {
+ x: x + 0.3,
+ y: 2.5,
+ w: 3.5,
+ h: 0.5,
+ fontFace: MONO,
+ fontSize: 13,
+ color: m.color,
+ bold: true,
+ charSpacing: 3,
+ });
+ s.addText(m.body, {
+ x: x + 0.3,
+ y: 3.1,
+ w: 3.5,
+ h: 1.5,
+ fontFace: FONT,
+ fontSize: 13,
+ color: C.text2,
+ valign: "top",
+ });
+ });
+
+ bullets(
+ s,
+ [
+ "Inline citations are generated into the prompt, not post-hoc — so the model can't cite a doc it didn't see.",
+ "Grounded-claims check runs a per-sentence match against retrieved chunks; unsupported sentences get flagged.",
+ "Operators thumbs-up / thumbs-down every draft. Thumbs-down feeds straight into the KB gap analyzer (next slide).",
+ ],
+ { y: 5.0, fontSize: 14 },
+ );
+
+ notes(
+ s,
+ "This is the section that lands with IT security audiences. Emphasize 'grounded' — citations are real files, not invented URLs.",
+ );
+}
+
+// =========================================================
+// SLIDE 09 — SELF-IMPROVING LOOP
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 9);
+ eyebrow(s, "CHAPTER 08 · FEEDBACK LOOP");
+ title(s, "Low-confidence queries become the KB backlog.");
+
+ // KB gap dashboard screenshot
+ s.addImage({
+ path: SHOT("04-kb-gap.png"),
+ x: 0.55,
+ y: 2.3,
+ w: 7.5,
+ h: 4.69,
+ sizing: { type: "contain", w: 7.5, h: 4.69 },
+ });
+
+ // Right column explanation
+ bullets(
+ s,
+ [
+ "Every abstained or low-confidence query lands in a cluster.",
+ "Clusters ranked by impact = affected tickets × retrieval miss rate.",
+ "Top clusters become a prioritized list of KB articles to write.",
+ "Writers fill the gap → next week's confidence distribution shifts right.",
+ "The loop is measurable: 14-day view shows grounded-vs-abstained trend.",
+ ],
+ { x: 8.4, y: 2.3, w: 4.4, h: 4.7, fontSize: 13 },
+ );
+
+ notes(
+ s,
+ "The compound story: every confident draft is a deflection, every abstention is a lead on what to write next. Both outcomes are wins.",
+ );
+}
+
+// =========================================================
+// SLIDE 10 — OPS SURFACE
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 10);
+ eyebrow(s, "CHAPTER 09 · OPS");
+ title(s, "Yes, a desktop app needs a deploy story.");
+
+ // Left: ops screenshot
+ s.addImage({
+ path: SHOT("05-ops.png"),
+ x: 0.55,
+ y: 2.3,
+ w: 6.2,
+ h: 3.87,
+ sizing: { type: "contain", w: 6.2, h: 3.87 },
+ });
+ s.addText("Deploy / rollback surface", {
+ x: 0.55,
+ y: 6.2,
+ w: 6.2,
+ h: 0.3,
+ fontFace: MONO,
+ fontSize: 10,
+ color: C.text3,
+ align: "center",
+ });
+
+ // Right: eval screenshot
+ s.addImage({
+ path: SHOT("06-eval.png"),
+ x: 7.0,
+ y: 2.3,
+ w: 5.8,
+ h: 3.62,
+ sizing: { type: "contain", w: 5.8, h: 3.62 },
+ });
+ s.addText("Eval harness · run #4812", {
+ x: 7.0,
+ y: 5.95,
+ w: 5.8,
+ h: 0.3,
+ fontFace: MONO,
+ fontSize: 10,
+ color: C.text3,
+ align: "center",
+ });
+
+ s.addText(
+ "Canary on 10% → guardrails on p95 latency, error rate, and grounding score → auto-promote. 90-second rollback SLO.",
+ {
+ x: 0.55,
+ y: 6.6,
+ w: W - 1.1,
+ h: 0.5,
+ fontFace: FONT,
+ fontSize: 14,
+ color: C.text2,
+ italic: true,
+ valign: "top",
+ },
+ );
+
+ notes(
+ s,
+ "Talk about the eval gate specifically — grounding ≥ 0.90, faithfulness ≥ 0.95, safety refusals 100%. These are release blockers.",
+ );
+}
+
+// =========================================================
+// SLIDE 11 — WHAT I LEARNED
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 11);
+ eyebrow(s, "CHAPTER 10 · LESSONS");
+ title(s, "Five things I didn't expect.");
+
+ const lessons = [
+ {
+ n: "01",
+ head: "Local-first is a UX decision, not just a security one.",
+ body: "Operators trust a tool more when they can literally turn their Wi-Fi off and it still works. The privacy story lands emotionally.",
+ },
+ {
+ n: "02",
+ head: "Prompt-cache hits are the real latency win.",
+ body: "The intent + retrieval output is cached per-ticket — second generations are 3× faster. Worth more than model quantization.",
+ },
+ {
+ n: "03",
+ head: "Logreg is not a downgrade — it's a feature.",
+ body: "Inspectable weights mean every routing decision is defensible. 'Why did you send this to the policy lane' has a concrete answer.",
+ },
+ {
+ n: "04",
+ head: "Tauri + Rust is the right desktop stack in 2026.",
+ body: "Bundle size, Apple notarization, and Rust FFI for the ML sidecar made iteration 2-3× faster than the Electron alternative.",
+ },
+ {
+ n: "05",
+ head: "The feedback loop only works if rating is one click.",
+ body: "Anything longer than thumbs-up / thumbs-down gets skipped. All the KB gap data comes from that single-click surface.",
+ },
+ ];
+ lessons.forEach((l, i) => {
+ const y = 2.3 + i * 0.88;
+ s.addText(l.n, {
+ x: 0.55,
+ y,
+ w: 0.7,
+ h: 0.5,
+ fontFace: MONO,
+ fontSize: 16,
+ color: C.accent,
+ bold: true,
+ valign: "top",
+ });
+ s.addText(l.head, {
+ x: 1.3,
+ y: y - 0.05,
+ w: 11.5,
+ h: 0.45,
+ fontFace: FONT,
+ fontSize: 16,
+ color: C.text1,
+ bold: true,
+ valign: "top",
+ });
+ s.addText(l.body, {
+ x: 1.3,
+ y: y + 0.4,
+ w: 11.5,
+ h: 0.5,
+ fontFace: FONT,
+ fontSize: 12,
+ color: C.text2,
+ valign: "top",
+ });
+ });
+
+ notes(
+ s,
+ "Pick one to spend extra time on depending on the audience: #1 for IT leaders, #2 for ML eng, #4 for devs.",
+ );
+}
+
+// =========================================================
+// SLIDE 12 — RESOURCES + Q&A
+// =========================================================
+{
+ const s = pptx.addSlide({ masterName: "BASE" });
+ pageChip(s, 12);
+
+ s.addText("● THANKS FOR WATCHING", {
+ x: 0.55,
+ y: 1.2,
+ w: 10,
+ h: 0.4,
+ fontFace: MONO,
+ fontSize: 12,
+ color: C.accent,
+ charSpacing: 3,
+ });
+
+ s.addText("Questions?", {
+ x: 0.55,
+ y: 1.7,
+ w: 12,
+ h: 1.4,
+ fontFace: FONT,
+ fontSize: 72,
+ color: C.text1,
+ bold: true,
+ charSpacing: -2,
+ });
+
+ s.addText(
+ "Open source · MIT licensed · runs on any M-series MacBook with Ollama installed.",
+ {
+ x: 0.55,
+ y: 3.2,
+ w: 12,
+ h: 0.6,
+ fontFace: FONT,
+ fontSize: 18,
+ color: C.text2,
+ },
+ );
+
+ // Resource cards
+ const resources = [
+ {
+ label: "REPO",
+ value: "github.com/saagpatel/AssistSupport",
+ note: "229 commits · v1.2.0 · MIT",
+ },
+ {
+ label: "DECK + ONE-PAGER",
+ value: "portfolio drop",
+ note: "PDF + slide deck + screenshot set",
+ },
+ {
+ label: "CONNECT",
+ value: "in/saagarpatel",
+ note: "DMs open — IT platform + local AI",
+ },
+ ];
+ resources.forEach((r, i) => {
+ const x = 0.55 + i * 4.15;
+ s.addShape(pptx.shapes.ROUNDED_RECTANGLE, {
+ x,
+ y: 4.3,
+ w: 4,
+ h: 1.9,
+ fill: { color: C.surface, transparency: 40 },
+ line: { color: C.border, width: 0.5 },
+ rectRadius: 0.12,
+ });
+ s.addText(r.label, {
+ x: x + 0.25,
+ y: 4.45,
+ w: 3.6,
+ h: 0.3,
+ fontFace: MONO,
+ fontSize: 10,
+ color: C.accent,
+ bold: true,
+ charSpacing: 2,
+ });
+ s.addText(r.value, {
+ x: x + 0.25,
+ y: 4.8,
+ w: 3.6,
+ h: 0.6,
+ fontFace: FONT,
+ fontSize: 16,
+ color: C.text1,
+ bold: true,
+ });
+ s.addText(r.note, {
+ x: x + 0.25,
+ y: 5.4,
+ w: 3.6,
+ h: 0.65,
+ fontFace: FONT,
+ fontSize: 11,
+ color: C.text2,
+ valign: "top",
+ });
+ });
+
+ s.addText(
+ "Built with Tauri 2 · React 19 · TypeScript · Rust · SQLCipher · Ollama · TF-IDF + MiniLM-L-6-v2 · logreg intent classifier",
+ {
+ x: 0.55,
+ y: 6.5,
+ w: W - 1.1,
+ h: 0.5,
+ fontFace: MONO,
+ fontSize: 11,
+ color: C.text3,
+ align: "center",
+ charSpacing: 1,
+ },
+ );
+
+ notes(
+ s,
+ "Leave this on screen for the full Q&A window. Repeat the repo URL verbally once or twice.",
+ );
+}
+
+// =========================================================
+// WRITE
+// =========================================================
+const outPath = join(__dirname, "AssistSupport-LinkedIn-Live.pptx");
+await pptx.writeFile({ fileName: outPath });
+console.log(`✓ wrote ${outPath}`);
diff --git a/docs/deck/package-lock.json b/docs/deck/package-lock.json
new file mode 100644
index 0000000..5889a6c
--- /dev/null
+++ b/docs/deck/package-lock.json
@@ -0,0 +1,171 @@
+{
+ "name": "assistsupport-deck",
+ "version": "0.0.1",
+ "lockfileVersion": 3,
+ "requires": true,
+ "packages": {
+ "": {
+ "name": "assistsupport-deck",
+ "version": "0.0.1",
+ "dependencies": {
+ "pptxgenjs": "^3.12.0"
+ }
+ },
+ "node_modules/@types/node": {
+ "version": "18.19.130",
+ "resolved": "https://registry.npmjs.org/@types/node/-/node-18.19.130.tgz",
+ "integrity": "sha512-GRaXQx6jGfL8sKfaIDD6OupbIHBr9jv7Jnaml9tB7l4v068PAOXqfcujMMo5PhbIs6ggR1XODELqahT2R8v0fg==",
+ "license": "MIT",
+ "dependencies": {
+ "undici-types": "~5.26.4"
+ }
+ },
+ "node_modules/core-util-is": {
+ "version": "1.0.3",
+ "resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz",
+ "integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==",
+ "license": "MIT"
+ },
+ "node_modules/https": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/https/-/https-1.0.0.tgz",
+ "integrity": "sha512-4EC57ddXrkaF0x83Oj8sM6SLQHAWXw90Skqu2M4AEWENZ3F02dFJE/GARA8igO79tcgYqGrD7ae4f5L3um2lgg==",
+ "license": "ISC"
+ },
+ "node_modules/image-size": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/image-size/-/image-size-1.2.1.tgz",
+ "integrity": "sha512-rH+46sQJ2dlwfjfhCyNx5thzrv+dtmBIhPHk0zgRUukHzZ/kRueTJXoYYsclBaKcSMBWuGbOFXtioLpzTb5euw==",
+ "license": "MIT",
+ "dependencies": {
+ "queue": "6.0.2"
+ },
+ "bin": {
+ "image-size": "bin/image-size.js"
+ },
+ "engines": {
+ "node": ">=16.x"
+ }
+ },
+ "node_modules/immediate": {
+ "version": "3.0.6",
+ "resolved": "https://registry.npmjs.org/immediate/-/immediate-3.0.6.tgz",
+ "integrity": "sha512-XXOFtyqDjNDAQxVfYxuF7g9Il/IbWmmlQg2MYKOH8ExIT1qg6xc4zyS3HaEEATgs1btfzxq15ciUiY7gjSXRGQ==",
+ "license": "MIT"
+ },
+ "node_modules/inherits": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
+ "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==",
+ "license": "ISC"
+ },
+ "node_modules/isarray": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz",
+ "integrity": "sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==",
+ "license": "MIT"
+ },
+ "node_modules/jszip": {
+ "version": "3.10.1",
+ "resolved": "https://registry.npmjs.org/jszip/-/jszip-3.10.1.tgz",
+ "integrity": "sha512-xXDvecyTpGLrqFrvkrUSoxxfJI5AH7U8zxxtVclpsUtMCq4JQ290LY8AW5c7Ggnr/Y/oK+bQMbqK2qmtk3pN4g==",
+ "license": "(MIT OR GPL-3.0-or-later)",
+ "dependencies": {
+ "lie": "~3.3.0",
+ "pako": "~1.0.2",
+ "readable-stream": "~2.3.6",
+ "setimmediate": "^1.0.5"
+ }
+ },
+ "node_modules/lie": {
+ "version": "3.3.0",
+ "resolved": "https://registry.npmjs.org/lie/-/lie-3.3.0.tgz",
+ "integrity": "sha512-UaiMJzeWRlEujzAuw5LokY1L5ecNQYZKfmyZ9L7wDHb/p5etKaxXhohBcrw0EYby+G/NA52vRSN4N39dxHAIwQ==",
+ "license": "MIT",
+ "dependencies": {
+ "immediate": "~3.0.5"
+ }
+ },
+ "node_modules/pako": {
+ "version": "1.0.11",
+ "resolved": "https://registry.npmjs.org/pako/-/pako-1.0.11.tgz",
+ "integrity": "sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw==",
+ "license": "(MIT AND Zlib)"
+ },
+ "node_modules/pptxgenjs": {
+ "version": "3.12.0",
+ "resolved": "https://registry.npmjs.org/pptxgenjs/-/pptxgenjs-3.12.0.tgz",
+ "integrity": "sha512-ZozkYKWb1MoPR4ucw3/aFYlHkVIJxo9czikEclcUVnS4Iw/M+r+TEwdlB3fyAWO9JY1USxJDt0Y0/r15IR/RUA==",
+ "license": "MIT",
+ "dependencies": {
+ "@types/node": "^18.7.3",
+ "https": "^1.0.0",
+ "image-size": "^1.0.0",
+ "jszip": "^3.7.1"
+ }
+ },
+ "node_modules/process-nextick-args": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz",
+ "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==",
+ "license": "MIT"
+ },
+ "node_modules/queue": {
+ "version": "6.0.2",
+ "resolved": "https://registry.npmjs.org/queue/-/queue-6.0.2.tgz",
+ "integrity": "sha512-iHZWu+q3IdFZFX36ro/lKBkSvfkztY5Y7HMiPlOUjhupPcG2JMfst2KKEpu5XndviX/3UhFbRngUPNKtgvtZiA==",
+ "license": "MIT",
+ "dependencies": {
+ "inherits": "~2.0.3"
+ }
+ },
+ "node_modules/readable-stream": {
+ "version": "2.3.8",
+ "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.8.tgz",
+ "integrity": "sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA==",
+ "license": "MIT",
+ "dependencies": {
+ "core-util-is": "~1.0.0",
+ "inherits": "~2.0.3",
+ "isarray": "~1.0.0",
+ "process-nextick-args": "~2.0.0",
+ "safe-buffer": "~5.1.1",
+ "string_decoder": "~1.1.1",
+ "util-deprecate": "~1.0.1"
+ }
+ },
+ "node_modules/safe-buffer": {
+ "version": "5.1.2",
+ "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz",
+ "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==",
+ "license": "MIT"
+ },
+ "node_modules/setimmediate": {
+ "version": "1.0.5",
+ "resolved": "https://registry.npmjs.org/setimmediate/-/setimmediate-1.0.5.tgz",
+ "integrity": "sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA==",
+ "license": "MIT"
+ },
+ "node_modules/string_decoder": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz",
+ "integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==",
+ "license": "MIT",
+ "dependencies": {
+ "safe-buffer": "~5.1.0"
+ }
+ },
+ "node_modules/undici-types": {
+ "version": "5.26.5",
+ "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
+ "integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==",
+ "license": "MIT"
+ },
+ "node_modules/util-deprecate": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz",
+ "integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==",
+ "license": "MIT"
+ }
+ }
+}
diff --git a/docs/deck/package.json b/docs/deck/package.json
new file mode 100644
index 0000000..cba1f7d
--- /dev/null
+++ b/docs/deck/package.json
@@ -0,0 +1,13 @@
+{
+ "name": "assistsupport-deck",
+ "version": "0.0.1",
+ "private": true,
+ "type": "module",
+ "description": "Local builder for the LinkedIn Live deck — kept out of the root package.json to avoid polluting app deps.",
+ "scripts": {
+ "build": "node build.mjs"
+ },
+ "dependencies": {
+ "pptxgenjs": "^3.12.0"
+ }
+}
diff --git a/docs/one-pager/AssistSupport-one-pager.pdf b/docs/one-pager/AssistSupport-one-pager.pdf
new file mode 100644
index 0000000..04759aa
Binary files /dev/null and b/docs/one-pager/AssistSupport-one-pager.pdf differ
diff --git a/docs/one-pager/AssistSupport-one-pager.png b/docs/one-pager/AssistSupport-one-pager.png
new file mode 100644
index 0000000..8893fbe
Binary files /dev/null and b/docs/one-pager/AssistSupport-one-pager.png differ
diff --git a/docs/one-pager/README.md b/docs/one-pager/README.md
new file mode 100644
index 0000000..623b0fa
--- /dev/null
+++ b/docs/one-pager/README.md
@@ -0,0 +1,92 @@
+# Landscape-Letter One-pager
+
+Session 3 of the AssistSupport portfolio pass. A single-page landscape
+letter (11in × 8.5in) that sits next to the screenshot set as the
+print-ready portfolio leave-behind.
+
+## Output
+
+| File | Purpose |
+| ---------------------------------------------------------- | ------------------------------------- |
+| [AssistSupport-one-pager.pdf](AssistSupport-one-pager.pdf) | Print-ready landscape letter, 1 page. |
+| [AssistSupport-one-pager.png](AssistSupport-one-pager.png) | 2× PNG preview (2112 × 1632) for web. |
+| [one-pager.html](one-pager.html) | Source. Regenerate via the script. |
+| [generate.mjs](generate.mjs) | Playwright-based PDF + PNG generator. |
+
+## Layout
+
+```
+┌──────────────────────────────────────────────────────────────────┐
+│ [A] AssistSupport ● runs on mac · Tauri 2 · portfolio │
+├──────────────────────────────────────────────────────────────────┤
+│ YOUR SUPPORT TEAM'S SECOND BRAIN │
+│ │
+│ ML-powered answers from ┌────────────────┐│
+│ your own knowledge base │ ││
+│ — in under 25ms, without │ HERO SHOT ││
+│ sending a single query to │ (workspace) ││
+│ the cloud. │ ││
+│ [Sub-paragraph explaining the stack…] └────────────────┘│
+│ │
+│ FIVE FEATURE PILLARS │
+│ [01 ML intent] [02 Hybrid] [03 Trust] [04 Feedback] [05 Local] │
+│ │
+│ ──────────────────────────────────────────────────────────────── │
+│ 25% │ <25ms │ 3,500+ │
+│ ticket deflection│ hybrid search p50 │ KB articles indexed │
+│ ──────────────────────────────────────────────────────────────── │
+│ Tauri · React · TS · Rust · SQLCipher · Ollama github.com/… │
+└──────────────────────────────────────────────────────────────────┘
+```
+
+## Design-system continuity
+
+Uses the same tokens as the Workspace redesign and the screenshot set:
+
+- Palette: `--as-surface-0/1`, `--as-glass-2`, teal `--as-accent-1`
+- Type: IBM Plex Sans + JetBrains Mono
+- Shell glow: radial gradients from `--as-glow-1`, `--as-glow-2`
+- Single accent: teal is the only decorative color; status colors (good
+ / warn / info) are not used on this page, which keeps the piece
+ visually calm for print.
+
+The hero screenshot embedded on the page is
+[`docs/screenshots/renders/01-workspace.png`](../screenshots/renders/01-workspace.png)
+from session 2 — if that screenshot changes, re-run
+[`generate.mjs`](generate.mjs) and the one-pager picks it up.
+
+## Content
+
+- **Tagline:** "Your support team's second brain" (eyebrow) +
+ "ML-powered answers from your own knowledge base — in under 25ms,
+ without sending a single query to the cloud." (headline)
+- **Five feature pillars:** ML intent classification · Sub-25ms hybrid
+ search · Trust-gated responses · Self-improving feedback loop ·
+ Local-first & encrypted. Each pillar carries a one-line body and
+ a small mono stat tag (e.g. `0.914 macro-F1`, `22ms p50`,
+ `0.93 grounded · 0.96 faithful`).
+- **Impact strip (3 columns):**
+ - **25%** ticket deflection — benchmark from prior Aisera deployment
+ - **<25ms** hybrid search p50 — measured on M3 MBP, eval run #4812
+ - **3,500+** KB articles indexed — nightly reindex, 46s
+- **Footer:** tech stack chips (Tauri 2 · React 19 · TypeScript · Rust
+ · SQLCipher · Ollama · TF-IDF + MiniLM) plus repo URL.
+
+## Regenerating
+
+```bash
+# from repo root
+node docs/one-pager/generate.mjs
+```
+
+PDF is written at 11in × 8.5in landscape with `preferCSSPageSize: true`
+so the CSS `@page` rule drives the sheet. Background colors are
+preserved via `-webkit-print-color-adjust: exact`. PNG preview is
+captured at 2× device pixel ratio (2112 × 1632) so the same source
+file doubles as a web-ready hero image.
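The `@page` rule that `preferCSSPageSize: true` defers to would look roughly like this (a sketch, assuming the sheet is declared directly in `one-pager.html`; the real stylesheet may differ):

```css
/* Sketch only: a landscape-letter sheet with zero margins, so the
   generator's `margin: 0` and the CSS agree on the printable area. */
@page {
  size: 11in 8.5in; /* landscape letter */
  margin: 0;
}

html {
  -webkit-print-color-adjust: exact; /* keep dark backgrounds in the PDF */
}
```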
+
+## What's next
+
+Session 4 turns the one-pager positioning, the feature pillars, and the
+screenshot set into a 12-slide deck for a LinkedIn Live titled
+_Running a local-first support agent on a Mac_.
diff --git a/docs/one-pager/generate.mjs b/docs/one-pager/generate.mjs
new file mode 100644
index 0000000..f26f67b
--- /dev/null
+++ b/docs/one-pager/generate.mjs
@@ -0,0 +1,66 @@
+/**
+ * generate.mjs — render one-pager.html to landscape-letter PDF + 2× PNG
+ * preview via headless Chromium.
+ *
+ * Run from the repo root:
+ * node docs/one-pager/generate.mjs
+ *
+ * Outputs:
+ * docs/one-pager/AssistSupport-one-pager.pdf (11in × 8.5in landscape)
+ * docs/one-pager/AssistSupport-one-pager.png (2112 × 1632 preview, 2×)
+ */
+
+import { chromium } from "@playwright/test";
+import { dirname, join } from "node:path";
+import { fileURLToPath, pathToFileURL } from "node:url";
+
+const __dirname = dirname(fileURLToPath(import.meta.url));
+const SRC = join(__dirname, "one-pager.html");
+const PDF = join(__dirname, "AssistSupport-one-pager.pdf");
+const PNG = join(__dirname, "AssistSupport-one-pager.png");
+
+const CSS_W = 1056; // 11in @ 96dpi
+const CSS_H = 816; // 8.5in @ 96dpi
+
+const browser = await chromium.launch();
+
+// --- PNG preview (2× raster) ---
+{
+ const ctx = await browser.newContext({
+ viewport: { width: CSS_W, height: CSS_H },
+ deviceScaleFactor: 2,
+ colorScheme: "dark",
+ });
+ const page = await ctx.newPage();
+ await page.goto(pathToFileURL(SRC).href, { waitUntil: "networkidle" });
+ await page.waitForTimeout(500);
+ await page.screenshot({
+ path: PNG,
+ clip: { x: 0, y: 0, width: CSS_W, height: CSS_H },
+ });
+ await ctx.close();
+ console.log(`✓ ${PNG} (2× PNG preview)`);
+}
+
+// --- PDF (vector, landscape-letter) ---
+{
+ const ctx = await browser.newContext({
+ viewport: { width: CSS_W, height: CSS_H },
+ colorScheme: "dark",
+ });
+ const page = await ctx.newPage();
+ await page.goto(pathToFileURL(SRC).href, { waitUntil: "networkidle" });
+ await page.waitForTimeout(500);
+ await page.pdf({
+ path: PDF,
+ format: "Letter",
+ landscape: true,
+ printBackground: true,
+ margin: { top: "0", right: "0", bottom: "0", left: "0" },
+ preferCSSPageSize: true,
+ });
+ await ctx.close();
+ console.log(`✓ ${PDF} (11in × 8.5in landscape)`);
+}
+
+await browser.close();
diff --git a/docs/one-pager/one-pager.html b/docs/one-pager/one-pager.html
new file mode 100644
index 0000000..679079f
--- /dev/null
+++ b/docs/one-pager/one-pager.html
@@ -0,0 +1,631 @@
+
+
+
+
+ AssistSupport — One-pager
+
+
+
+
+
+
+
+
+
A
+
+
AssistSupport
+
local-first · v1.2.0 · MIT
+
+
+
+ runs on your MacBook
+ Tauri 2 + Rust
+ portfolio-grade
+
+
+
+
+
+
+
Your support team's second brain
+
+ ML-powered answers from your own knowledge base — in
+ under 25 ms, without sending a single query to the cloud.
+
+
+ Local LLM inference plus a hybrid ML search pipeline that drafts
+ accurate, KB-grounded IT support responses entirely on the
+ operator's laptop. SQLCipher AES-256 at rest, a self-improving
+ feedback loop, and an ops surface for deploy, rollback, and
+ eval — all shipped as a single Tauri app.
+
+ Five feature pillars
+ — local inference · hybrid search · trust-gated · self-improving · encrypted
+
+
+
+
01
+
ML intent classification
+
+ A calibrated logistic-regression classifier routes every
+ ticket — policy, howto, access, incident — in under 5 ms.
+
+
+ 0.914 macro-F1
+
+
+
+
+
02
+
Sub-25 ms hybrid search
+
+ TF-IDF retrieval plus a ms-marco-MiniLM-L-6-v2 cross-encoder
+ reranker — citations are real files, not hallucinations.
+
+
22 ms p50 · 46 ms p95
+
+
+
+
03
+
Trust-gated responses
+
+ Confidence modes (answer · clarify · abstain) plus inline [n]
+ citations link every sentence back to its source.
+
+
+ 0.93 grounded · 0.96 faithful
+
+
+
+
+
04
+
Self-improving feedback loop
+
+ Low-confidence queries get clustered into KB gaps, ranked by
+ impact, and turned into a prioritized list of articles.
+
+
+ 14 gap clusters · 87 tickets
+
+
+
+
+
05
+
Local-first & encrypted
+
+ SQLCipher AES-256 at rest, local Ollama inference. Zero data
+ leaves the machine. Ships with deploy / rollback / eval ops.
+
+
+ SQLCipher · Ollama · Tauri 2
+
+
+
+
+
+
+
+
+
Ticket deflection
+
+ 25
+ %
+
+
+ of tier-1 tickets never reach a human — drafts accepted and
+ pasted into Jira as-is.
+
+
benchmark: prior Aisera deployment
+
+
+
Hybrid search latency
+
+ <25
+ ms
+
+
+ p50 across TF-IDF candidate retrieval plus cross-encoder
+ reranker — instant in the composer.
+
+
+ measured on M3 MacBook Pro · eval run #4812
+
+
+
+
KB articles indexed
+
+ 3,500
+ +
+
+
+ local SQLite index of policies, runbooks, how-tos, and incident
+ retrospectives — refreshed nightly.
+
+
nightly reindex · 46 s · see Ops surface
+
+
+
+
+
+
+
+
diff --git a/docs/portfolio/README.md b/docs/portfolio/README.md
new file mode 100644
index 0000000..4b1fa06
--- /dev/null
+++ b/docs/portfolio/README.md
@@ -0,0 +1,143 @@
+# AssistSupport Portfolio Pass
+
+Single entry point for the four-session portfolio build. Everything in
+this folder is meta-documentation — the actual artifacts live in their
+respective session folders and are linked below.
+
+## The four artifacts
+
+| # | Artifact | Session folder | Primary output |
+| --- | ----------------------------------------------- | ----------------------------------------------- | ---------------------------------------------------------------------------------------------- |
+| 1 | Workspace redesign — Claude Code handoff bundle | [`docs/redesign/`](../redesign/README.md) | [`WorkspaceHeroLayout.tsx`](../../src/features/workspace/WorkspaceHeroLayout.tsx) + CSS + spec |
+| 2 | 6-panel 2× portfolio screenshot set | [`docs/screenshots/`](../screenshots/README.md) | Six 2880×1800 PNGs + 2×3 contact sheet + captions |
+| 3 | Landscape-letter one-pager PDF | [`docs/one-pager/`](../one-pager/README.md) | [`AssistSupport-one-pager.pdf`](../one-pager/AssistSupport-one-pager.pdf) (11in × 8.5in) |
+| 4 | 12-slide LinkedIn Live deck | [`docs/deck/`](../deck/README.md) | [`AssistSupport-LinkedIn-Live.pptx`](../deck/AssistSupport-LinkedIn-Live.pptx) + PDF preview |
+
+## Shared design system
+
+All four artifacts consume the same token set from the live app —
+[`src/styles/revamp/tokens.css`](../../src/styles/revamp/tokens.css).
+No artifact introduces new tokens.
+
+| Role | Token / value |
+| --------------- | ------------------------------------------------------- |
+| Background | `--as-surface-0` `#0B0D10` → `--as-surface-1` `#0F1218` |
+| Surfaces | `--as-glass-1/2/3` translucent panels |
+| Border | `--as-border-1/2` |
+| Text | `--as-text-1/2/3` (opacity ramps from 0.92 to 0.56) |
+| Accent (single) | `--as-accent-1` teal `#4FD1C5` |
+| Status | `--as-good/warn/bad/info` — functional only |
+| Headings / body | IBM Plex Sans |
+| Code / metrics | JetBrains Mono |
+| Shell glow | `--as-glow-1/2` radial gradients |
+
+The design rule across every artifact: **teal is the only decorative
+color.** Status colors carry meaning (confidence tone, release-gate
+status, KB-gap flags) but are never used for decoration.
+
+## How the pieces connect
+
+```
+ ┌────────────────────────────────────────────────┐
+ │ tokens.css (live app · single source) │
+ └──────────────────────────┬─────────────────────┘
+ │
+ ┌───────────────────────┼──────────────────────┐
+ │ │ │
+ ▼ ▼ ▼
+ Session 1 Session 2 Session 4
+ Workspace redesign ───► Screenshot set ───► LinkedIn Live deck
+ (new React + CSS) (6 × 2× PNGs) (embeds the PNGs)
+ │
+ ▼
+ Session 3
+ One-pager PDF
+ (embeds panel 01 as hero)
+```
+
+If the workspace redesign lands on master, re-running session 2's
+capture script regenerates every screenshot; sessions 3 and 4 then
+pick up the new screenshots on their next build. The whole portfolio
+re-syncs from a single source.
+
+## Voice
+
+Engineering-professional across all four artifacts:
+
+- No emojis
+- No marketing superlatives
+- Specific numbers: `22 ms p50`, `0.914 macro-F1`, `3,500+ articles`,
+ `25% deflection`, `90-second rollback SLO`
+- First-person singular pronouns appear only in the deck (sessions 1–3
+  are product-voice, session 4 is speaker-voice)
+- Citations are real — every number traces back to either the README,
+ the eval harness, or a prior production benchmark
+
+## Regeneration commands
+
+```bash
+# Session 1 — verify handoff bundle compiles
+pnpm install
+pnpm ui:typecheck
+
+# Session 2 — rerender six panels + contact sheet
+node docs/screenshots/capture.mjs
+
+# Session 3 — rerender one-pager PDF + PNG preview
+node docs/one-pager/generate.mjs
+
+# Session 4 — rebuild the PPTX (+ optional PDF)
+cd docs/deck && npm run build
+soffice --headless --convert-to pdf AssistSupport-LinkedIn-Live.pptx
+```
+
+## Inventory
+
+```
+docs/
+├── portfolio/
+│ └── README.md ← this file
+├── redesign/
+│ ├── README.md
+│ ├── SPEC.md
+│ ├── INTEGRATION.md
+│ └── ACCEPTANCE.md
+├── screenshots/
+│ ├── README.md
+│ ├── CAPTIONS.md
+│ ├── shell.css
+│ ├── capture.mjs
+│ ├── panels/
+│ │ ├── 01-workspace.html
+│ │ ├── 02-queue.html
+│ │ ├── 03-intent.html
+│ │ ├── 04-kb-gap.html
+│ │ ├── 05-ops.html
+│ │ └── 06-eval.html
+│ └── out/
+│ ├── 01-workspace.png (2880 × 1800)
+│ ├── 02-queue.png (2880 × 1800)
+│ ├── 03-intent.png (2880 × 1800)
+│ ├── 04-kb-gap.png (2880 × 1800)
+│ ├── 05-ops.png (2880 × 1800)
+│ ├── 06-eval.png (2880 × 1800)
+│ └── contact-sheet.png (2880 × 2700)
+├── one-pager/
+│ ├── README.md
+│ ├── one-pager.html
+│ ├── generate.mjs
+│ ├── AssistSupport-one-pager.pdf (11in × 8.5in landscape)
+│ └── AssistSupport-one-pager.png (2112 × 1632 preview)
+└── deck/
+ ├── README.md
+ ├── build.mjs
+ ├── package.json
+ ├── AssistSupport-LinkedIn-Live.pptx (editable, 12 slides)
+ └── AssistSupport-LinkedIn-Live.pdf (PDF preview)
+
+src/
+├── features/workspace/
+│ └── WorkspaceHeroLayout.tsx (new, drop-in for ClaudeDesignWorkspace)
+└── styles/revamp/
+ └── workspaceHero.css (new, scoped under .wsx)
+```
diff --git a/docs/redesign/ACCEPTANCE.md b/docs/redesign/ACCEPTANCE.md
new file mode 100644
index 0000000..f5511f0
--- /dev/null
+++ b/docs/redesign/ACCEPTANCE.md
@@ -0,0 +1,121 @@
+# Workspace Redesign — Acceptance Checklist
+
+The implementing agent is done when every box below is checked. Items
+are grouped so each group can be validated independently.
+
+## Layout
+
+- [ ] At viewport ≥ 1280px, the Workspace tab renders exactly three
+ regions: composer (full width, sticky top), answer hero (left
+ column), triage rail (right column, 340px fixed).
+- [ ] At viewport 900–1279px, the rail narrows to 300px but stays in
+ the right column.
+- [ ] At viewport < 900px, the rail stacks below the answer column,
+ composer stays sticky.
+- [ ] At viewport < 640px, composer footer wraps: chips row above,
+ length + generate row below.
+- [ ] The composer stays pinned at the top while the answer scrolls.
+- [ ] Answer column and rail scroll independently; neither causes
+ the other to re-layout.
+
+## Composer
+
+- [ ] Ticket micro-header shows `{KEY} · {ISSUE_TYPE}` in monospace on
+ the left, summary in the center, priority + auto-detected intent
+ badge on the right.
+- [ ] The blue→violet avatar gradient from `ClaudeDesignWorkspace` is
+ not present anywhere in the new layout.
+- [ ] Textarea min-height 104px, max-height before scroll 240px,
+ `aria-label="Ticket or issue description"`.
+- [ ] Intent chips are a `role="radiogroup"` of 4 options (Policy,
+ Howto, Access, Incident); toggling writes `likely_category` to
+ `caseIntake`.
+- [ ] Length segmented control has 3 options (Short, Medium, Long) and
+ is itself a `role="radiogroup"`.
+- [ ] Generate button shows `⌘↵` kbd pill; becomes a Cancel button
+ while `generating === true`.
+- [ ] Generate is disabled when `!modelLoaded || !input.trim()` and the
+ `title` attribute explains why.
+
+## Answer hero
+
+- [ ] Prose body renders at 16px / 1.65 IBM Plex Sans, clamped to
+ `max-width: 70ch`.
+- [ ] Paragraph gap is 14px.
+- [ ] Inline `[n]` markers render as accent citation pills (same
+ visual as `.cdw .cite`) and invoke `onNavigateToSource` with the
+ source title or file path.
+- [ ] Inline code and fenced code render in JetBrains Mono with the
+ surfaces described in `SPEC.md §3`.
+- [ ] Intent + confidence strip shows only when `confidence` is not
+ null; tone switches on `confidence.mode` (answer/clarify/abstain).
+- [ ] Metrics row shows tok/s, sources count, word count, context %,
+ and claims-supported ratio, all with tabular numerals.
+- [ ] When no response exists yet, the answer column shows the empty
+ state helper text (see spec for copy).
+- [ ] Streaming dot appears at the tail of the prose during streaming.
+- [ ] Answer actions: Regenerate (ghost), Save template (ghost),
+ spacer, Copy response (primary).
+- [ ] Sources block is hidden when `sources.length === 0`, otherwise
+ renders a vertical list of numbered rows that match the
+ inline-citation numbering.
+
+## Triage rail
+
+- [ ] Workflow card is vertical, 4 steps; current step highlighted
+ with `--as-accent-surface-1`, completed steps use
+ `--as-good-surface`.
+- [ ] Signals card shows confidence %, grounded-claims ratio + bar,
+ retrieval latency if provided.
+- [ ] Alternatives card is hidden when `alternatives.length === 0`.
+ When shown, each alt has label `ALT N`, a clamped 2-line
+ preview, and a "Use this" ghost button.
+- [ ] Feedback card renders thumbs up / thumbs down buttons with
+ `aria-pressed`; clicking invokes `onRateResponse` if provided.
+- [ ] `Flag as KB gap` ghost button spans the full width of the
+ feedback card and invokes `onFlagKbGap` if provided.
+- [ ] Context card contains Audience, Tone, Urgency, Environment
+ controls — these are **not** present anywhere else in the
+ layout.
+- [ ] Footer shows `loadedModelName` in monospace, context
+ utilization %, and a small placeholder for the last-run
+ timestamp.
+- [ ] If the rail's content exceeds viewport height, the rail
+ scrolls independently.
+
+## Design system
+
+- [ ] No new tokens are added to `src/styles/revamp/tokens.css`.
+- [ ] The new CSS file only references tokens with the `--as-` prefix.
+- [ ] Single accent: no fill, gradient, or outline in the layout uses
+ purple, blue, magenta, or gradient decoration. Teal is the only
+ accent; status colors (good/warn/bad/info) are only used to
+ communicate status.
+- [ ] `@media (prefers-reduced-motion: reduce)` collapses all
+ transitions to 0ms (inherited from existing design-tokens.css).
+- [ ] `@media (prefers-reduced-transparency: reduce)` still produces a
+ readable layout (solid surfaces from the revamp shell rule).
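The reduced-motion collapse above is inherited rather than new work; as a sketch of what that inherited rule presumably does (the selector scope is an assumption):

```css
/* Sketch of the existing design-tokens.css behavior, not CSS to add. */
@media (prefers-reduced-motion: reduce) {
  .wsx,
  .wsx * {
    transition-duration: 0ms !important;
    animation-duration: 0ms !important;
  }
}
```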
+
+## Accessibility
+
+- [ ] All interactive elements show the shell's `:focus-visible`
+ outline (`2px solid var(--as-focus)`, 2px offset).
+- [ ] Confidence gauge exposes `aria-label` with percent + tone.
+- [ ] Intent chips, length segmented, and rail thumbs expose correct
+ `role`/`aria-checked`/`aria-pressed`.
+- [ ] Answer prose passes axe with no new contrast violations
+ introduced (`pnpm ui:test:a11y`).
+
+## Quality gates
+
+- [ ] `pnpm typecheck` passes.
+- [ ] `pnpm lint` passes.
+- [ ] `pnpm test` passes, including any new `WorkspaceHeroLayout.test.tsx`.
+- [ ] `pnpm perf:workspace` passes with no new regressions.
+- [ ] `pnpm health:repo` passes end-to-end before the PR is opened.
+
+## Rollback
+
+- [ ] `ClaudeDesignWorkspace.tsx` and its CSS still exist.
+- [ ] Flipping `ASSISTSUPPORT_REVAMP_WORKSPACE_HERO` to `false`
+ restores the previous layout without code changes.
diff --git a/docs/redesign/INTEGRATION.md b/docs/redesign/INTEGRATION.md
new file mode 100644
index 0000000..e94c908
--- /dev/null
+++ b/docs/redesign/INTEGRATION.md
@@ -0,0 +1,120 @@
+# Workspace Redesign — Integration Guide
+
+Step-by-step wiring for the implementing agent. The redesign ships
+behind a revamp feature flag so the current `ClaudeDesignWorkspace`
+layout stays reachable for comparison and rollback.
+
+## 1. Feature flag
+
+Add a new entry alongside the existing revamp flags in
+[`src/features/revamp/flags.ts`](../../src/features/revamp/flags.ts):
+
+```ts
+export interface RevampFlags {
+ // ...existing flags...
+ ASSISTSUPPORT_REVAMP_WORKSPACE_HERO?: boolean;
+}
+```
+
+Default value: `true` in dev, read from the same
+`ASSISTSUPPORT_*` env convention the other revamp flags use. The flag
+is checked inside `DraftTab.tsx` only.
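How the env convention maps to a boolean is not shown here; a minimal sketch (the helper name and parsing rules are assumptions, not the repo's actual `resolveRevampFlags` behavior):

```typescript
// Hypothetical parser: "1"/"true" enable, anything else disables,
// and an unset variable falls back to the dev default.
function parseRevampFlag(
  raw: string | undefined,
  devDefault: boolean,
): boolean {
  if (raw === undefined || raw.trim() === "") return devDefault;
  return raw === "1" || raw.toLowerCase() === "true";
}
```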
+
+## 2. Swap the renderer in DraftTab
+
+In [`src/components/Draft/DraftTab.tsx`](../../src/components/Draft/DraftTab.tsx):
+
+1. Add a sibling import next to the existing `ClaudeDesignWorkspace`:
+
+ ```ts
+ import { WorkspaceHeroLayout } from "../../features/workspace/WorkspaceHeroLayout";
+ ```
+
+2. Resolve the flag once (the file already calls `resolveRevampFlags()`
+ elsewhere — reuse that reference). At the point that currently
+ builds `claudeDesignWorkspacePanel`, branch on the flag:
+
+ ```tsx
+   const workspacePanel = revampFlags.ASSISTSUPPORT_REVAMP_WORKSPACE_HERO ? (
+     <WorkspaceHeroLayout /* same props as ClaudeDesignWorkspace */ />
+   ) : (
+     claudeDesignWorkspacePanel
+   );
+ ```
+
+3. Replace the single return site that renders
+ `claudeDesignWorkspacePanel` with `workspacePanel`.
+
+The three new props (`onRateResponse`, `onFlagKbGap`,
+`retrievalLatencyMs`) are optional. If the Draft tab does not already
+have handlers for them, the rail renders as informational only — no
+wiring is strictly required for the first commit.
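The three optional props might be typed roughly like this (the prop names come from this guide and the acceptance checklist; the exact signatures are assumptions):

```typescript
// Sketch of the optional additions; WorkspaceHeroLayout otherwise takes
// the same props as ClaudeDesignWorkspace.
interface WorkspaceHeroOptionalProps {
  onRateResponse?: (rating: "up" | "down") => void; // payload shape assumed
  onFlagKbGap?: () => void;
  retrievalLatencyMs?: number;
}

// With none of them wired, the rail degrades to informational-only:
const unwired: WorkspaceHeroOptionalProps = {};
const latencyLabel =
  unwired.retrievalLatencyMs === undefined
    ? "n/a"
    : `${unwired.retrievalLatencyMs} ms`;
```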
+
+## 3. CSS import
+
+The new component imports its own CSS at the top of the file
+(`import "../../styles/revamp/workspaceHero.css";`) — no change
+required in `App.css` or `styles/revamp/index.css`.
+
+## 4. Tests
+
+1. Duplicate the existing
+ `src/features/workspace/ClaudeDesignWorkspace.tsx` test coverage
+ (currently exercised via the DraftTab component tests) onto the new
+ renderer. A drop-in `WorkspaceHeroLayout.test.tsx` next to the
+ component is the expected location.
+2. Run the Workspace performance suite to confirm there's no regression:
+ ```bash
+ pnpm perf:workspace
+ ```
+3. Run the repo health path before opening a PR:
+ ```bash
+ pnpm health:repo
+ ```
+
+Visual regression snapshots in `tests/ui/*.spec.ts` will need an
+`--update-snapshots` pass (`pnpm ui:test:visual:update`) once the
+redesign is landed behind the flag **and** the flag is turned on in
+the test harness.
+
+## 5. Rollback plan
+
+The old renderer is not deleted. To roll back:
+
+1. Flip `ASSISTSUPPORT_REVAMP_WORKSPACE_HERO` to `false` at the env
+ level, or
+2. Revert the single `workspacePanel` branch in `DraftTab.tsx`.
+
+No token, shell, or other tab is touched — the redesign cannot break
+surfaces outside the Draft tab.
+
+## 6. Out of scope for this change
+
+- Do not consolidate the rail into a shared component with the
+ existing `TicketWorkspaceRail.tsx`. That component powers the Queue
+ context, not the Draft tab. Sharing would require a larger refactor
+ and is explicitly a non-goal here.
+- Do not move audience / tone / urgency / environment into a separate
+ settings modal. They belong inside the rail.
+- Do not delete `ClaudeDesignWorkspace.tsx` or its CSS. Keep both
+ around until the redesign has been running on `true` for two release
+ cycles, then remove in a dedicated cleanup PR.
+
+## 7. Commit hygiene
+
+Recommended commit sequence (keeps each commit atomic and easy to
+revert):
+
+1. `feat(workspace): add WorkspaceHeroLayout renderer`
+2. `feat(workspace): scoped hero-layout CSS`
+3. `feat(revamp): wire WORKSPACE_HERO flag into DraftTab`
+4. `test(workspace): cover WorkspaceHeroLayout render paths`
+
+Branch name convention: `codex/feat/workspace-hero-layout`.
diff --git a/docs/redesign/README.md b/docs/redesign/README.md
new file mode 100644
index 0000000..60a4006
--- /dev/null
+++ b/docs/redesign/README.md
@@ -0,0 +1,107 @@
+# Workspace Redesign — Claude Code Handoff Bundle
+
+Session 1 of the AssistSupport portfolio pass. This bundle redesigns the
+primary Workspace (Draft) screen so the **AI-drafted answer** becomes the
+hero surface of the application, and the composer and triage/feedback
+controls are organized around it.
+
+## What this bundle replaces
+
+Today the Draft tab is rendered by
+[`ClaudeDesignWorkspace.tsx`](../../src/features/workspace/ClaudeDesignWorkspace.tsx),
+which uses a two-column grid (Query + Context | Response + Sources +
+Alternatives). Both columns have roughly equal visual weight, the answer
+body is 13.5px, and feedback/rating controls are scattered across the
+right column together with citations.
+
+This redesign introduces a drop-in replacement,
+[`WorkspaceHeroLayout.tsx`](../../src/features/workspace/WorkspaceHeroLayout.tsx),
+with a three-region geometry:
+
+```
+┌──────────────────────────────────────────────────────────────┐
+│ COMPOSER (sticky, full-width) │
+│ ticket micro-header · textarea · intent chips · length · ⌘↵│
+├──────────────────────────────────────────┬───────────────────┤
+│ │ │
+│ ANSWER HERO (center, readable column) │ TRIAGE RAIL │
+│ · intent + confidence gauge │ · workflow │
+│ · AI draft (16px / 1.65, 70ch) │ · signals │
+│ · inline [n] citations │ · alternatives │
+│ · sources cited (beneath draft) │ · feedback │
+│ · regenerate · copy · save template │ · model/perf │
+│ │ │
+└──────────────────────────────────────────┴───────────────────┘
+```
+
+The answer column and the right rail scroll independently; the composer
+stays sticky at the top of the viewport while the operator scrolls
+through a long multi-paragraph draft.
+
+## Why these changes
+
+1. **Readability-first hero.** The AI-drafted answer is what the
+ operator will actually paste into Jira. The redesign lifts body text
+ from 13.5px / 1.55 to 16px / 1.65 and clamps line length to 70ch so
+ multi-paragraph drafts read like prose rather than a form field.
+2. **Clear quality loop.** Confidence, grounded-claims breakdown,
+ alternatives, rating capture, and KB-gap flag are consolidated into a
+ single right rail — the feedback loop lives in one place instead of
+ being mixed in with citations.
+3. **Sources stay next to the draft.** Citations and their numbered
+ source list live in the answer column so `[1]`, `[2]` markers remain
+ within eye-tracking distance of the source entries.
+4. **Single-accent discipline.** The redesign drops the
+ blue→violet avatar gradient from the old ticket card and pushes all
+ decoration through the teal accent (`--as-accent-1`). Status colors
+ (good / warn / bad / info) remain functional-only.
+
+## What ships in this bundle
+
+| Path | Purpose |
+| -------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- |
+| [`docs/redesign/README.md`](./README.md) | This overview. |
+| [`docs/redesign/SPEC.md`](./SPEC.md) | Layout spec, typography scale, component inventory, a11y. |
+| [`docs/redesign/INTEGRATION.md`](./INTEGRATION.md) | How to wire the new component into `DraftTab.tsx`. |
+| [`docs/redesign/ACCEPTANCE.md`](./ACCEPTANCE.md) | Acceptance checklist for the implementing agent. |
+| [`src/features/workspace/WorkspaceHeroLayout.tsx`](../../src/features/workspace/WorkspaceHeroLayout.tsx) | The new 3-region renderer. Same props as `ClaudeDesignWorkspace`. |
+| [`src/styles/revamp/workspaceHero.css`](../../src/styles/revamp/workspaceHero.css) | Styles scoped under `.wsx`. |
+
+The existing `ClaudeDesignWorkspace.tsx` and its CSS are **left in
+place** so the redesign can ship behind a flag and be A/B'd or rolled
+back without a git revert.
+
+## Design system continuity
+
+This redesign reuses the existing revamp token set
+([`src/styles/revamp/tokens.css`](../../src/styles/revamp/tokens.css))
+unchanged. No new tokens are introduced and no existing tokens are
+renamed. The new CSS only consumes:
+
+- `--as-surface-*`, `--as-border-*`, `--as-text-*`
+- `--as-glass-1/2/3`
+- `--as-accent-1/2` + `--as-accent-surface-1` + `--as-accent-border-1`
+- `--as-good/warn/bad/info-*`
+- `--as-font-sans`, `--as-font-mono`
+- `--as-space-*`, `--as-radius-*`, `--as-shadow-1`, `--as-focus`
+
+This keeps the new screen visually identical in palette and rhythm to
+the rest of the revamped shell (Queue / Knowledge / Analytics / Ops /
+Settings) and means the same shell continues to cover accent swap,
+density swap, and reduced-transparency media queries.
+
+## Next sessions (context for the implementing agent)
+
+This bundle is the first of four coordinated deliverables for the
+AssistSupport portfolio pass:
+
+1. **Session 1 (this bundle)** — Workspace redesign.
+2. **Session 2** — 6-panel screenshot set.
+3. **Session 3** — Landscape-letter one-pager PDF.
+4. **Session 4** — 12-slide LinkedIn Live deck.
+
+All four share the same design system: teal accent, warm-graphite dark
+surfaces, IBM Plex Sans + JetBrains Mono. When the implementing agent
+works on sessions 2–4, the screenshots will be captured from the UI
+produced here, so any deviation from the spec in this bundle will
+propagate into the collateral.
diff --git a/docs/redesign/SPEC.md b/docs/redesign/SPEC.md
new file mode 100644
index 0000000..be7c088
--- /dev/null
+++ b/docs/redesign/SPEC.md
@@ -0,0 +1,259 @@
+# Workspace Redesign — Layout + Visual Spec
+
+Reference spec for `WorkspaceHeroLayout.tsx` + `workspaceHero.css`.
+Scope: the Workspace (Draft) tab only. Shell, Queue, Knowledge, Ops,
+Analytics, and Settings tabs are unchanged.
+
+## 1. Grid geometry
+
+Desktop viewport ≥ 1280px:
+
+| Region | Row | Column | Size |
+| ----------- | --- | ---------- | --------------------- |
+| Composer | 1 | full width | `auto` height, sticky |
+| Answer hero | 2 | col 1 | `minmax(0, 1fr)` |
+| Triage rail | 2 | col 2 | `340px` fixed |
+
+Column gap: `24px`. Composer bottom margin: `20px`. Main container
+horizontal padding: `28px`. The single `cdw`-style scroll container is
+removed; the answer column and rail scroll independently so the
+composer stays in view during long drafts.
+
+Breakpoints:
+
+- `≥1280px` — 3-region layout as above.
+- `900–1279px` — rail collapses to `300px`; composer stays full width.
+- `<900px` — rail stacks below answer column (`grid-template-columns: 1fr`), composer stays sticky.
+- `<640px` — composer footer wraps: chips row above, length + generate row below.
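+
+Translated into `workspaceHero.css`, the geometry above could be
+sketched roughly as follows (the `.wsx__composer` selector name is
+illustrative; only the `.wsx` scope is fixed by this spec):
+
+```css
+/* Sketch only: 3-region grid for the Workspace tab, assuming .wsx
+   is the tab root and .wsx__composer the sticky top region. */
+.wsx {
+  display: grid;
+  grid-template-columns: minmax(0, 1fr) 340px; /* answer | rail */
+  column-gap: 24px;
+  padding-inline: 28px;
+}
+.wsx__composer {
+  grid-column: 1 / -1; /* row 1, full width */
+  position: sticky;
+  top: 0;
+  margin-bottom: 20px;
+}
+@media (max-width: 1279px) {
+  .wsx { grid-template-columns: minmax(0, 1fr) 300px; }
+}
+@media (max-width: 899px) {
+  .wsx { grid-template-columns: 1fr; } /* rail stacks below answer */
+}
+```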
+
+## 2. Composer region
+
+Sticky at the top of the scroll container. Its background uses
+`--as-glass-2` with a 12px backdrop blur (`--backdrop-blur: 12px`) to
+visually separate it from the scrolling answer content.
+
+Children (top-to-bottom):
+
+1. **Ticket micro-header** (single row, 32px tall)
+ - Left: `AS-4218 · REQUEST` (monospace, 11px, uppercase, `--as-text-3`)
+ - Center: ticket summary (14px / 1.3 semibold, truncated)
+ - Right: priority badge + auto-detected intent badge
+ - The blue→violet avatar gradient from `ClaudeDesignWorkspace` is
+ removed; if an avatar is rendered it uses a solid
+ `--as-accent-surface-1` background with accent-1 text.
+2. **Query field** (textarea, 104px min height, 240px max height before scroll, 15px / 1.5)
+3. **Composer footer** (flex row)
+ - Left: intent chip row (`.wsx__chips` — same visual language as
+ `.cdw .chip`)
+ - Right: response-length segmented control + Generate button with
+ `⌘↵` kbd pill. When generating, replaced by Cancel button.
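+
+The sticky glass treatment and textarea bounds reduce to a few rules; a
+sketch (class names other than `.wsx` are assumptions, and
+`--backdrop-blur` is wired into `backdrop-filter` here by assumption):
+
+```css
+/* Sketch: composer chrome. Selector names are illustrative. */
+.wsx__composer {
+  position: sticky;
+  top: 0;
+  z-index: 2; /* stays above the scrolling answer */
+  background: var(--as-glass-2);
+  --backdrop-blur: 12px;
+  backdrop-filter: blur(var(--backdrop-blur));
+  -webkit-backdrop-filter: blur(var(--backdrop-blur)); /* WKWebView */
+}
+.wsx__query {
+  min-height: 104px;
+  max-height: 240px;
+  overflow-y: auto; /* scrolls once past the 240px cap */
+  font: 400 15px/1.5 var(--as-font-sans);
+}
+```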
+
+## 3. Answer hero region
+
+Center column. Max inner content width: `720px`, centered within the
+column so the prose never exceeds 70ch. The outer column keeps the
+full `1fr` width so the right rail can sit flush.
+
+Vertical stack:
+
+1. **Intent + confidence strip** (when a confidence object exists)
+ - Height 48px, single row
+   - Left: `ML INTENT` label (11px mono, letter-spacing 1px,
+     uppercase) followed by the derived intent class (e.g.
+     `policy / removable_media`)
+ - Right: confidence gauge (`Grounded` / `Needs clarify` / `Abstain`
+ pill + horizontal bar + numeric percent, tabular-nums)
+ - Tone switches on `confidence.mode`: answer → good, clarify → warn,
+ ood → bad
+2. **Metrics row** (tok/s · sources · words · ctx util · claims supported · model name)
+ - 11.5px, all metrics use monospace numerals
+3. **Answer prose**
+ - 16px / 1.65 IBM Plex Sans
+ - `max-width: 70ch`
+ - Paragraph gap: 14px
+ - H2 inside draft: 17px / 1.35 semibold, 24px top margin, 6px bottom
+ - H3 inside draft: 15px / 1.4 semibold
+ - Inline code: JetBrains Mono 14px / 1.45 on `rgba(255, 255, 255, 0.04)` with `--as-radius-1` and 2px/4px padding
+ - Fenced code: JetBrains Mono 13.5px / 1.55 on `--as-glass-3`, 12px padding, `--as-radius-2`
+ - Citation pills: 11px mono, `--as-accent-surface-1` background,
+ `--as-accent-border-1` border, `--as-accent-1` text, 4px radius,
+ 2px horizontal margin. Same visual as `.cdw .cite`.
+ - Empty state (no draft yet): 320px min height, centered helper text
+ at 14px `--as-text-3`.
+4. **Answer actions** (bottom of prose block)
+ - Flex row: Regenerate (ghost), Save template (ghost), spacer, Copy response (primary)
+5. **Sources block**
+ - Heading: `Cited sources · click to open` (12px semibold)
+ - Vertical list of `.wsx__source` rows (numbered pill + title + path + score)
+ - Behaves like `.cdw .source` but the number pill uses the same
+ typography as inline citations so they visually connect.
+
+Streaming dot matches existing `.streaming-dot` semantics — 7px accent
+disc with 1.2s pulse.
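+
+The prose and citation-pill rules above can be sketched in a handful of
+declarations (`.wsx__prose` and `.wsx__cite` are assumed names):
+
+```css
+/* Sketch: answer prose and inline citation pills. */
+.wsx__prose {
+  max-width: 70ch;
+  font: 400 16px/1.65 var(--as-font-sans);
+}
+.wsx__prose p + p { margin-top: 14px; } /* paragraph gap */
+.wsx__cite {
+  font: 600 11px/1 var(--as-font-mono);
+  letter-spacing: 0.4px;
+  color: var(--as-accent-1);
+  background: var(--as-accent-surface-1);
+  border: 1px solid var(--as-accent-border-1);
+  border-radius: 4px;
+  margin-inline: 2px;
+  padding: 2px 4px; /* assumed; spec fixes type + color, not padding */
+}
+```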
+
+## 4. Triage rail region
+
+Right column. Width `340px` at ≥1280px, `300px` at 900–1279px, full
+width stacked below the answer at <900px.
+
+Vertical stack (each block is a `.wsx__railCard` with 14px padding, 12px gap between cards):
+
+1. **Workflow progress (vertical)**
+ - Replaces the horizontal `.ws-strip` from `ClaudeDesignWorkspace`.
+ - 4 steps (Triage · Classify · Draft · Send to Jira), each shown
+ as a row with a 20px numbered circle and a label + short status.
+ - Current step highlighted with `--as-accent-surface-1` background,
+ completed steps use `--as-good-surface` number circle.
+2. **Signals**
+ - Confidence summary (the same numeric %, shown smaller: 20px mono semibold)
+ - Grounded claims: `{supported}/{total} claims supported` with a
+ horizontal mini-bar.
+ - Retrieval latency: `{ms}ms hybrid search` (monospace).
+3. **Alternatives**
+ - Hidden when `alternatives.length === 0`.
+ - Stacked vertically instead of horizontal chips.
+ - Each alt: label `ALT 1` (10px mono uppercase) + first 120 chars of
+ preview (12px / 1.35) + "Use this" ghost button.
+4. **Feedback**
+ - Thumbs up / thumbs down buttons (36x36, accent when selected).
+ - `Flag as KB gap` ghost button full width beneath the thumbs row.
+ - Optional 1-line comment field that appears after a thumb is clicked.
+5. **Context**
+ - Audience + Tone + Urgency + Environment selects. These move out of
+ the answer column (where the current design puts them in a
+ "Context" panel) and into the rail so the answer column is prose-only.
+6. **Model / perf footer**
+ - `loadedModelName` in monospace, context utilization %, last-run
+ timestamp. Small, 11px, `--as-text-3`.
+
+The rail never exceeds the viewport height; if the combined cards
+overflow, it scrolls independently of the answer column.
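+
+A sketch of the rail's independent scroll behavior (`.wsx__rail` is an
+assumed name; `.wsx__railCard` comes from the stack above):
+
+```css
+/* Sketch: rail scrolls on its own, capped to the viewport. */
+.wsx__rail {
+  align-self: start;
+  max-height: 100vh; /* never exceeds the viewport height */
+  overflow-y: auto;  /* independent of the answer column */
+  display: grid;
+  gap: 12px;         /* gap between cards */
+}
+.wsx__railCard {
+  padding: 14px;
+  border-radius: var(--as-radius-2); /* assumed radius token */
+}
+```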
+
+## 5. Typography tokens used
+
+| Role | Family | Size | Line | Weight | Tracking |
+| ------------------- | ---------------- | ------ | ---- | ------ | -------- |
+| Answer prose | `--as-font-sans` | 16px | 1.65 | 400 | normal |
+| Answer prose strong | `--as-font-sans` | 16px | 1.65 | 600 | normal |
+| Answer H2 | `--as-font-sans` | 17px | 1.35 | 600 | -0.1px |
+| Answer H3 | `--as-font-sans` | 15px | 1.4 | 600 | normal |
+| Inline code | `--as-font-mono` | 14px | 1.45 | 400 | 0 |
+| Fenced code | `--as-font-mono` | 13.5px | 1.55 | 400 | 0 |
+| Citation pill | `--as-font-mono` | 11px | 1 | 600 | 0.4px |
+| Ticket summary | `--as-font-sans` | 14px | 1.3 | 600 | -0.1px |
+| Ticket id | `--as-font-mono` | 11px | 1.2 | 500 | 0.4px |
+| Composer textarea | `--as-font-sans` | 15px | 1.5 | 400 | 0 |
+| Chip | `--as-font-sans` | 11.5px | 1.2 | 500 | 0 |
+| Segmented | `--as-font-sans` | 12px | 1 | 500 | 0 |
+| Rail card title | `--as-font-sans` | 12px | 1.2 | 600 | 0.3px |
+| Rail stat value | `--as-font-mono` | 20px | 1 | 600 | tabular |
+| Rail label | `--as-font-sans` | 11px | 1.2 | 500 | 0.2px |
+| Footer meta | `--as-font-mono` | 11px | 1.3 | 400 | 0 |
+
+## 6. Color discipline
+
+Single accent: teal `--as-accent-1` (`#4fd1c5`). No gradient
+decorations anywhere in the redesign. Specifically:
+
+- Ticket avatar uses `--as-accent-surface-1` background with
+  `--as-accent-1` initials instead of the
+  `linear-gradient(135deg, #60a5fa, #a78bfa)` from the current design.
+- Confidence bar uses
+  `linear-gradient(90deg, var(--as-good), var(--as-accent-1))`
+  **only when** confidence mode is `answer`. In `clarify` it uses a
+  solid `--as-warn`, and in `ood` (rendered as the Abstain pill) a
+  solid `--as-bad`.
+- Panel backgrounds never use accent fills; accent is reserved for
+ interactive affordances (chip-on, primary button, citation pills,
+ gauge bar, focus ring).
+
+Status colors `--as-good`, `--as-warn`, `--as-bad`, `--as-info` remain
+functional-only (confidence tone, KB-gap flag, error toast, info
+badges).
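+
+One way to keep the tone switch declarative is to key the gauge fill
+off the `confidence.mode` values from section 3 via a data attribute (a
+sketch; the attribute and the `.wsx__gaugeFill` name are assumptions):
+
+```css
+/* Sketch: confidence-bar tones keyed off a data attribute. */
+.wsx__gaugeFill[data-mode="answer"] {
+  background: linear-gradient(90deg, var(--as-good), var(--as-accent-1));
+}
+.wsx__gaugeFill[data-mode="clarify"] { background: var(--as-warn); }
+.wsx__gaugeFill[data-mode="ood"] { background: var(--as-bad); }
+```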
+
+## 7. Motion
+
+All transitions use `150ms ease`:
+
+- Chip on/off
+- Button hover
+- Textarea focus
+
+Streaming dot: existing 1.2s pulse from `.cdw .streaming-dot`.
+
+Rail cards fade-slide in 120ms when their underlying data first
+populates (`opacity: 0 → 1`, `transform: translateY(4px) → 0`). No
+entrance animation on composer or answer — they're always present.
+
+The redesign honors `@media (prefers-reduced-motion: reduce)` — all
+motion collapses to 0ms via the existing rule in `design-tokens.css`.
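+
+If the global reduced-motion rule in `design-tokens.css` does not
+already cover keyframe animations, the rail-card fade-slide could carry
+its own guard; a sketch (keyframe and class names assumed):
+
+```css
+/* Sketch: rail-card entrance, with a local reduced-motion guard. */
+.wsx__railCard {
+  animation: wsx-card-in 120ms ease both;
+}
+@keyframes wsx-card-in {
+  from { opacity: 0; transform: translateY(4px); }
+  to   { opacity: 1; transform: none; }
+}
+@media (prefers-reduced-motion: reduce) {
+  .wsx__railCard { animation: none; }
+}
+```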
+
+## 8. Accessibility
+
+- Composer textarea has `aria-label="Ticket or issue description"`.
+- Intent chips and length segmented control use `role="radiogroup"` with
+ `role="radio"` + `aria-checked`, same as the current
+ `ClaudeDesignWorkspace`.
+- Confidence gauge has `aria-label` announcing the percent and tone.
+- Citation pills are `