feat: real-time context/token usage display in top bar and channel header #554

Joshf225 wants to merge 9 commits into spacedriveapp:main
Conversation
Adds browser-based OAuth authentication for GitHub Copilot as an alternative to manually providing a PAT. Uses GitHub's OAuth 2.0 Device Authorization Grant (RFC 8628) with the same client ID as OpenCode.

- New module `github_copilot_oauth`: device code request, background token polling, credential persistence with 0600 permissions
- `LlmManager`: prefer OAuth credentials over static PAT when both exist, lazy load from disk on init
- API: start/status/delete endpoints for Copilot OAuth sessions, background poller with configurable interval, `ProviderStatus` updated with `github_copilot_oauth` field
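The background token polling in this flow has to interpret GitHub's device-flow poll responses. A minimal, illustrative sketch of that decision logic follows — the names are hypothetical, not the module's actual API (and the real implementation is Rust, not TypeScript):

```typescript
// Sketch of RFC 8628 poll-response handling. A device-flow client polls the
// token endpoint and must react to the error codes the server returns.
type PollOutcome =
  | { kind: "success"; accessToken: string }
  | { kind: "pending"; nextDelayMs: number }
  | { kind: "failed"; error: string };

// GitHub's token endpoint returns either an access_token or an `error`
// code such as "authorization_pending", "slow_down", or "expired_token".
function classifyPollResponse(
  body: { access_token?: string; error?: string },
  currentDelayMs: number,
): PollOutcome {
  if (body.access_token) {
    return { kind: "success", accessToken: body.access_token };
  }
  switch (body.error) {
    case "authorization_pending":
      // User hasn't approved yet: keep polling at the current interval.
      return { kind: "pending", nextDelayMs: currentDelayMs };
    case "slow_down":
      // RFC 8628 §3.5: the client must add 5 seconds to its poll interval.
      return { kind: "pending", nextDelayMs: currentDelayMs + 5000 };
    default:
      // expired_token, access_denied, or anything unexpected ends the flow.
      return { kind: "failed", error: body.error ?? "unknown_error" };
  }
}
```

Anything other than `authorization_pending`/`slow_down` terminates the flow, which is why an expiry-aware loop (rather than a fixed retry count) matters on the caller side.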
…ange

The previous client ID (Ov23li8tweQw6odWQebz) was not whitelisted by GitHub for the copilot_internal/v2/token endpoint, causing 404s on every token exchange. Switch to 01ab8ac9400c4e429b23 (the VS Code GitHub Copilot extension app), which GitHub has whitelisted. This is consistent with how neovim/copilot.vim and other third-party Copilot integrations handle the same constraint.

Also fix three pre-existing `collapsible_if` clippy warnings in model.rs by collapsing nested if-let/if blocks using `&&` as clippy suggests.
- Wrap the Markdown component with `React.memo` to skip re-renders when props are unchanged
- Hoist remarkPlugins/rehypePlugins/components to module-level stable refs so referential equality holds across renders
- Move input state into `FloatingChatInput` so typing no longer triggers a re-render of the full timeline
- Replace the `useEffect([value])` reflow pattern with a single on-mount native input event listener

Also add `.spacebot-dev/` to .gitignore to keep dev-instance config out of version control.
…ader

Tracks estimated token usage per channel and surfaces it in the UI:

- Compactor emits a `ContextUsage` ProcessEvent after each turn
- `StatusBlock` grows `estimated_tokens`/`context_window`/`usage_ratio` fields, populated via the `channel_states` snapshot in the channels API
- `ApiState` fans out `ContextUsage` as an SSE event; `useChannelLiveState` consumes it
- `AgentTopBar` and `ChannelDetail` show '<used>k / <window>k' with neutral/yellow/red colouring at 60%/80% thresholds
- `tokens.ts` util provides `formatTokens` and `getTokenUsageColor` helpers
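The `formatTokens`/`getTokenUsageColor` helpers described above could look roughly like this — a plausible sketch of their shapes based on the '<used>k / <window>k' display and the 60%/80% thresholds, not the PR's actual `tokens.ts`:

```typescript
// Format a raw token count as a rounded thousands figure, e.g. 128000 -> "128k".
function formatTokens(tokens: number): string {
  return `${Math.round(tokens / 1000)}k`;
}

// Map a usage ratio (estimated_tokens / context_window) to a display tier:
// neutral below 60%, yellow from 60%, red from 80%.
function getTokenUsageColor(usageRatio: number): "neutral" | "yellow" | "red" {
  if (usageRatio >= 0.8) return "red";
  if (usageRatio >= 0.6) return "yellow";
  return "neutral";
}
```

The real util presumably returns CSS class names rather than tier labels, but the threshold logic is the part the review comments below depend on.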
No actionable comments were generated in the recent review. 🎉

📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Walkthrough

Adds GitHub Copilot OAuth (device/browser) support, context-usage telemetry and SSE events, frontend Copilot UI and token indicators, compactor emission of context-usage events, OpenAI Responses parsing changes, and assorted UI/component/typing updates across backend and interface layers.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 3 passed ✅

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/llm/manager.rs (1)
72-110: ⚠️ Potential issue | 🟠 Major

`set_instance_dir()` still never updates `self.instance_dir`.

The new Copilot OAuth loading works, but every later persistence path in this type still checks `self.instance_dir`. That means a manager built via `new()` + `set_instance_dir()` can load Copilot OAuth state once, yet it still won't persist refreshed OAuth credentials or exchanged Copilot API tokens back to disk. This needs an interior-mutable `instance_dir` (or a `&mut self` initializer) before the new OAuth flow is reliable across refreshes/restarts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/llm/manager.rs` around lines 72 - 110, set_instance_dir doesn't store the passed instance_dir into the struct, so later persistence (e.g., save_credentials, Copilot token refresh) still sees no instance_dir; change the Manager's instance_dir field to interior-mutable (e.g., RwLock<Option<PathBuf>> or Mutex<Option<PathBuf>>) and in set_instance_dir call set the stored instance_dir (clone or move as appropriate) — update all methods that read self.instance_dir to await/read the lock (e.g., where save_credentials, exchange_copilot_token, or other persistence paths check self.instance_dir) so refreshed OAuth credentials and tokens are persisted to disk.
🧹 Nitpick comments (2)
src/llm/model.rs (1)
3711-3718: Error message may be misleading when output is not an array.

The check `response["output"].as_array().is_some()` determines success, but the error message refers to "missing response.completed event." Given the earlier default initialization (`"output": []`), this error would only trigger if something sets `output` to a non-array value, which is unlikely. The error message doesn't accurately describe this edge case. Consider updating the message to reflect what's actually being checked, or simplifying the logic since the error path may be unreachable given the current flow.
💡 Suggested clarification
```diff
 if response["output"].as_array().is_some() {
     Ok(response)
 } else {
     Err(CompletionError::ProviderError(format!(
-        "{provider_label} Responses SSE stream missing response.completed event.\nBody: {}",
+        "{provider_label} Responses SSE stream has invalid output format (expected array).\nBody: {}",
         truncate_body(response_text)
     )))
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/llm/model.rs` around lines 3711 - 3718, The current check uses response["output"].as_array().is_some() but the error text says "missing response.completed event," which is misleading; update the Err branch in the block that constructs CompletionError (using provider_label, response_text, truncate_body, and response) to either change the condition or improve the message to accurately reflect the cause (e.g., "response.output is not an array" or "unexpected response.output type") so it matches the actual check, or remove the unreachable branch if you confirm output can never be non-array given the earlier initialization.

src/agent/compactor.rs (1)
60-64: Avoid double history scans for token estimation in the same turn.
`check_and_compact()` estimates tokens, then `emit_context_usage()` estimates again immediately after. Consider extracting a shared usage snapshot helper and reusing it to reduce per-turn overhead.

Also applies to: 117-123
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/compactor.rs` around lines 60 - 64, The code performs two back-to-back scans of self.history via estimate_history_tokens in check_and_compact() and emit_context_usage(); extract a single shared snapshot helper (e.g., get_history_usage_snapshot or history_usage_snapshot) that takes &self and context_window and returns (usage, estimated_tokens) by acquiring history.read().await once, then call that helper from both check_and_compact() and emit_context_usage() (or have check_and_compact() return the snapshot to be forwarded) so both functions reuse the same estimated_tokens and avoid double scans; reference functions: check_and_compact, emit_context_usage, estimate_history_tokens, and history.read().await.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@interface/src/hooks/useChannelLiveState.ts`:
- Around line 195-202: The contextUsage object is being created without ensuring
snapshot.usage_ratio exists, causing undefined values and NaN in downstream
math; update the guard in useChannelLiveState so it checks all three telemetry
fields (snapshot.estimated_tokens, snapshot.context_window, and
snapshot.usage_ratio) before constructing contextUsage, and when present cast
each to number (estimatedTokens, contextWindow, usageRatio) to avoid storing
undefined; then leave the assignment to next[channelId] = { ...existing,
workers, branches, contextUsage } unchanged.
In `@interface/src/router.tsx`:
- Around line 75-77: The code uses getPortalSessionId(agentId) unconditionally
which makes the portal-session fallback apply on every agent page; change the
logic so the portal-session fallback is only used when the current route is the
chat route (so preserve "channel detail > chat tab > nothing"). Locate the
calculation of relevantChannelId (symbols: relevantChannelId, channelIdFromPath,
getPortalSessionId, contextUsage, liveStates) and modify it to check the current
route (e.g., a route/tab indicator or location/pathname) and only call/assign
getPortalSessionId(agentId) when that route indicates the chat view; otherwise
set relevantChannelId to channelIdFromPath or null so contextUsage reflects the
intended behavior. Ensure you don't call getPortalSessionId on non-chat routes.
In `@interface/src/routes/ChannelDetail.tsx`:
- Line 355: The comment in ChannelDetail.tsx mentions "token usage" but the JSX
only renders activity indicators, typing status, and the cortex toggle; update
the code to either (A) add a token usage UI element inside the right-side block
(in the ChannelDetail component) — e.g., render a TokenUsageBadge/element
showing currentTokenCount and tokenLimit obtained from props/state (or from the
channel/store) with accessible text/tooltip — or (B) remove "token usage" from
the comment if you don't want to display it; modify the JSX where the comment
sits so the comment and rendered UI stay consistent and reference the
ChannelDetail component and the right-side indicators section when making the
change.
In `@interface/src/routes/Settings.tsx`:
- Around line 582-641: monitorCopilotOAuth currently hard-stops after 360
attempts (~12 minutes) causing false timeouts; modify monitorCopilotOAuth (and
the device-code flow) to use the server-provided expiry instead of a fixed
attempt count: have the device-code response return/expose expires_in to the
client and pass it into monitorCopilotOAuth, then replace the fixed for-loop
logic (in monitorCopilotOAuth) with a time-based loop that checks Date.now()
against start + expires_in*1000 (or otherwise trust
api.copilotOAuthBrowserStatus until it returns done) while still honoring
signal.aborted; ensure cleanup calls (setCopilotDeviceCodeInfo,
setCopilotDeviceCodeCopied, setCopilotOAuthMessage, setIsPollingCopilotOAuth)
remain the same.
- Around line 673-701: The handler can resume after the await even if the dialog
was closed, so add a short-lived session guard: create a per-request token
(e.g., copilotOAuthSessionRef or a local sessionId) before calling
startCopilotOAuthMutation.mutateAsync({ ... }) and store it in a ref; after the
await, verify the stored token still matches (or that the dialog is still open)
before calling copilotOAuthAbortRef.current?.abort(), creating a new
AbortController, calling setCopilotDeviceCodeInfo, or invoking
monitorCopilotOAuth(result.state, ...). Ensure the cleanup path clears or
changes the session token so stale results are ignored.
In `@src/agent/channel.rs`:
- Around line 1550-1553: The code currently calls
self.compactor.emit_context_usage().await only in two non-suppressed paths, but
suppressed early-return branches for Observe/MentionOnly can mutate history and
exit without emitting context usage; locate the branches that return early when
a turn is suppressed (the Observe/MentionOnly handling code and the other
suppressed-return branch referenced near the other emit call) and ensure you
call self.compactor.emit_context_usage().await (handling the Result as in the
existing calls) before each early return (or install a RAII/drop guard that
calls emit_context_usage() on exit); also apply the same fix at the other
occurrence where emit_context_usage() is used to cover the code paths noted (the
second location analogous to lines 2061-2064).
In `@src/api/channels.rs`:
- Around line 248-271: The code currently mutates the shared StatusBlock
(status_block) for telemetry; instead, read and clone the block locally and
serialize that clone so you never write into the shared structure: replace the
write lock on status_block (status_block.write().await) with a read and clone
(let mut local_block = status_block.read().await.clone()), set
local_block.estimated_tokens/context_window/usage_ratio from the computed
token_data (or reset them to 0/appropriate defaults if channel_state is
missing), and only compute usage_ratio when context_window > 0 (otherwise set
usage_ratio = 0 or a safe sentinel) before calling
serde_json::to_value(&local_block). This uses StatusBlock, status_block,
channel_states/channel_state, crate::agent::compactor::estimate_history_tokens,
and serde_json::to_value as reference points.
In `@src/api/providers.rs`:
- Around line 1856-1880: The removal flow for "github-copilot-oauth" in
src/api/providers.rs only deletes credential files and clears manager caches but
doesn't update routing/default model settings, so finalize_copilot_oauth()'s
changes can leave the system pointing at unusable github-copilot/... models;
update the removal path to either (a) reset routing and defaults to a
known-working provider/model (e.g., fallback to openai/whatever default) by
invoking the same logic used by finalize_copilot_oauth() in reverse (update
stored default model/agent config and persist it) and ensure state.llm_manager
is informed, or (b) prevent removal when the current default agent/model is a
github-copilot model unless a PAT or alternative provider is configured; modify
the code around the credential removal block (references:
finalize_copilot_oauth(), state.llm_manager, clear_copilot_oauth_credentials(),
clear_copilot_token()) to implement one of these fixes and return an appropriate
error message if blocking removal.
- Around line 1000-1021: Abandoned Copilot device OAuth flows spawn background
tasks that keep polling GitHub until expires_at because there is no server-side
cancellation; add a cancellation path and make the background worker respect it
by (1) exposing a new API handler that accepts the state_key and flips the
session's status from DeviceOAuthSessionStatus::Pending to a new
Cancelled/Aborted status (or sets an explicit cancelled flag on the
DeviceOAuthSession stored in COPILOT_DEVICE_OAUTH_SESSIONS), and (2) modify
run_copilot_device_oauth_background to read the session from
COPILOT_DEVICE_OAUTH_SESSIONS (by state_key) at each poll and immediately stop
polling/return if status != Pending (or cancelled flag is set). Ensure the
cancel endpoint updates the same in-memory map and that
run_copilot_device_oauth_background uses the same state_key/state lookup to
detect cancellation promptly.
In `@src/api/system.rs`:
- Around line 128-137: Fix inconsistent indentation of match arms for ApiEvent
variants in the match that maps events to string keys: align the arms for
ApiEvent::ToolCompleted, ApiEvent::ConfigReloaded, ApiEvent::AgentMessageSent,
ApiEvent::AgentMessageReceived, ApiEvent::TaskUpdated,
ApiEvent::OpenCodePartUpdated, ApiEvent::WorkerText,
ApiEvent::CortexChatMessage, and ApiEvent::ContextUsage so they use the same
leading whitespace as the earlier arms (e.g., the arms for
ApiEvent::ToolStarted, ApiEvent::ToolStartedWithArgs, etc.); update the spacing
before the variant names in that match block to be consistent with the rest of
the match expression.
In `@src/github_copilot_oauth.rs`:
- Around line 81-102: The reqwest::Client is created without timeouts in both
request_device_code() and poll_device_token(), which can hang indefinitely;
update both functions to build the client with explicit timeouts (use
reqwest::Client::builder()) setting a connect_timeout (e.g., 10s) and an overall
request timeout (e.g., 30s) via Duration::from_secs, then call build() and use
that client for the requests; ensure you import std::time::Duration and handle
any build errors consistently with the existing context/error patterns in those
functions.
In `@src/llm/model.rs`:
- Around line 938-955: The streaming path forgets to set reasoning.effort, so
replicate the same mapping from call_openai_responses in
stream_openai_responses: compute effort via self.routing.as_ref().map(|r|
r.thinking_effort_for_model(&self.model_name)).unwrap_or("auto"), map it to
openai_effort ("max"|"high" => "high", "medium" => "medium", "low" => "low", _
=> "medium"), and add body["reasoning"] = serde_json::json!({ "effort":
openai_effort }); when building the streaming request body in
stream_openai_responses to match the non-streaming behavior and avoid empty
outputs.
---
Outside diff comments:
In `@src/llm/manager.rs`:
- Around line 72-110: set_instance_dir doesn't store the passed instance_dir
into the struct, so later persistence (e.g., save_credentials, Copilot token
refresh) still sees no instance_dir; change the Manager's instance_dir field to
interior-mutable (e.g., RwLock<Option<PathBuf>> or Mutex<Option<PathBuf>>) and
in set_instance_dir call set the stored instance_dir (clone or move as
appropriate) — update all methods that read self.instance_dir to await/read the
lock (e.g., where save_credentials, exchange_copilot_token, or other persistence
paths check self.instance_dir) so refreshed OAuth credentials and tokens are
persisted to disk.
---
Nitpick comments:
In `@src/agent/compactor.rs`:
- Around line 60-64: The code performs two back-to-back scans of self.history
via estimate_history_tokens in check_and_compact() and emit_context_usage();
extract a single shared snapshot helper (e.g., get_history_usage_snapshot or
history_usage_snapshot) that takes &self and context_window and returns (usage,
estimated_tokens) by acquiring history.read().await once, then call that helper
from both check_and_compact() and emit_context_usage() (or have
check_and_compact() return the snapshot to be forwarded) so both functions reuse
the same estimated_tokens and avoid double scans; reference functions:
check_and_compact, emit_context_usage, estimate_history_tokens, and
history.read().await.
In `@src/llm/model.rs`:
- Around line 3711-3718: The current check uses
response["output"].as_array().is_some() but the error text says "missing
response.completed event," which is misleading; update the Err branch in the
block that constructs CompletionError (using provider_label, response_text,
truncate_body, and response) to either change the condition or improve the
message to accurately reflect the cause (e.g., "response.output is not an array"
or "unexpected response.output type") so it matches the actual check, or remove
the unreachable branch if you confirm output can never be non-array given the
earlier initialization.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e18f69a2-a6f4-494c-a95d-3c5edce9a218
📒 Files selected for processing (28)
- .gitignore
- interface/src/api/client.ts
- interface/src/api/schema.d.ts
- interface/src/api/types.ts
- interface/src/components/Markdown.tsx
- interface/src/components/ModelSelect.tsx
- interface/src/components/WebChatPanel.tsx
- interface/src/hooks/useChannelLiveState.ts
- interface/src/lib/providerIcons.tsx
- interface/src/router.tsx
- interface/src/routes/ChannelDetail.tsx
- interface/src/routes/Settings.tsx
- interface/src/utils/tokens.ts
- src/agent/channel.rs
- src/agent/channel_history.rs
- src/agent/compactor.rs
- src/agent/cortex.rs
- src/agent/status.rs
- src/api/channels.rs
- src/api/providers.rs
- src/api/server.rs
- src/api/state.rs
- src/api/system.rs
- src/config/load.rs
- src/github_copilot_oauth.rs
- src/lib.rs
- src/llm/manager.rs
- src/llm/model.rs
```typescript
const contextUsage = snapshot.estimated_tokens !== undefined && snapshot.context_window !== undefined
  ? {
      estimatedTokens: snapshot.estimated_tokens as number,
      contextWindow: snapshot.context_window as number,
      usageRatio: snapshot.usage_ratio as number,
    }
  : null;
next[channelId] = { ...existing, workers, branches, contextUsage };
```
Gate contextUsage on all three telemetry fields.
StatusBlockSnapshot marks usage_ratio optional too, but this branch only checks estimated_tokens and context_window. If a snapshot arrives without usage_ratio, the cast stores undefined and the top-bar/header math turns into NaN/wrong color classes.
Suggested fix
```diff
-const contextUsage = snapshot.estimated_tokens !== undefined && snapshot.context_window !== undefined
+const contextUsage =
+  snapshot.estimated_tokens !== undefined &&
+  snapshot.context_window !== undefined &&
+  snapshot.usage_ratio !== undefined
   ? {
       estimatedTokens: snapshot.estimated_tokens as number,
       contextWindow: snapshot.context_window as number,
       usageRatio: snapshot.usage_ratio as number,
     }
   : null;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const contextUsage =
  snapshot.estimated_tokens !== undefined &&
  snapshot.context_window !== undefined &&
  snapshot.usage_ratio !== undefined
    ? {
        estimatedTokens: snapshot.estimated_tokens as number,
        contextWindow: snapshot.context_window as number,
        usageRatio: snapshot.usage_ratio as number,
      }
    : null;
next[channelId] = { ...existing, workers, branches, contextUsage };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/hooks/useChannelLiveState.ts` around lines 195 - 202, The
contextUsage object is being created without ensuring snapshot.usage_ratio
exists, causing undefined values and NaN in downstream math; update the guard in
useChannelLiveState so it checks all three telemetry fields
(snapshot.estimated_tokens, snapshot.context_window, and snapshot.usage_ratio)
before constructing contextUsage, and when present cast each to number
(estimatedTokens, contextWindow, usageRatio) to avoid storing undefined; then
leave the assignment to next[channelId] = { ...existing, workers, branches,
contextUsage } unchanged.
```typescript
// Determine the relevant channel ID: channel detail > chat tab > nothing
const relevantChannelId = channelIdFromPath || getPortalSessionId(agentId);
const contextUsage = liveStates[relevantChannelId]?.contextUsage ?? null;
```
Restrict the portal-session fallback to the chat route.
getPortalSessionId(agentId) runs on every agent page right now, so once a portal session exists the top bar will show chat token usage while you're on memories/config/tasks/etc. That doesn't match the channel detail > chat tab > nothing behavior in the comment.
Suggested fix
```diff
-const relevantChannelId = channelIdFromPath || getPortalSessionId(agentId);
+const isChatRoute = /\/agents\/[^/]+\/chat(?:\/|$)/.test(location.pathname);
+const relevantChannelId =
+  channelIdFromPath || (isChatRoute ? getPortalSessionId(agentId) : null);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/router.tsx` around lines 75 - 77, The code uses
getPortalSessionId(agentId) unconditionally which makes the portal-session
fallback apply on every agent page; change the logic so the portal-session
fallback is only used when the current route is the chat route (so preserve
"channel detail > chat tab > nothing"). Locate the calculation of
relevantChannelId (symbols: relevantChannelId, channelIdFromPath,
getPortalSessionId, contextUsage, liveStates) and modify it to check the current
route (e.g., a route/tab indicator or location/pathname) and only call/assign
getPortalSessionId(agentId) when that route indicates the chat view; otherwise
set relevantChannelId to channelIdFromPath or null so contextUsage reflects the
intended behavior. Ensure you don't call getPortalSessionId on non-chat routes.
```diff
 )}

-{/* Right side: activity indicators + typing + cortex toggle */}
+{/* Right side: activity indicators + typing + token usage + cortex toggle */}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if ChannelDetail actually renders token/context usage somewhere
rg -n "contextUsage|context_usage|formatTokens|getTokenUsageColor" interface/src/routes/ChannelDetail.tsx
```

Repository: spacedriveapp/spacebot
Length of output: 48

🏁 Script executed:

```shell
# Get context around line 355 in ChannelDetail.tsx
sed -n '340,370p' interface/src/routes/ChannelDetail.tsx
```

Repository: spacedriveapp/spacebot
Length of output: 1380

🏁 Script executed:

```shell
# Search for JSX that renders indicators/right side of the channel
rg -A 20 -B 5 "Right side|activity indicators" interface/src/routes/ChannelDetail.tsx
```

Repository: spacedriveapp/spacebot
Length of output: 1179

🏁 Script executed:

```shell
# Get more lines to see the complete right-side indicators section
sed -n '355,410p' interface/src/routes/ChannelDetail.tsx
```

Repository: spacedriveapp/spacebot
Length of output: 2390
Comment mentions "token usage" but rendering is not implemented.
The comment at line 355 was updated to include "token usage" in the list of indicators, but the JSX renders only activity indicators, typing status, and the cortex toggle. Token usage display is not implemented in this section. Either add the token usage rendering code or update the comment to remove this reference.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/routes/ChannelDetail.tsx` at line 355, The comment in
ChannelDetail.tsx mentions "token usage" but the JSX only renders activity
indicators, typing status, and the cortex toggle; update the code to either (A)
add a token usage UI element inside the right-side block (in the ChannelDetail
component) — e.g., render a TokenUsageBadge/element showing currentTokenCount
and tokenLimit obtained from props/state (or from the channel/store) with
accessible text/tooltip — or (B) remove "token usage" from the comment if you
don't want to display it; modify the JSX where the comment sits so the comment
and rendered UI stay consistent and reference the ChannelDetail component and
the right-side indicators section when making the change.
```typescript
const monitorCopilotOAuth = async (stateToken: string, signal: AbortSignal) => {
  setIsPollingCopilotOAuth(true);
  setCopilotOAuthMessage(null);
  try {
    for (let attempt = 0; attempt < 360; attempt += 1) {
      if (signal.aborted) return;
      const status = await api.copilotOAuthBrowserStatus(stateToken);
      if (signal.aborted) return;
      if (status.done) {
        setCopilotDeviceCodeInfo(null);
        setCopilotDeviceCodeCopied(false);
        if (status.success) {
          setCopilotOAuthMessage({
            text: status.message || "GitHub Copilot OAuth configured.",
            type: "success",
          });
          queryClient.invalidateQueries({ queryKey: ["providers"] });
          setTimeout(() => {
            queryClient.invalidateQueries({ queryKey: ["agents"] });
            queryClient.invalidateQueries({ queryKey: ["overview"] });
          }, 3000);
        } else {
          setCopilotOAuthMessage({
            text: status.message || "Sign-in failed.",
            type: "error",
          });
        }
        return;
      }
      await new Promise((resolve) => {
        const onAbort = () => {
          clearTimeout(timer);
          resolve(undefined);
        };
        const timer = setTimeout(() => {
          signal.removeEventListener("abort", onAbort);
          resolve(undefined);
        }, 2000);
        signal.addEventListener("abort", onAbort, { once: true });
      });
    }
    if (signal.aborted) return;
    setCopilotDeviceCodeInfo(null);
    setCopilotDeviceCodeCopied(false);
    setCopilotOAuthMessage({
      text: "Sign-in timed out. Please try again.",
      type: "error",
    });
  } catch (error: any) {
    if (signal.aborted) return;
    setCopilotDeviceCodeInfo(null);
    setCopilotDeviceCodeCopied(false);
    setCopilotOAuthMessage({
      text: `Failed to verify sign-in: ${error.message}`,
      type: "error",
    });
  } finally {
    setIsPollingCopilotOAuth(false);
  }
};
```
Don't hard-stop the Copilot device flow at 12 minutes.
This loop always times out after 360 * 2s = 720s, but the backend starts Copilot sessions with expires_in from the device-code response and defaults that to 900 seconds. Users who complete authorization in the last three minutes will get a false timeout here while the server still considers the code valid. Return expires_in to the client, or let the server status endpoint be the source of truth for expiry instead of using a fixed attempt count.
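One way to make the loop expiry-driven, as a sketch (the helper and its injected clock are hypothetical; the real fix would thread `expires_in` from the device-code response into the existing monitor):

```typescript
// Poll until the server reports completion, the caller aborts, or the
// server-provided expiry window (expires_in seconds) runs out — instead of
// a fixed attempt count that can disagree with the server's deadline.
async function pollUntilExpiry(
  check: () => Promise<{ done: boolean }>,
  expiresInSeconds: number,
  intervalMs: number,
  signal: AbortSignal,
  now: () => number = Date.now, // injectable clock for testing
): Promise<"done" | "aborted" | "expired"> {
  const deadline = now() + expiresInSeconds * 1000;
  while (now() < deadline) {
    if (signal.aborted) return "aborted";
    const status = await check();
    if (status.done) return "done";
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return "expired";
}
```

With this shape the UI times out exactly when the server-side device code does, so a user who authorizes in the final minutes is not cut off early.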
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/routes/Settings.tsx` around lines 582 - 641,
monitorCopilotOAuth currently hard-stops after 360 attempts (~12 minutes)
causing false timeouts; modify monitorCopilotOAuth (and the device-code flow) to
use the server-provided expiry instead of a fixed attempt count: have the
device-code response return/expose expires_in to the client and pass it into
monitorCopilotOAuth, then replace the fixed for-loop logic (in
monitorCopilotOAuth) with a time-based loop that checks Date.now() against start
+ expires_in*1000 (or otherwise trust api.copilotOAuthBrowserStatus until it
returns done) while still honoring signal.aborted; ensure cleanup calls
(setCopilotDeviceCodeInfo, setCopilotDeviceCodeCopied, setCopilotOAuthMessage,
setIsPollingCopilotOAuth) remain the same.
```typescript
const handleStartCopilotOAuth = async () => {
  setCopilotOAuthMessage(null);
  setCopilotDeviceCodeInfo(null);
  setCopilotDeviceCodeCopied(false);
  try {
    const result = await startCopilotOAuthMutation.mutateAsync({
      model: COPILOT_OAUTH_DEFAULT_MODEL,
    });
    if (!result.success || !result.user_code || !result.verification_url || !result.state) {
      setCopilotOAuthMessage({
        text: result.message || "Failed to start device sign-in",
        type: "error",
      });
      return;
    }

    copilotOAuthAbortRef.current?.abort();
    const abort = new AbortController();
    copilotOAuthAbortRef.current = abort;

    setCopilotDeviceCodeInfo({
      userCode: result.user_code,
      verificationUrl: result.verification_url,
    });
    void monitorCopilotOAuth(result.state, abort.signal);
  } catch (error: any) {
    setCopilotOAuthMessage({ text: `Failed: ${error.message}`, type: "error" });
  }
};
```
Guard the post-await path when the dialog has already been closed.
If the user closes the dialog before startCopilotOAuthMutation.mutateAsync() resolves, the cleanup in Lines 720-729 runs first, but this handler still proceeds to set copilotDeviceCodeInfo and launch monitorCopilotOAuth afterward. That revives hidden state for an abandoned flow and starts polling again after cancel. Track a request/session token or current open state before applying the result.
Also applies to: 720-735
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/routes/Settings.tsx` around lines 673 - 701, The handler can
resume after the await even if the dialog was closed, so add a short-lived
session guard: create a per-request token (e.g., copilotOAuthSessionRef or a
local sessionId) before calling startCopilotOAuthMutation.mutateAsync({ ... })
and store it in a ref; after the await, verify the stored token still matches
(or that the dialog is still open) before calling
copilotOAuthAbortRef.current?.abort(), creating a new AbortController, calling
setCopilotDeviceCodeInfo, or invoking monitorCopilotOAuth(result.state, ...).
Ensure the cleanup path clears or changes the session token so stale results are
ignored.
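The session-token guard the prompt describes can be sketched on its own (a simplified stand-in: a module-level counter plays the role of `copilotOAuthSessionRef`; the mutation and setter calls are elided):

```typescript
// Monotonic session counter: closing the dialog bumps it, so any result
// that resolves afterwards sees a mismatched token and is discarded.
let currentSession = 0;

function closeDialog() {
  currentSession++; // invalidates every in-flight attempt
}

async function startOAuth(start: () => Promise<{ state: string }>): Promise<string | null> {
  const session = ++currentSession; // token for this attempt
  const result = await start();
  if (session !== currentSession) return null; // dialog closed mid-flight: ignore
  return result.state; // safe to set state and launch the monitor here
}
```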
```rust
COPILOT_DEVICE_OAUTH_SESSIONS.write().await.insert(
    state_key.clone(),
    DeviceOAuthSession {
        expires_at,
        status: DeviceOAuthSessionStatus::Pending,
    },
);

let state_clone = state.clone();
let state_key_clone = state_key.clone();
let device_code_value = device_code.device_code.clone();
tokio::spawn(async move {
    run_copilot_device_oauth_background(
        state_clone,
        state_key_clone,
        device_code_value,
        poll_interval,
        expires_at,
        model,
    )
    .await;
});
```
Abandoned Copilot sign-ins keep polling GitHub until expiry.
This background task has no server-side cancellation path. Closing the dialog only stops the browser-side poller in interface/src/routes/Settings.tsx Lines 720-735, so every abandoned attempt keeps hitting GitHub until expires_at. That creates unnecessary outbound traffic and leaves stale pending sessions around. Please add a cancel endpoint or another way to flip the session out of Pending when the client abandons the flow.
Also applies to: 1032-1109
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/providers.rs` around lines 1000 - 1021, Abandoned Copilot device
OAuth flows spawn background tasks that keep polling GitHub until expires_at
because there is no server-side cancellation; add a cancellation path and make
the background worker respect it by (1) exposing a new API handler that accepts
the state_key and flips the session's status from
DeviceOAuthSessionStatus::Pending to a new Cancelled/Aborted status (or sets an
explicit cancelled flag on the DeviceOAuthSession stored in
COPILOT_DEVICE_OAUTH_SESSIONS), and (2) modify
run_copilot_device_oauth_background to read the session from
COPILOT_DEVICE_OAUTH_SESSIONS (by state_key) at each poll and immediately stop
polling/return if status != Pending (or cancelled flag is set). Ensure the
cancel endpoint updates the same in-memory map and that
run_copilot_device_oauth_background uses the same state_key/state lookup to
detect cancellation promptly.
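The shape of that fix, language aside (the server code is Rust/tokio; this TypeScript sketch is purely conceptual and every name in it is illustrative): a cancel endpoint flips the stored session out of Pending, and the poller re-reads the session on every iteration and stops as soon as it is no longer Pending.

```typescript
type SessionStatus = "pending" | "completed" | "cancelled" | "expired";
const sessions = new Map<string, { status: SessionStatus }>();

// Cancel endpoint: only a pending session can be cancelled.
function cancelSession(stateKey: string): void {
  const s = sessions.get(stateKey);
  if (s && s.status === "pending") s.status = "cancelled";
}

// Background poller: checks the shared map before each poll so a
// cancellation is observed within one interval.
async function pollLoop(stateKey: string, pollOnce: () => Promise<boolean>): Promise<void> {
  for (;;) {
    const s = sessions.get(stateKey);
    if (!s || s.status !== "pending") return; // cancelled or removed: stop
    if (await pollOnce()) {
      s.status = "completed";
      return;
    }
  }
}
```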
```rust
// GitHub Copilot OAuth credentials are stored as a separate JSON file,
// not in the TOML config, so handle removal separately (like openai-chatgpt).
if provider == "github-copilot-oauth" {
    let instance_dir = (**state.instance_dir.load()).clone();
    let cred_path = crate::github_copilot_oauth::credentials_path(&instance_dir);
    if cred_path.exists() {
        tokio::fs::remove_file(&cred_path)
            .await
            .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    }
    // Also clear the cached Copilot API token since it was derived from OAuth.
    let token_path = crate::github_copilot_auth::credentials_path(&instance_dir);
    if token_path.exists() {
        tokio::fs::remove_file(&token_path)
            .await
            .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    }
    if let Some(manager) = state.llm_manager.read().await.as_ref() {
        manager.clear_copilot_oauth_credentials().await;
        manager.clear_copilot_token().await;
    }
    return Ok(Json(ProviderUpdateResponse {
        success: true,
        message: "GitHub Copilot OAuth credentials removed".into(),
    }));
```
Disconnecting Copilot OAuth leaves routing pointed at Copilot models.
finalize_copilot_oauth() rewrites defaults and the default agent to the selected github-copilot/... model, but this removal path only deletes credentials/token files. If the user disconnects Copilot and does not also have a PAT configured, new conversations can keep routing to a provider that can no longer authenticate. Reset the routing to a working model, or block removal while Copilot is still selected.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/providers.rs` around lines 1856 - 1880, The removal flow for
"github-copilot-oauth" in src/api/providers.rs only deletes credential files and
clears manager caches but doesn't update routing/default model settings, so
finalize_copilot_oauth()'s changes can leave the system pointing at unusable
github-copilot/... models; update the removal path to either (a) reset routing
and defaults to a known-working provider/model (e.g., fallback to
openai/whatever default) by invoking the same logic used by
finalize_copilot_oauth() in reverse (update stored default model/agent config
and persist it) and ensure state.llm_manager is informed, or (b) prevent removal
when the current default agent/model is a github-copilot model unless a PAT or
alternative provider is configured; modify the code around the credential
removal block (references: finalize_copilot_oauth(), state.llm_manager,
clear_copilot_oauth_credentials(), clear_copilot_token()) to implement one of
these fixes and return an appropriate error message if blocking removal.
```rust
 ApiEvent::ToolCompleted { .. } => "tool_completed",
 ApiEvent::ConfigReloaded => "config_reloaded",
 ApiEvent::AgentMessageSent { .. } => "agent_message_sent",
 ApiEvent::AgentMessageReceived { .. } => "agent_message_received",
 ApiEvent::TaskUpdated { .. } => "task_updated",
 ApiEvent::OpenCodePartUpdated { .. } => "opencode_part_updated",
 ApiEvent::WorkerText { .. } => "worker_text",
 ApiEvent::CortexChatMessage { .. } => "cortex_chat_message",
 ApiEvent::ContextUsage { .. } => "context_usage",
};
```
Inconsistent indentation on these match arms.
Lines 128-137 have an extra leading space compared to the preceding arms (lines 117-127). This appears to be a formatting error.
🧹 Proposed fix for indentation

```diff
        ApiEvent::ToolStarted { .. } => "tool_started",
-         ApiEvent::ToolCompleted { .. } => "tool_completed",
-         ApiEvent::ConfigReloaded => "config_reloaded",
-         ApiEvent::AgentMessageSent { .. } => "agent_message_sent",
-         ApiEvent::AgentMessageReceived { .. } => "agent_message_received",
-         ApiEvent::TaskUpdated { .. } => "task_updated",
-         ApiEvent::OpenCodePartUpdated { .. } => "opencode_part_updated",
-         ApiEvent::WorkerText { .. } => "worker_text",
-         ApiEvent::CortexChatMessage { .. } => "cortex_chat_message",
-         ApiEvent::ContextUsage { .. } => "context_usage",
-         };
+        ApiEvent::ToolCompleted { .. } => "tool_completed",
+        ApiEvent::ConfigReloaded => "config_reloaded",
+        ApiEvent::AgentMessageSent { .. } => "agent_message_sent",
+        ApiEvent::AgentMessageReceived { .. } => "agent_message_received",
+        ApiEvent::TaskUpdated { .. } => "task_updated",
+        ApiEvent::OpenCodePartUpdated { .. } => "opencode_part_updated",
+        ApiEvent::WorkerText { .. } => "worker_text",
+        ApiEvent::CortexChatMessage { .. } => "cortex_chat_message",
+        ApiEvent::ContextUsage { .. } => "context_usage",
+        };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
ApiEvent::ToolCompleted { .. } => "tool_completed",
ApiEvent::ConfigReloaded => "config_reloaded",
ApiEvent::AgentMessageSent { .. } => "agent_message_sent",
ApiEvent::AgentMessageReceived { .. } => "agent_message_received",
ApiEvent::TaskUpdated { .. } => "task_updated",
ApiEvent::OpenCodePartUpdated { .. } => "opencode_part_updated",
ApiEvent::WorkerText { .. } => "worker_text",
ApiEvent::CortexChatMessage { .. } => "cortex_chat_message",
ApiEvent::ContextUsage { .. } => "context_usage",
};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/system.rs` around lines 128 - 137, Fix inconsistent indentation of
match arms for ApiEvent variants in the match that maps events to string keys:
align the arms for ApiEvent::ToolCompleted, ApiEvent::ConfigReloaded,
ApiEvent::AgentMessageSent, ApiEvent::AgentMessageReceived,
ApiEvent::TaskUpdated, ApiEvent::OpenCodePartUpdated, ApiEvent::WorkerText,
ApiEvent::CortexChatMessage, and ApiEvent::ContextUsage so they use the same
leading whitespace as the earlier arms (e.g., the arms for
ApiEvent::ToolStarted, ApiEvent::ToolStartedWithArgs, etc.); update the spacing
before the variant names in that match block to be consistent with the rest of
the match expression.
```rust
pub async fn request_device_code() -> Result<DeviceCodeResponse> {
    let client = reqwest::Client::new();
    let response = client
        .post(DEVICE_CODE_URL)
        .header("Accept", "application/json")
        .form(&[("client_id", CLIENT_ID), ("scope", SCOPE)])
        .send()
        .await
        .context("failed to send GitHub device code request")?;

    let status = response.status();
    let body = response
        .text()
        .await
        .context("failed to read GitHub device code response")?;

    if !status.is_success() {
        anyhow::bail!("GitHub device code request failed ({}): {}", status, body);
    }

    serde_json::from_str::<DeviceCodeResponse>(&body)
        .context("failed to parse GitHub device code response")
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's see the full file structure and examine the two functions mentioned
cat -n src/github_copilot_oauth.rs
```

Repository: spacedriveapp/spacebot
Length of output: 12607

🌐 Web query: reqwest Client default timeout behavior Rust

💡 Result:

- Async `reqwest::Client` (`reqwest::Client::new()` / `reqwest::Client::builder()`): by default there is no total request timeout; a request can wait indefinitely unless you set one via `ClientBuilder::timeout(...)` (or per-request via `RequestBuilder::timeout(...)`). `read_timeout` also defaults to no timeout, and `connect_timeout` defaults to `None` (disabled). [1]
- Blocking `reqwest::blocking::Client`: `ClientBuilder::timeout(...)` defaults to 30 seconds (covers connect/read/write). Pass `None` to disable it. [2]
- Historical note: reqwest's changelog records that a 30s default timeout was introduced in v0.8.0 (and could be disabled with `None`). [3]

Sources: [1] [2] [3]

🏁 Script executed:

```shell
# Search for calls to request_device_code and poll_device_token
rg "request_device_code|poll_device_token" -A 3 -B 3 --type rust
```

Repository: spacedriveapp/spacebot
Length of output: 3353

🏁 Script executed:

```shell
# Look at the actual call sites in providers.rs with more context
rg -B 5 -A 10 'github_copilot_oauth::request_device_code|github_copilot_oauth::poll_device_token' src/api/providers.rs
```

Repository: spacedriveapp/spacebot
Length of output: 1315

🏁 Script executed:

```shell
# Check if there's any timeout wrapper or configuration at the call site
rg -B 15 "github_copilot_oauth::request_device_code" src/api/providers.rs | head -30
```

Repository: spacedriveapp/spacebot
Length of output: 645
Add explicit timeouts to the GitHub OAuth HTTP client calls.
Both request_device_code() and poll_device_token() create a fresh reqwest::Client without timeout configuration. The async reqwest client has no default timeout, so a stalled upstream or broken network path can hang indefinitely—blocking the user's initial OAuth request or pinning a background polling task. Set explicit connect and request timeouts before shipping.
Suggested implementation

```diff
-    let client = reqwest::Client::new();
+    let client = reqwest::Client::builder()
+        .connect_timeout(std::time::Duration::from_secs(10))
+        .timeout(std::time::Duration::from_secs(30))
+        .build()
+        .context("failed to build GitHub OAuth HTTP client")?;
```

Applies to both request_device_code() (line 82) and poll_device_token() (line 125).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
pub async fn request_device_code() -> Result<DeviceCodeResponse> {
    let client = reqwest::Client::builder()
        .connect_timeout(std::time::Duration::from_secs(10))
        .timeout(std::time::Duration::from_secs(30))
        .build()
        .context("failed to build GitHub OAuth HTTP client")?;

    let response = client
        .post(DEVICE_CODE_URL)
        .header("Accept", "application/json")
        .form(&[("client_id", CLIENT_ID), ("scope", SCOPE)])
        .send()
        .await
        .context("failed to send GitHub device code request")?;

    let status = response.status();
    let body = response
        .text()
        .await
        .context("failed to read GitHub device code response")?;

    if !status.is_success() {
        anyhow::bail!("GitHub device code request failed ({}): {}", status, body);
    }

    serde_json::from_str::<DeviceCodeResponse>(&body)
        .context("failed to parse GitHub device code response")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/github_copilot_oauth.rs` around lines 81 - 102, The reqwest::Client is
created without timeouts in both request_device_code() and poll_device_token(),
which can hang indefinitely; update both functions to build the client with
explicit timeouts (use reqwest::Client::builder()) setting a connect_timeout
(e.g., 10s) and an overall request timeout (e.g., 30s) via Duration::from_secs,
then call build() and use that client for the requests; ensure you import
std::time::Duration and handle any build errors consistently with the existing
context/error patterns in those functions.
```rust
// Set reasoning effort for models that support it. Without this,
// some endpoints (notably ChatGPT Plus OAuth) default to "none",
// which produces empty output arrays.
let effort = self
    .routing
    .as_ref()
    .map(|r| r.thinking_effort_for_model(&self.model_name))
    .unwrap_or("auto");
let openai_effort = match effort {
    "max" | "high" => "high",
    "medium" => "medium",
    "low" => "low",
    // "auto" or anything else → "medium" as a safe default that
    // guarantees the model actually produces output.
    _ => "medium",
};
body["reasoning"] = serde_json::json!({ "effort": openai_effort });
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check if stream_openai_responses sets reasoning effort
rg -n -A 10 'fn stream_openai_responses' src/llm/model.rs | head -50
# Verify the body construction in streaming doesn't include reasoning
rg -n '"reasoning"' src/llm/model.rs
```

Repository: spacedriveapp/spacebot
Length of output: 1229

🏁 Script executed:

```shell
#!/bin/bash
# Get the full stream_openai_responses function to see body construction
sed -n '1033,1150p' src/llm/model.rs | head -120
```

Repository: spacedriveapp/spacebot
Length of output: 4590
Add reasoning effort setting to the streaming path.
The reasoning.effort field is set in call_openai_responses (non-streaming, lines 938-955) to prevent "empty output arrays" from endpoints that default to "none". However, the streaming counterpart stream_openai_responses (lines 1033–1127) does not include this setting. Since both paths use the same OpenAI Responses API endpoints, the streaming path could encounter the same issue. Add the same reasoning effort mapping logic to the streaming body construction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/llm/model.rs` around lines 938 - 955, The streaming path forgets to set
reasoning.effort, so replicate the same mapping from call_openai_responses in
stream_openai_responses: compute effort via self.routing.as_ref().map(|r|
r.thinking_effort_for_model(&self.model_name)).unwrap_or("auto"), map it to
openai_effort ("max"|"high" => "high", "medium" => "medium", "low" => "low", _
=> "medium"), and add body["reasoning"] = serde_json::json!({ "effort":
openai_effort }); when building the streaming request body in
stream_openai_responses to match the non-streaming behavior and avoid empty
outputs.
…atPanel Check event.nativeEvent.isComposing before preventing default and submitting, so the confirming Enter commits the IME candidate instead of sending the message.
Move textareaRef.current read inside adjustHeight so it still sees the current element after mount, and change the useEffect dep array from [value] to [] so the listener is added/removed only once instead of on every keystroke.
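The isComposing guard can be isolated into a small predicate (a sketch; the real handler lives in the chat input component and the event shape here is trimmed down to the fields the check needs):

```typescript
// Decide whether an Enter keydown should submit the message.
// While an IME composition is active, the Enter that commits the
// candidate sets nativeEvent.isComposing and must not send.
function shouldSubmitOnEnter(e: {
  key: string;
  shiftKey: boolean;
  nativeEvent: { isComposing: boolean };
}): boolean {
  if (e.key !== "Enter" || e.shiftKey) return false; // Shift+Enter inserts a newline
  if (e.nativeEvent.isComposing) return false;       // IME confirm, not a send
  return true;
}
```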
Summary

- `Compactor` emits a `ContextUsage` `ProcessEvent` after each channel turn, carrying `estimated_tokens`, `context_window`, and `usage_ratio`. `ApiState` fans this out as a `context_usage` SSE event.
- `StatusBlock` also grows these three fields so the initial channel-status snapshot is populated via `channel_states` in the channels API.
- `useChannelLiveState` consumes `context_usage` SSE events and updates per-channel `contextUsage` state.
- `AgentTopBar` reads the active channel's usage and renders `<used>k / <window>k` inline. `ChannelDetail` shows the same in the channel header. Colour thresholds: neutral below 60 %, yellow 60–80 %, red 80 %+.
- `interface/src/utils/tokens.ts` provides `formatTokens` (k-suffix above 1 000) and `getTokenUsageColor` helpers.

Verification

- `./scripts/preflight.sh` ✓
- `./scripts/gate-pr.sh` ✓ (798/798 lib tests, fmt, clippy, integration compile)
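The two helpers in interface/src/utils/tokens.ts plausibly reduce to something like the following sketch; the exact rounding and the colour token strings are assumptions and may differ from the real implementation:

```typescript
// k-suffix formatting for token counts, per the summary: above 1 000
// the count is rendered as e.g. "12k", below that as the raw number.
function formatTokens(n: number): string {
  return n >= 1000 ? `${Math.round(n / 1000)}k` : `${n}`;
}

// Colour thresholds from the summary: neutral below 60 %,
// yellow 60-80 %, red 80 %+.
function getTokenUsageColor(ratio: number): string {
  if (ratio >= 0.8) return "red";
  if (ratio >= 0.6) return "yellow";
  return "neutral";
}
```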