feat: add token usage counter per session #30
Closed
aguung wants to merge 1 commit into
Conversation
Tracks prompt and completion token counts from both OpenAI-compatible and Anthropic SSE streams and accumulates them per session in the frontend store.

Backend:
- `TokenUsage` / `ChatUsageEvent` / `UsageAccumulator` structs in `chat_service.rs` using `serde::Serialize` (no `serde_json::json!` to avoid `clippy::disallowed_methods`)
- `stream_openai_sse`: requests `stream_options.include_usage=true` so the final SSE chunk carries usage; parsed in `parse_openai_sse_line`
- `stream_anthropic_sse`: captures `input_tokens` from `message_start` and `output_tokens` from `message_delta` events
- Emits the `chat-usage` Tauri event after each completed response
- Also fixes `stream_anthropic_sse` to return an error on missing `message_stop` (same as the pending PR enowdev#26)

Frontend:
- `TokenUsage` / `ChatUsageEvent` types added to `types/index.ts`
- `useChatStore`: `sessionUsage` record, `addTokenUsage` (cumulative per-session sum), `clearSessionUsage`
- `AppShell`: listens for `chat-usage`, calls `addTokenUsage`
- `ChatHeader`: shows a total-token badge next to the session title; tooltip shows split prompt/completion counts; formats as `4.2k` for readability
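The usage extraction described above can be sketched in TypeScript for illustration (the actual implementation is Rust in `chat_service.rs`; the field names follow the OpenAI and Anthropic streaming payloads, while the function names here are hypothetical):

```typescript
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

// OpenAI-compatible: with stream_options.include_usage = true, the final
// data: chunk carries a `usage` object alongside an empty `choices` array.
function usageFromOpenAiChunk(json: any): TokenUsage | null {
  const u = json?.usage;
  if (!u) return null;
  return {
    promptTokens: u.prompt_tokens ?? 0,
    completionTokens: u.completion_tokens ?? 0,
  };
}

// Anthropic: input_tokens arrives on message_start and output_tokens on
// message_delta, so the two events are folded into one accumulator.
function foldAnthropicEvent(acc: TokenUsage, event: any): TokenUsage {
  if (event.type === "message_start") {
    return { ...acc, promptTokens: event.message?.usage?.input_tokens ?? acc.promptTokens };
  }
  if (event.type === "message_delta") {
    return { ...acc, completionTokens: event.usage?.output_tokens ?? acc.completionTokens };
  }
  return acc;
}
```

Providers that never send a usage object simply yield `null` / an all-zero accumulator, which is why the backend suppresses the event in that case.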
Author
Closing — contains duplicate changes from #26 that will cause conflict. Will rebase on top of fix/anthropic-stream-completion and reopen.
Description
Tracks prompt and completion token counts from both OpenAI-compatible and Anthropic SSE streams, accumulates them per session, and displays the total as a badge in the chat header.
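The per-session accumulation just described can be sketched as follows, assuming a plain Map-based store for simplicity (the real app keeps this in `useChatStore`; the names `addTokenUsage` and `clearSessionUsage` come from the diff summary, the rest is illustrative):

```typescript
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

const sessionUsage = new Map<string, TokenUsage>();

// Cumulative per-session sum: each completed turn adds to the running total
// rather than replacing it.
function addTokenUsage(sessionId: string, usage: TokenUsage): void {
  const prev = sessionUsage.get(sessionId) ?? { promptTokens: 0, completionTokens: 0 };
  sessionUsage.set(sessionId, {
    promptTokens: prev.promptTokens + usage.promptTokens,
    completionTokens: prev.completionTokens + usage.completionTokens,
  });
}

// Reset hook for session deletion.
function clearSessionUsage(sessionId: string): void {
  sessionUsage.delete(sessionId);
}
```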
Backend (`src-tauri/src/services/chat_service.rs`)
- `TokenUsage`, `ChatUsageEvent`, and `UsageAccumulator` structs using `#[derive(serde::Serialize)]` — no `serde_json::json!()` to stay clear of `clippy::disallowed_methods`
- `stream_openai_sse`: adds `stream_options.include_usage: true` to the request so the final SSE chunk carries usage; captured in `parse_openai_sse_line`
- `stream_anthropic_sse`: captures `input_tokens` from `message_start` and `output_tokens` from `message_delta` events
- Stream functions return `(String, Option<TokenUsage>)` instead of `String`
- `send_message_inner` emits the `chat-usage` Tauri event after each completed response
- Includes the `message_stop` guard fix (stream ends without completion signal → explicit error) from the pending PR "fix: detect incomplete Anthropic SSE stream and deduplicate system prompt" #26

Frontend
- `src/types/index.ts`: `TokenUsage` and `ChatUsageEvent` interfaces
- `src/stores/useChatStore.ts`: `sessionUsage: Record<string, TokenUsage>`, `addTokenUsage` (cumulative per-session sum), `clearSessionUsage`
- `src/components/layout/AppShell.tsx`: listens for the `chat-usage` event, calls `addTokenUsage`
- `src/components/layout/ChatHeader.tsx`: token badge next to the session title; tooltip shows split prompt/completion counts; formats as `4.2k` for large numbers

Preview
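The `4.2k` badge formatting in `ChatHeader` could look like this (a hypothetical `formatTokens` helper; the real component may round or truncate differently):

```typescript
// Compact token count for the header badge: values under 1000 pass through,
// larger values get one decimal place with a trailing ".0" trimmed.
function formatTokens(n: number): string {
  if (n < 1000) return String(n);
  const k = (n / 1000).toFixed(1).replace(/\.0$/, "");
  return `${k}k`;
}

console.log(formatTokens(950));   // "950"
console.log(formatTokens(4200));  // "4.2k"
console.log(formatTokens(12000)); // "12k"
```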
Type of Change
How Has This Been Tested?
- `bunx tsc --noEmit` passes (TypeScript) — only the pre-existing `baseUrl` deprecation warning
- `cargo clippy -- -D warnings` (Rust) — could not be run in the current environment due to missing GTK system deps on WSL

Manual verification:
- `UsageAccumulator::finish()` returns `None` when both fields are 0 (no spurious events for providers that don't send usage)
- `addTokenUsage` accumulates across multiple turns per session (not reset per message)
- `clearSessionUsage` is available for callers to reset on session delete
- `stream_options.include_usage` is a standard OpenAI API field, silently ignored by providers that don't support it
- Anthropic usage comes from `message_start.message.usage.input_tokens` and `message_delta.usage.output_tokens`

Checklist