fix: detect incomplete Anthropic SSE stream and deduplicate system prompt #26

Open

aguung wants to merge 1 commit into enowdev:main from aguung:fix/anthropic-stream-completion

Conversation


@aguung aguung commented May 14, 2026

Description

stream_anthropic_sse previously returned partial output silently when the network connection dropped before the message_stop event arrived. The user would see a truncated response with no indication that something went wrong.

This PR fixes that, and also cleans up two related issues found in the same file.

Changes:

  1. Track message_stop in Anthropic SSE stream — adds message_stop_received flag; if the stream ends without it, returns an explicit error so the user sees a clear message to retry instead of silently getting a truncated response.

  2. Propagate JSON parse errors in parse_anthropic_sse_line — was silently returning Ok(false) on malformed SSE payloads; now uses ? to propagate the error, consistent with parse_openai_sse_line.

  3. Deduplicate CHAT_SYSTEM_INSTRUCTIONS — the system instructions string was copy-pasted as two identical inline concat! blocks in send_openai_compatible and send_anthropic; extracted into a single CHAT_SYSTEM_INSTRUCTIONS module-level constant.
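Change 1 can be sketched roughly as follows. The flag name and the message_stop event come from the PR description; the loop shape, the `is_message_stop` helper, and the error text are illustrative stand-ins, not the actual implementation (the real code does full JSON parsing in parse_anthropic_sse_line inside a tokio::select! loop).

```rust
// Illustrative stand-in for parse_anthropic_sse_line's message_stop check;
// the real function parses the SSE payload as JSON.
fn is_message_stop(line: &str) -> bool {
    line.contains("\"type\":\"message_stop\"")
}

// Sketch of the completion check: if the stream ends without message_stop,
// return an explicit error instead of silently handing back truncated text.
fn finish_anthropic_stream(lines: &[&str], accumulated: String) -> Result<String, String> {
    let mut message_stop_received = false;
    for line in lines {
        if is_message_stop(line) {
            message_stop_received = true;
        }
    }
    if message_stop_received {
        Ok(accumulated)
    } else {
        // Hypothetical error text — the wording in the actual PR may differ.
        Err("stream ended before message_stop; response may be truncated, please retry".into())
    }
}
```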

Type of Change

  • Bug fix (non-breaking change that fixes an issue)
  • Refactor (no behavior change)

How Has This Been Tested?

  • bunx tsc --noEmit passes (TypeScript) — no TypeScript changes in this PR
  • cargo clippy -- -D warnings passes (Rust) — could not run in current environment due to missing GTK system deps on WSL
  • Manual testing steps below

Manual verification:

  • Confirmed the message_stop_received flag is set only inside the parse_anthropic_sse_line return path for the message_stop event
  • Confirmed the 'outer label correctly breaks out of the tokio::select! loop so the completion check runs
  • Confirmed CHAT_SYSTEM_INSTRUCTIONS constant replaces both occurrences with identical content — no behavioral change

Checklist

  • My code follows the project's style guidelines
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works

fix: detect incomplete Anthropic SSE stream and deduplicate system prompt

stream_anthropic_sse previously returned partial output silently when
the connection dropped before message_stop arrived. Now tracks
message_stop_received and returns an explicit error so the user knows
to retry rather than seeing a silently truncated response.

Also propagates JSON parse errors in parse_anthropic_sse_line via ?
instead of silently returning Ok(false), consistent with the OpenAI
SSE parser.

Extracts the duplicated CHAT_SYSTEM_INSTRUCTIONS into a single module-
level constant — previously maintained as two identical inline concat!
blocks in send_openai_compatible and send_anthropic.
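The extraction described above has roughly this shape; the string content below is a placeholder, since the project's actual system instructions are not reproduced here.

```rust
// Module-level constant replacing two identical inline concat! blocks
// in send_openai_compatible and send_anthropic. The text here is a
// placeholder, not the project's real system prompt.
const CHAT_SYSTEM_INSTRUCTIONS: &str = concat!(
    "You are a coding assistant embedded in an editor. ",
    "Keep answers concise and include code where helpful."
);
```

Extracting the constant means any future edit to the instructions happens in one place, so the two providers cannot drift apart.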
aguung pushed a commit to aguung/enowX-Coder that referenced this pull request May 14, 2026
Tracks prompt and completion token counts from both OpenAI-compatible
and Anthropic SSE streams and accumulates them per session in the
frontend store.

Backend:
- TokenUsage / ChatUsageEvent / UsageAccumulator structs in
  chat_service.rs using serde::Serialize (no serde_json::json! to
  avoid clippy::disallowed_methods)
- stream_openai_sse: requests stream_options.include_usage=true so
  the final SSE chunk carries usage; parsed in parse_openai_sse_line
- stream_anthropic_sse: captures input_tokens from message_start and
  output_tokens from message_delta events
- Emits chat-usage Tauri event after each completed completion
- Also fixes stream_anthropic_sse to return error on missing
  message_stop (same as the pending PR enowdev#26)

Frontend:
- TokenUsage / ChatUsageEvent types added to types/index.ts
- useChatStore: sessionUsage record, addTokenUsage (cumulative
  per-session sum), clearSessionUsage
- AppShell: listens for chat-usage, calls addTokenUsage
- ChatHeader: shows total token badge next to session title;
  tooltip shows split prompt / completion counts; formats as
  4.2k for readability
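The per-session accumulation and the 4.2k formatting described in the commit message might look roughly like this. Field and function names follow the commit text, but the code is a sketch: the real backend struct derives serde::Serialize, and addTokenUsage lives in the TypeScript useChatStore rather than in Rust.

```rust
use std::collections::HashMap;

// Sketch of TokenUsage; the real struct derives serde::Serialize so it can
// be emitted as a chat-usage Tauri event.
#[derive(Default, Clone, Copy)]
struct TokenUsage {
    prompt_tokens: u64,
    completion_tokens: u64,
}

// Cumulative per-session sum, mirroring the store's addTokenUsage; the
// HashMap key stands in for the session id.
fn add_token_usage(store: &mut HashMap<String, TokenUsage>, session: &str, usage: TokenUsage) {
    let entry = store.entry(session.to_string()).or_default();
    entry.prompt_tokens += usage.prompt_tokens;
    entry.completion_tokens += usage.completion_tokens;
}

// Compact badge formatting, e.g. 4200 -> "4.2k"; the real helper is in the
// ChatHeader component and may round differently.
fn format_tokens(n: u64) -> String {
    if n >= 1000 {
        format!("{:.1}k", n as f64 / 1000.0)
    } else {
        n.to_string()
    }
}
```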
Owner

@enowdev enowdev left a comment

The change itself looks reasonable, but this PR now conflicts with the current main branch in src-tauri/src/services/chat_service.rs. Please rebase onto the latest main and keep the newer provider-routing/stream handling changes while re-applying the incomplete-Anthropic-stream detection.

