
B/bug fixes#63

Merged
altic-dev merged 4 commits into main from
b/bug_fixes
Dec 20, 2025

Conversation

@altic-dev (Owner) commented Dec 20, 2025

Description

Brief description of what this PR does.

Type of Change

  • 🐞 Bug fix (non-breaking change which fixes an issue)
  • ✨ New feature (non-breaking change which adds functionality)
  • 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • 📝 Documentation update

Related Issues

Closes #(issue number)

Testing

  • Tested on Intel Mac
  • Tested on Apple Silicon Mac
  • Tested on macOS [version]
  • Ran linter locally: brew install swiftlint && swiftlint --strict --config .swiftlint.yml
  • Ran formatter locally: brew install swiftformat && swiftformat --config .swiftformat Sources

Screenshots / Video

Add screenshots or a video recording of the app after you have made your changes.


Summary by CodeRabbit

  • New Features

    • "Save Transcription History" and "Enable Debug Logs" toggles in Settings
    • Better audio device selection with automatic fallback and safer refresh behavior
  • Bug Fixes

    • Fixed feedback placeholder interactivity to restore editor input
    • Safer transcription shutdown to avoid crashes and data races
  • Improvements

    • Improved logging throughout the app for clearer diagnostics
    • Better handling of reasoning tokens and model-specific thinking/response parsing


coderabbitai bot commented Dec 20, 2025

Walkthrough

Adds structured logging across the app, makes ASR shutdown async with coordinated task cancellation, introduces a separate thinking parser and maxTokens for reasoning models, adds a save-transcription-history setting and settings UI changes, and updates several UI and service call sites to use the new async/logging flows.

Changes

Cohort / File(s) Summary
Logging Migration
Sources/Fluid/AppDelegate.swift, Sources/Fluid/Services/DebugLogger.swift, Sources/Fluid/Services/SimpleUpdater.swift, Sources/Fluid/Services/ModelDownloader.swift, Sources/Fluid/Services/MenuBarManager.swift, Sources/Fluid/Services/TypingService.swift, Sources/Fluid/UI/MeetingTranscriptionView.swift
Replaced direct print calls with structured DebugLogger calls (info/debug/error/warning), added source attribution, and adjusted DebugLogger internals to early-exit when logging is disabled and to defer UI log construction.
ASR Service Async Shutdown
Sources/Fluid/Services/ASRService.swift, Sources/Fluid/ContentView.swift, Sources/Fluid/Services/GlobalHotkeyManager.swift
Converted stopWithoutTranscription() to async and added stopStreamingTimerAndAwait(); added currentOperationTask and cancelAndAwaitPending() in TranscriptionExecutor; updated call sites to await/launch async stops to ensure in-flight transcription tasks are canceled and awaited.
Thinking Parser & LLM changes
Sources/Fluid/Services/ThinkingParsers.swift, Sources/Fluid/Services/LLMClient.swift, Sources/Fluid/Services/CommandModeService.swift, Sources/Fluid/Services/RewriteModeService.swift
Added SeparateFieldThinkingParser and updated factory routing for specific models; extended LLMClient to extract reasoning from multiple fields and log requests as cURL; added maxTokens parameter (set to 32000 for reasoning models, nil otherwise) to LLMClient.Config usages.
Settings & Persistence
Sources/Fluid/Persistence/SettingsStore.swift, Sources/Fluid/UI/SettingsView.swift
Added saveTranscriptionHistory: Bool persisted to UserDefaults (defaults true); enableDebugLogs change now refreshes DebugLogger; Settings UI adds toggles for Save Transcription History and Enable Debug Logs, improves audio device pickers, device selection sync, cached default device names, and wraps device-change ASR restarts in async Tasks.
Hotkey & Event Handling
Sources/Fluid/Services/GlobalHotkeyManager.swift
Replaced prints with DebugLogger calls across setup/health/event paths; updated event handlers and shutdown flows to use the async ASR stop variants.
UI tweaks
Sources/Fluid/UI/FeedbackView.swift, Sources/Fluid/ContentView.swift
Feedback placeholder uses Group with conditional placeholder and adjusted hit-testing; ContentView wraps ASR stop calls in async Tasks, conditionally persists transcription history based on settings, and replaces prints with logger calls.
Model Downloader logs
Sources/Fluid/Networking/ModelDownloader.swift
Swapped download/load progress prints for DebugLogger (info/debug) messages; no control-flow changes.
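
The Settings & Persistence cohort above describes a UserDefaults-backed boolean that defaults to true. A minimal sketch of that pattern (the property name comes from the summary; the key string and surrounding class shape are assumptions for illustration, not the actual implementation):

```swift
import Foundation

final class SettingsStore {
    static let shared = SettingsStore()
    // Key name is an assumption; the real store may use a different string.
    private let saveHistoryKey = "saveTranscriptionHistory"

    var saveTranscriptionHistory: Bool {
        get {
            // Default to true when the key has never been written,
            // preserving behavior for existing users.
            if UserDefaults.standard.object(forKey: saveHistoryKey) == nil { return true }
            return UserDefaults.standard.bool(forKey: saveHistoryKey)
        }
        set { UserDefaults.standard.set(newValue, forKey: saveHistoryKey) }
    }
}
```

Checking `object(forKey:)` for nil before calling `bool(forKey:)` matters because `bool(forKey:)` returns false for missing keys, which would silently flip the default.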

Sequence Diagram(s)

sequenceDiagram
    participant Hotkey as GlobalHotkeyManager
    participant UI as ContentView / MenuBar
    participant ASR as ASRService
    participant Exec as TranscriptionExecutor

    Hotkey->>ASR: request stopWithoutTranscription() (async)
    activate ASR
    ASR->>ASR: stopStreamingTimerAndAwait()
    ASR->>Exec: cancelAndAwaitPending()
    activate Exec
    Exec-->>ASR: pending ops cancelled & awaited
    deactivate Exec
    ASR-->>Hotkey: stop complete (returned)
    deactivate ASR

    UI->>ASR: user stops recording (async Task)
    activate ASR
    ASR->>ASR: stopStreamingTimerAndAwait()
    ASR->>Exec: cancelAndAwaitPending()
    Exec-->>ASR: done
    ASR-->>UI: stopped
    deactivate ASR
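The cancel-and-await step in the diagram can be sketched as follows (type and method names are taken from the walkthrough; the bodies are assumptions illustrating the pattern, not the actual implementation):

```swift
import Foundation

// Sketch of the coordinated-cancellation pattern described above.
// The real TranscriptionExecutor's internals differ; only the shape is shown.
actor TranscriptionExecutor {
    private var currentOperationTask: Task<Void, Error>?

    func run(_ work: @escaping @Sendable () async throws -> Void) {
        currentOperationTask = Task { try await work() }
    }

    /// Cancel any in-flight operation and wait for it to finish,
    /// so callers can safely clear shared audio buffers afterward.
    func cancelAndAwaitPending() async {
        currentOperationTask?.cancel()
        _ = try? await currentOperationTask?.value  // await, ignoring CancellationError
        currentOperationTask = nil
    }
}
```

The key property is that `cancelAndAwaitPending()` does not return until the task chain has actually finished, which is what prevents use-after-free on the buffer during shutdown.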

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50–60 minutes

  • Pay special attention to async cancellation and awaiting sequences in:
    • Sources/Fluid/Services/ASRService.swift
    • Sources/Fluid/Services/GlobalHotkeyManager.swift
    • Sources/Fluid/ContentView.swift
  • Verify parser routing and LLM reasoning field extraction in:
    • Sources/Fluid/Services/ThinkingParsers.swift
    • Sources/Fluid/Services/LLMClient.swift
  • Validate SettingsView device-sync logic and cached defaults to avoid UI race conditions.

Possibly related PRs

Poem

🐰 I hopped through logs and async streams,

Cancelled tasks and chased down dreams,
Reasoning fields now have their token,
History saved (unless you’ve spoken),
DebugLogger lights the trail—hip-hop hooray! 🥕✨

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
  • Description check — ⚠️ Warning. The PR description uses template placeholders ('Brief description of what this PR does' and 'Closes #(issue number)') without filling in actual content, making it incomplete despite having checkboxes marked. Resolution: complete the description by providing a clear summary of changes, specific issue numbers, and relevant context about the bug fixes and features being added.
  • Docstring Coverage — ⚠️ Warning. Docstring coverage is 42.50%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Title check — ❓ Inconclusive. The PR title 'B/bug fixes' is vague and generic, and does not convey meaningful information about the actual changes. Resolution: replace the title with a specific description of the main change, such as 'Replace logging with DebugLogger and add transcription history setting'.

📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8b8ff2e and 3a15c21.

📒 Files selected for processing (2)
  • Sources/Fluid/Services/CommandModeService.swift (1 hunks)
  • Sources/Fluid/Services/RewriteModeService.swift (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • Sources/Fluid/Services/CommandModeService.swift
  • Sources/Fluid/Services/RewriteModeService.swift
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build FluidVoice


@altic-dev altic-dev mentioned this pull request Dec 20, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
Sources/Fluid/Services/TypingService.swift (1)

7-14: TypingService-specific logging control is non-functional.

The isLoggingEnabled static property (lines 7-10) is never referenced in the log() method. All logs unconditionally route through DebugLogger.shared.debug() regardless of the FLUID_TYPING_LOGS environment variable or enableTypingLogs UserDefaults setting. Either restore the guard to honor the intended per-service logging toggle, or remove the unused property and consolidate under global DebugLogger control:

 private func log(_ message: @autoclosure () -> String) {
+    guard Self.isLoggingEnabled else { return }
     DebugLogger.shared.debug(message(), source: "TypingService")
 }
Sources/Fluid/Services/ASRService.swift (1)

818-823: Remove the unused stopStreamingTimer() method at line 820.

This legacy synchronous version is not called anywhere in the codebase. Both stop() and stopWithoutTranscription() use the async stopStreamingTimerAndAwait() (lines 374, 435), making this dead code that should be deleted.

🧹 Nitpick comments (3)
Sources/Fluid/Services/LLMClient.swift (1)

326-331: Consider gating verbose delta logging behind a debug flag.

Logging the full delta for every streaming chunk can be noisy in production. This is useful for debugging reasoning field issues but may generate excessive log volume.

🔎 Suggested conditional logging
-            // DEBUG LOG: Show full delta to see all fields (e.g., 'reasoning', 'thought', 'delta_reasoning', etc.)
-            if let deltaData = try? JSONSerialization.data(withJSONObject: delta, options: [.fragmentsAllowed]),
-               let deltaString = String(data: deltaData, encoding: .utf8)
-            {
-                DebugLogger.shared.debug("LLMClient: Full Delta: \(deltaString)", source: "LLMClient")
-            }
+            // DEBUG LOG: Show full delta to see all fields (e.g., 'reasoning', 'thought', 'delta_reasoning', etc.)
+            #if DEBUG
+            if let deltaData = try? JSONSerialization.data(withJSONObject: delta, options: [.fragmentsAllowed]),
+               let deltaString = String(data: deltaData, encoding: .utf8)
+            {
+                DebugLogger.shared.debug("LLMClient: Full Delta: \(deltaString)", source: "LLMClient")
+            }
+            #endif
Sources/Fluid/Services/ThinkingParsers.swift (1)

64-70: Redundant condition check.

The check on lines 65-67 is redundant with isReasoningModel(model). Looking at the relevant snippet from SettingsStore.swift:585-594, isReasoningModel already checks for gpt-5, o1, o3, and gpt-oss. The additional condition re-checks the same patterns.

Consider simplifying:

Proposed simplification
-        // OpenAI reasoning models (o1, o3, gpt-5): use SeparateFieldThinkingParser
-        if SettingsStore.shared.isReasoningModel(model) &&
-            (modelLower.contains("gpt-5") || modelLower.contains("o1") || modelLower.contains("o3") || modelLower.contains("gpt-oss"))
-        {
+        // OpenAI reasoning models (o1, o3, gpt-5): use SeparateFieldThinkingParser
+        // Note: isReasoningModel also matches deepseek+reasoner, but those are handled above
+        if SettingsStore.shared.isReasoningModel(model) {
Sources/Fluid/UI/SettingsView.swift (1)

484-502: Consider extracting device selection sync logic to reduce duplication.

The same device selection priority logic (preference → system default → first available) is duplicated four times for input/output devices in both onChange handlers and onAppear. This could be extracted to a helper function.

Proposed helper function
private func syncDeviceSelection(
    currentUID: Binding<String>,
    devices: [AudioDevice.Device],
    preferredUID: String?,
    getDefaultUID: () -> String?
) {
    guard !devices.isEmpty else { return }
    let currentValid = devices.contains { $0.uid == currentUID.wrappedValue }
    guard !currentValid || currentUID.wrappedValue.isEmpty else { return }
    
    if let prefUID = preferredUID, devices.contains(where: { $0.uid == prefUID }) {
        currentUID.wrappedValue = prefUID
    } else if let defaultUID = getDefaultUID(), devices.contains(where: { $0.uid == defaultUID }) {
        currentUID.wrappedValue = defaultUID
    } else {
        currentUID.wrappedValue = devices.first?.uid ?? ""
    }
}

Also applies to: 527-544, 675-710

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 33ca4b9 and 8b8ff2e.

📒 Files selected for processing (17)
  • Sources/Fluid/AppDelegate.swift (3 hunks)
  • Sources/Fluid/ContentView.swift (6 hunks)
  • Sources/Fluid/Networking/ModelDownloader.swift (5 hunks)
  • Sources/Fluid/Persistence/SettingsStore.swift (3 hunks)
  • Sources/Fluid/Services/ASRService.swift (4 hunks)
  • Sources/Fluid/Services/CommandModeService.swift (1 hunks)
  • Sources/Fluid/Services/DebugLogger.swift (1 hunks)
  • Sources/Fluid/Services/GlobalHotkeyManager.swift (7 hunks)
  • Sources/Fluid/Services/LLMClient.swift (5 hunks)
  • Sources/Fluid/Services/MenuBarManager.swift (2 hunks)
  • Sources/Fluid/Services/RewriteModeService.swift (1 hunks)
  • Sources/Fluid/Services/SimpleUpdater.swift (4 hunks)
  • Sources/Fluid/Services/ThinkingParsers.swift (2 hunks)
  • Sources/Fluid/Services/TypingService.swift (1 hunks)
  • Sources/Fluid/UI/FeedbackView.swift (1 hunks)
  • Sources/Fluid/UI/MeetingTranscriptionView.swift (3 hunks)
  • Sources/Fluid/UI/SettingsView.swift (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (14)
Sources/Fluid/Services/DebugLogger.swift (1)
Sources/Fluid/Services/FileLogger.swift (1)
  • append (27-45)
Sources/Fluid/Services/GlobalHotkeyManager.swift (2)
Sources/Fluid/Services/DebugLogger.swift (4)
  • debug (153-155)
  • error (149-151)
  • info (141-143)
  • warning (145-147)
Sources/Fluid/Services/ASRService.swift (1)
  • stopWithoutTranscription (423-445)
Sources/Fluid/Services/ThinkingParsers.swift (2)
Sources/Fluid/Services/DebugLogger.swift (1)
  • debug (153-155)
Sources/Fluid/Persistence/SettingsStore.swift (1)
  • isReasoningModel (586-595)
Sources/Fluid/Services/SimpleUpdater.swift (1)
Sources/Fluid/Services/DebugLogger.swift (2)
  • error (149-151)
  • info (141-143)
Sources/Fluid/Persistence/SettingsStore.swift (1)
Sources/Fluid/Services/DebugLogger.swift (1)
  • refreshLoggingEnabled (71-73)
Sources/Fluid/UI/MeetingTranscriptionView.swift (1)
Sources/Fluid/Services/DebugLogger.swift (2)
  • error (149-151)
  • info (141-143)
Sources/Fluid/Services/MenuBarManager.swift (1)
Sources/Fluid/Services/DebugLogger.swift (2)
  • error (149-151)
  • info (141-143)
Sources/Fluid/ContentView.swift (3)
Sources/Fluid/Persistence/TranscriptionHistoryStore.swift (1)
  • addEntry (95-119)
Sources/Fluid/Services/DebugLogger.swift (1)
  • debug (153-155)
Sources/Fluid/Services/GlobalHotkeyManager.swift (1)
  • validateEventTapHealth (562-570)
Sources/Fluid/Services/CommandModeService.swift (1)
Sources/Fluid/Persistence/SettingsStore.swift (1)
  • isReasoningModel (586-595)
Sources/Fluid/Networking/ModelDownloader.swift (1)
Sources/Fluid/Services/DebugLogger.swift (2)
  • info (141-143)
  • debug (153-155)
Sources/Fluid/AppDelegate.swift (1)
Sources/Fluid/Services/DebugLogger.swift (2)
  • info (141-143)
  • debug (153-155)
Sources/Fluid/Services/TypingService.swift (1)
Sources/Fluid/Services/DebugLogger.swift (1)
  • debug (153-155)
Sources/Fluid/Services/RewriteModeService.swift (1)
Sources/Fluid/Persistence/SettingsStore.swift (1)
  • isReasoningModel (586-595)
Sources/Fluid/UI/SettingsView.swift (3)
Sources/Fluid/Services/AudioDeviceService.swift (4)
  • setDefaultInputDevice (88-92)
  • getDefaultInputDevice (78-81)
  • setDefaultOutputDevice (94-98)
  • getDefaultOutputDevice (83-86)
Sources/Fluid/Services/ASRService.swift (1)
  • stopWithoutTranscription (423-445)
Sources/Fluid/ContentView.swift (1)
  • startRecording (1474-1494)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build FluidVoice
🔇 Additional comments (34)
Sources/Fluid/UI/FeedbackView.swift (2)

140-150: LGTM! Improved placeholder interactivity.

The changes correctly restore hit-testing to the underlying TextEditor by marking the overlay Group as .allowsHitTesting(false). The leading padding adjustment (4 points) helps visually align the placeholder with the editor's text input area.


277-309: LGTM! Consistent structured logging.

The migration to DebugLogger with appropriate log levels and source identifiers aligns with the PR-wide logging standardization.

Sources/Fluid/Persistence/SettingsStore.swift (2)

161-168: LGTM! Proper cache refresh.

The call to DebugLogger.shared.refreshLoggingEnabled() ensures the logger's cached setting is synchronized after the UserDefaults value is updated, preventing stale cache issues.


624-636: LGTM! Clean settings implementation.

The saveTranscriptionHistory property follows the established pattern for boolean settings, with a sensible default (true) that maintains backward compatibility.

Sources/Fluid/Services/DebugLogger.swift (2)

75-81: LGTM! Effective performance optimization.

The early exit prevents unnecessary processing when logging is disabled, while ensuring errors are always captured (lines 87-92) for debugging purposes. This is a good balance between performance and observability.


83-116: LGTM! Clean separation of file/console vs UI logging.

The refactored flow correctly:

  • Always writes to FileLogger and console (ensuring errors are captured even when UI logging is disabled)
  • Gates UI log updates behind the loggingEnabled check (preventing UI clutter)
  • Defers LogEntry construction until needed (minor performance win)
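
The early exit and deferred message construction praised above are commonly implemented with @autoclosure, so the log string is never built when logging is disabled. A sketch under that assumption (method names match the code-graph section below; the body is illustrative, not the actual implementation):

```swift
import Foundation

final class DebugLogger {
    static let shared = DebugLogger()
    // Cached flag; refreshed when the setting changes (see SettingsStore).
    private var loggingEnabled = true

    func refreshLoggingEnabled() {
        loggingEnabled = UserDefaults.standard.bool(forKey: "enableDebugLogs")
    }

    // @autoclosure defers evaluating the (possibly expensive) message
    // until after the enabled check passes.
    func debug(_ message: @autoclosure () -> String, source: String) {
        guard loggingEnabled else { return }
        print("[\(source)] \(message())")
    }
}
```

In the real logger, errors are reportedly always written to file and console even when UI logging is disabled; only the UI log path is gated.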
Sources/Fluid/Services/SimpleUpdater.swift (1)

227-371: LGTM! Clean logging migration.

All print statements have been replaced with appropriate DebugLogger calls using consistent source identifiers and proper log levels (error for failures, info for status updates).

Sources/Fluid/Services/RewriteModeService.swift (1)

209-219: LGTM! Proper token budget for reasoning models.

Setting maxTokens to 4096 for reasoning models (o1, o3, gpt-5, etc.) provides appropriate budgeting for their thinking/reasoning process, while allowing non-reasoning models to use their defaults.

Sources/Fluid/Services/MenuBarManager.swift (1)

257-257: LGTM! Consistent structured logging.

The migration to DebugLogger with appropriate log levels (error for setup failures, info for user actions) aligns with the PR-wide logging standardization.

Also applies to: 427-427

Sources/Fluid/AppDelegate.swift (1)

36-151: LGTM! Comprehensive logging migration.

All print statements have been replaced with appropriate DebugLogger calls using consistent source identifiers and proper log levels throughout the update check and alert flows.

Sources/Fluid/UI/MeetingTranscriptionView.swift (1)

188-188: LGTM! Logging migration looks good.

The replacement of print statements with DebugLogger calls is consistent with the app-wide logging standardization. Log levels are appropriate (error for failures, info for success), and the source identifier is correctly set to "MeetingTranscriptionView".

Also applies to: 286-288, 323-323

Sources/Fluid/ContentView.swift (4)

337-337: Async stop is correctly wrapped in Task.

The fire-and-forget Task { await self.asr.stopWithoutTranscription() } pattern is acceptable here since the ASR service guards against redundant stops internally, and user-initiated cancellation via Escape should not block the UI thread.


482-482: Consistent async handling for mode shortcut disabling.

Both command mode and rewrite mode shortcut disable handlers correctly wrap the async stop in a Task. This ensures proper teardown when users disable shortcuts while recording is active.

Also applies to: 498-498


533-534: Fire-and-forget on view disappear is acceptable but has a subtle timing consideration.

The onDisappear closure fires a Task that may outlive the view. This is generally fine since ASRService.stopWithoutTranscription() guards with isRunning and the service persists beyond the view's lifecycle. However, be aware that if the app terminates quickly after the view disappears, the async stop may not complete.


1386-1395: Good addition of conditional transcription history persistence.

The new SettingsStore.shared.saveTranscriptionHistory check properly gates history persistence. This respects user preferences and avoids unnecessary storage when the feature is disabled.

Sources/Fluid/Services/GlobalHotkeyManager.swift (4)

157-157: LGTM! Logging migration in setup path is well-structured.

The DebugLogger calls appropriately use:

  • debug for permission checks
  • error for tap creation failures
  • info for successful initialization

This provides good observability for troubleshooting hotkey setup issues.

Also applies to: 180-180, 186-186, 194-194, 199-199


234-239: Async stop correctly awaited in Escape key handler.

The await self.asrService.stopWithoutTranscription() call is properly executed within a Task { @MainActor in ... } block, ensuring proper async execution while not blocking the event tap callback.


535-535: Fallback stop path correctly uses async.

When no stopAndProcessCallback is set, the service now correctly awaits stopWithoutTranscription(). This aligns with the async signature change in ASRService.


573-573: Health check and lifecycle logging is appropriate.

The logging for reinitialization, health check failures/recoveries, and deinit provides good observability for debugging hotkey manager lifecycle issues in production.

Also applies to: 591-591, 595-595, 597-597, 611-611

Sources/Fluid/Services/LLMClient.swift (3)

125-125: Good addition of request logging for debugging.

The logRequest call provides valuable debugging information for API issues.


333-346: Good multi-field reasoning extraction.

The fallback chain for reasoning fields (reasoning_content, reasoning, thought, thinking) properly handles different LLM provider conventions (OpenAI, DeepSeek, etc.). This improves compatibility.
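The fallback chain described above might look like this (the field names come from the comment; the delta dictionary shape is an assumption based on typical OpenAI-compatible streaming responses):

```swift
// Extract reasoning text from whichever field the provider uses.
// Order matters: more specific provider conventions are tried first.
func extractReasoning(from delta: [String: Any]) -> String? {
    for key in ["reasoning_content", "reasoning", "thought", "thinking"] {
        if let value = delta[key] as? String, !value.isEmpty {
            return value
        }
    }
    return nil  // no reasoning field present in this chunk
}
```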


486-491: Consistent multi-field reasoning extraction for non-streaming responses.

Good consistency with the streaming path. Both paths now check the same set of reasoning field variants.

Sources/Fluid/Services/ThinkingParsers.swift (1)

295-316: LGTM!

The SeparateFieldThinkingParser implementation is clean and correctly handles models that use separate fields for reasoning content. The parser appropriately treats all received content as final content without tag parsing.
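
Based on the description above, a separate-field parser plausibly reduces to pass-through behavior (the protocol and method shapes here are assumptions; the actual ThinkingParser protocol is not shown in this review):

```swift
// Sketch only: for models that deliver reasoning via a dedicated
// response field, content chunks never contain inline <think> tags.
struct SeparateFieldThinkingParser {
    /// Every content chunk is final-answer text; no tag scanning needed.
    func parse(contentChunk: String) -> (thinking: String, content: String) {
        (thinking: "", content: contentChunk)
    }

    /// Reasoning arrives via its own delta field (e.g. "reasoning_content").
    func parse(reasoningChunk: String) -> (thinking: String, content: String) {
        (thinking: reasoningChunk, content: "")
    }
}
```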

Sources/Fluid/Networking/ModelDownloader.swift (2)

106-149: LGTM!

The logging migration is well-structured with appropriate log levels: info for significant progress milestones (files to download, overall progress) and debug for detailed per-file progress. Source labels are consistent.


292-349: LGTM!

Model loading logging is appropriately categorized - info for key loading events and debug for detailed path information. This aligns with the project-wide DebugLogger migration.

Sources/Fluid/Services/ASRService.swift (5)

17-28: Task type erasure could be avoided.

The currentOperationTask uses Task<Any, Error> to erase the generic type from Task<T, Error>. This works but loses type information. However, since it's only used for cancellation (not value retrieval), this is acceptable.


30-39: LGTM!

The cancelAndAwaitPending() implementation correctly handles the cancellation flow: cancels any in-flight operation, waits for the task chain to complete, then clears references. This prevents use-after-free issues when the buffer is cleared.


364-384: LGTM! Critical race condition fix.

The reordering is correct: setting isRunning = false first signals in-flight chunks to abort, then stopping the engine prevents new audio, and finally awaiting ensures all pending transcription tasks complete before buffer access. This prevents the EXC_BAD_ACCESS crashes.


423-445: LGTM!

The stopWithoutTranscription() method follows the same safe shutdown pattern as stop(). The async conversion is properly handled by callers (ContentView wraps calls in Task).


802-816: LGTM!

The helper correctly implements two-phase async cleanup: first awaits the streaming task completion, then awaits the transcription executor. This ensures no in-flight work remains before buffer operations.

Sources/Fluid/UI/SettingsView.swift (4)

370-380: LGTM!

The new "Save Transcription History" toggle is properly bound to SettingsStore.shared.saveTranscriptionHistory following the same pattern as other settings toggles.


634-643: LGTM!

The "Enable Debug Logs" toggle is appropriately placed under Debug Settings and follows the established binding pattern.


477-481: LGTM!

The Task wrapper correctly handles the now-async stopWithoutTranscription() API, matching the pattern shown in the relevant code snippet from ContentView.


461-468: LGTM!

The "Loading..." placeholder when device lists are empty provides good UX feedback during device enumeration. The empty UID guard prevents invalid selections.

Also applies to: 510-517

Comment on lines +1786 to +1795
DebugLogger.shared.debug("Initial hotkey manager health check: \(self.hotkeyManagerInitialized)", source: "ContentView")

// If still not initialized and accessibility is enabled, try reinitializing
if !self.hotkeyManagerInitialized && self.accessibilityEnabled {
self.hotkeyManagerInitialized = self.hotkeyManager?.validateEventTapHealth() ?? false
if UserDefaults.standard.bool(forKey: "enableDebugLogs") {
print("[ContentView] Initial hotkey manager health check: \(self.hotkeyManagerInitialized)")
}
DebugLogger.shared.debug("Initial hotkey manager health check: \(self.hotkeyManagerInitialized)", source: "ContentView")

// If still not initialized and accessibility is enabled, try reinitializing
if !self.hotkeyManagerInitialized && self.accessibilityEnabled {
if UserDefaults.standard.bool(forKey: "enableDebugLogs") {
print("[ContentView] Hotkey manager not healthy, attempting reinitalization")
}
DebugLogger.shared.debug("Hotkey manager not healthy, attempting reinitalization", source: "ContentView")

⚠️ Potential issue | 🟡 Minor

Duplicate debug log statements detected.

Lines 1786 and 1791 emit identical log messages ("Initial hotkey manager health check: ..."). This appears to be unintentional duplication—the same check and log occur twice in quick succession.

🔎 Suggested fix to remove duplicate logging
 await MainActor.run {
     self.hotkeyManagerInitialized = self.hotkeyManager?.validateEventTapHealth() ?? false
     DebugLogger.shared.debug("Initial hotkey manager health check: \(self.hotkeyManagerInitialized)", source: "ContentView")

     // If still not initialized and accessibility is enabled, try reinitializing
     if !self.hotkeyManagerInitialized && self.accessibilityEnabled {
-        self.hotkeyManagerInitialized = self.hotkeyManager?.validateEventTapHealth() ?? false
-        DebugLogger.shared.debug("Initial hotkey manager health check: \(self.hotkeyManagerInitialized)", source: "ContentView")
-
-        // If still not initialized and accessibility is enabled, try reinitializing
-        if !self.hotkeyManagerInitialized && self.accessibilityEnabled {
-            DebugLogger.shared.debug("Hotkey manager not healthy, attempting reinitalization", source: "ContentView")
-            self.hotkeyManager?.reinitialize()
-        }
+        DebugLogger.shared.debug("Hotkey manager not healthy, attempting reinitialization", source: "ContentView")
+        self.hotkeyManager?.reinitialize()
     }
 }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In Sources/Fluid/ContentView.swift around lines 1786 to 1795 there are duplicate
debug logs that both print "Initial hotkey manager health check: ..."
back-to-back; remove the redundant log statement so the health check is only
logged once (preferably keep the log after you assign
self.hotkeyManagerInitialized from validateEventTapHealth() so you log the final
state), ensuring no other logic is changed.

streaming: enableStreaming,
tools: [TerminalService.toolDefinition],
temperature: isReasoningModel ? nil : 0.1,
maxTokens: isReasoningModel ? 4096 : nil, // Reasoning models like o1 need a budget for thoughts

⚠️ Potential issue | 🔴 Critical

Increase maxTokens budget for reasoning models—current 4096 is significantly insufficient.

OpenAI recommends reserving at least 25,000 tokens for reasoning and outputs when experimenting with reasoning models, and you can adjust this buffer as you become familiar with token requirements. The current setting of 4096 is approximately 6x lower than this minimum recommendation. Depending on problem complexity, reasoning models may generate anywhere from a few hundred to tens of thousands of reasoning tokens, and reasoning tokens occupy space in the model's context window and are billed as output tokens. This budget leaves insufficient room for reasoning processes on moderately complex tasks and may result in incomplete responses with finish_reason='length'.

🤖 Prompt for AI Agents
In Sources/Fluid/Services/CommandModeService.swift around line 834, the
maxTokens for reasoning models is set to 4096 which is too low; update the code
to use a much larger token budget (e.g. at least 25000) for reasoning models
instead of 4096, and make this value configurable via a named constant or
configuration so it can be tuned per model; keep non-reasoning models using nil
(or a smaller default), and ensure any new constant respects model
context-window limits or falls back gracefully if the model cannot support the
requested maxTokens.

Comment on lines +350 to +357
// If we were in thinking mode via a separate field (not tag-based),
// receiving "content" usually means the thinking phase is over.
if state == .inThinking && reasoningField == nil && tagDetectionBuffer.isEmpty {
// This is a subtle heuristic: if we were thinking, didn't just get a reasoning field chunk,
// and have no partial tags buffered, we should check if this content chunk
// is the start of the final answer.
// For safety with tag-based parsers, we let the parser decide unless it's a known separate-field model.
}

⚠️ Potential issue | 🟡 Minor

Empty if block with dead code comments.

The if block at lines 352-357 contains only comments and no executable code. This appears to be leftover from development. Either implement the heuristic described in the comments or remove the empty block.

🔎 Suggested cleanup
-                // If we were in thinking mode via a separate field (not tag-based),
-                // receiving "content" usually means the thinking phase is over.
-                if state == .inThinking && reasoningField == nil && tagDetectionBuffer.isEmpty {
-                    // This is a subtle heuristic: if we were thinking, didn't just get a reasoning field chunk,
-                    // and have no partial tags buffered, we should check if this content chunk
-                    // is the start of the final answer.
-                    // For safety with tag-based parsers, we let the parser decide unless it's a known separate-field model.
-                }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
                // If we were in thinking mode via a separate field (not tag-based),
                // receiving "content" usually means the thinking phase is over.
                if state == .inThinking && reasoningField == nil && tagDetectionBuffer.isEmpty {
                    // This is a subtle heuristic: if we were thinking, didn't just get a reasoning field chunk,
                    // and have no partial tags buffered, we should check if this content chunk
                    // is the start of the final answer.
                    // For safety with tag-based parsers, we let the parser decide unless it's a known separate-field model.
                }
🤖 Prompt for AI Agents
In Sources/Fluid/Services/LLMClient.swift around lines 350 to 357, there is an
empty if block (state == .inThinking && reasoningField == nil &&
tagDetectionBuffer.isEmpty) that only contains comments; remove the dead code or
implement the described heuristic. Either delete the entire conditional and its
comments if no action is needed, or add the intended logic to detect that a
content chunk likely starts the final answer (e.g., transition state from
.inThinking to .inAnswer or invoke the parser/handler for final-answer start)
and ensure any side effects (state change, buffer flush, parser call) are
covered and tested.
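If implementing the heuristic is preferred over deletion, one minimal sketch of the transition — `flushThinkingBuffer()`, `emitAnswer(_:)`, and `contentChunk` are illustrative names, not the file's actual API:

```swift
// Hypothetical sketch of the heuristic described in the comments.
if state == .inThinking && reasoningField == nil && tagDetectionBuffer.isEmpty {
    // A plain content chunk while thinking, with no partial tag buffered,
    // signals the start of the final answer for separate-field models.
    state = .inAnswer
    flushThinkingBuffer()    // close out the accumulated reasoning text
    emitAnswer(contentChunk) // forward this chunk as answer content
}
```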

Comment on lines +614 to +630
private func logRequest(_ request: URLRequest) {
guard let url = request.url, let method = request.httpMethod else { return }

var bodyString = ""
if let body = request.httpBody {
bodyString = String(data: body, encoding: .utf8) ?? ""
}

var curl = "curl -X \(method) \"\(url.absoluteString)\" \\\n"
for (key, value) in request.allHTTPHeaderFields ?? [:] {
let maskedValue = key.lowercased().contains("auth") ? "Bearer [REDACTED]" : value
curl += " -H \"\(key): \(maskedValue)\" \\\n"
}
curl += " -d '\(bodyString)'"

DebugLogger.shared.info("LLMClient: Full Request as cURL:\n\(curl)", source: "LLMClient")
}

⚠️ Potential issue | 🟡 Minor

Request logging masks auth tokens appropriately, but the body may contain sensitive data.

The logRequest helper correctly redacts Authorization headers. However, the request body is logged in full (line 627), which could include user prompts or conversation history containing sensitive information. Consider truncating or omitting the body in production builds.

🔎 Suggested improvement for body logging
 private func logRequest(_ request: URLRequest) {
     guard let url = request.url, let method = request.httpMethod else { return }

     var bodyString = ""
     if let body = request.httpBody {
-        bodyString = String(data: body, encoding: .utf8) ?? ""
+        let fullBody = String(data: body, encoding: .utf8) ?? ""
+        // Truncate body to avoid logging sensitive conversation content
+        bodyString = fullBody.count > 200 ? String(fullBody.prefix(200)) + "...[truncated]" : fullBody
     }

     var curl = "curl -X \(method) \"\(url.absoluteString)\" \\\n"
     for (key, value) in request.allHTTPHeaderFields ?? [:] {
         let maskedValue = key.lowercased().contains("auth") ? "Bearer [REDACTED]" : value
         curl += "  -H \"\(key): \(maskedValue)\" \\\n"
     }
     curl += "  -d '\(bodyString)'"

     DebugLogger.shared.info("LLMClient: Full Request as cURL:\n\(curl)", source: "LLMClient")
 }
🤖 Prompt for AI Agents
In Sources/Fluid/Services/LLMClient.swift around lines 614 to 630, the
logRequest helper currently prints the full request body which can leak
sensitive user prompts or conversation history; update it to avoid logging full
bodies in non-debug builds by either omitting the body or truncating it to a
safe length and/or replacing it with a placeholder. Concretely: detect if the
build is non-DEBUG (or use a production flag) and in that case replace
bodyString with "[REDACTED]" (or for DEBUG keep only the first N characters and
append "…"), and ensure any JSON body keys likely to contain PII are masked if
you decide to log structured snippets; then log the resulting safe bodyString
instead of the raw body.
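A sketch of the build-gated variant the prompt describes — the helper name and the 200-character truncation length are assumptions, not existing code:

```swift
// Hypothetical helper: redact the body in release builds,
// truncate it in debug builds.
private func safeBodyString(_ body: Data?) -> String {
    guard let body, let text = String(data: body, encoding: .utf8) else { return "" }
    #if DEBUG
    // Keep a short prefix for local troubleshooting.
    return text.count > 200 ? String(text.prefix(200)) + "…[truncated]" : text
    #else
    // Never log prompt/conversation content in production.
    return "[REDACTED]"
    #endif
}
```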
