Walkthrough

Adds structured logging across the app, makes ASR shutdown async with coordinated task cancellation, introduces a separate thinking parser and a maxTokens budget for reasoning models, adds a save-transcription-history setting with settings UI changes, and updates several UI and service call sites to use the new async/logging flows.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Hotkey as GlobalHotkeyManager
    participant UI as ContentView / MenuBar
    participant ASR as ASRService
    participant Exec as TranscriptionExecutor
    Hotkey->>ASR: request stopWithoutTranscription() (async)
    activate ASR
    ASR->>ASR: stopStreamingTimerAndAwait()
    ASR->>Exec: cancelAndAwaitPending()
    activate Exec
    Exec-->>ASR: pending ops cancelled & awaited
    deactivate Exec
    ASR-->>Hotkey: stop complete (returned)
    deactivate ASR
    UI->>ASR: user stops recording (async Task)
    activate ASR
    ASR->>ASR: stopStreamingTimerAndAwait()
    ASR->>Exec: cancelAndAwaitPending()
    Exec-->>ASR: done
    ASR-->>UI: stopped
    deactivate ASR
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50–60 minutes
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
📜 Recent review details

Configuration used: defaults
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (2)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
Sources/Fluid/Services/TypingService.swift (1)

7-14: TypingService-specific logging control is non-functional.

The `isLoggingEnabled` static property (lines 7-10) is never referenced in the `log()` method. All logs unconditionally route through `DebugLogger.shared.debug()` regardless of the `FLUID_TYPING_LOGS` environment variable or the `enableTypingLogs` UserDefaults setting. Either restore the guard to honor the intended per-service logging toggle, or remove the unused property and consolidate under global DebugLogger control:

```diff
 private func log(_ message: @autoclosure () -> String) {
+    guard Self.isLoggingEnabled else { return }
     DebugLogger.shared.debug(message(), source: "TypingService")
 }
```

Sources/Fluid/Services/ASRService.swift (1)
818-823: Remove the unused `stopStreamingTimer()` method at line 820.

This legacy synchronous version is not called anywhere in the codebase. Both `stop()` and `stopWithoutTranscription()` use the async `stopStreamingTimerAndAwait()` (lines 374, 435), making this dead code that should be deleted.
🧹 Nitpick comments (3)
Sources/Fluid/Services/LLMClient.swift (1)
326-331: Consider gating verbose delta logging behind a debug flag.

Logging the full delta for every streaming chunk can be noisy in production. This is useful for debugging reasoning field issues but may generate excessive log volume.

🔎 Suggested conditional logging

```diff
-// DEBUG LOG: Show full delta to see all fields (e.g., 'reasoning', 'thought', 'delta_reasoning', etc.)
-if let deltaData = try? JSONSerialization.data(withJSONObject: delta, options: [.fragmentsAllowed]),
-   let deltaString = String(data: deltaData, encoding: .utf8)
-{
-    DebugLogger.shared.debug("LLMClient: Full Delta: \(deltaString)", source: "LLMClient")
-}
+// DEBUG LOG: Show full delta to see all fields (e.g., 'reasoning', 'thought', 'delta_reasoning', etc.)
+#if DEBUG
+    if let deltaData = try? JSONSerialization.data(withJSONObject: delta, options: [.fragmentsAllowed]),
+       let deltaString = String(data: deltaData, encoding: .utf8)
+    {
+        DebugLogger.shared.debug("LLMClient: Full Delta: \(deltaString)", source: "LLMClient")
+    }
+#endif
```

Sources/Fluid/Services/ThinkingParsers.swift (1)
64-70: Redundant condition check.

The check on lines 65-67 is redundant with `isReasoningModel(model)`. Looking at the relevant snippet from `SettingsStore.swift:585-594`, `isReasoningModel` already checks for gpt-5, o1, o3, and gpt-oss. The additional condition re-checks the same patterns. Consider simplifying:

🔎 Proposed simplification

```diff
-// OpenAI reasoning models (o1, o3, gpt-5): use SeparateFieldThinkingParser
-if SettingsStore.shared.isReasoningModel(model) &&
-    (modelLower.contains("gpt-5") || modelLower.contains("o1") || modelLower.contains("o3") || modelLower.contains("gpt-oss"))
-{
+// OpenAI reasoning models (o1, o3, gpt-5): use SeparateFieldThinkingParser
+// Note: isReasoningModel also matches deepseek+reasoner, but those are handled above
+if SettingsStore.shared.isReasoningModel(model) {
```

Sources/Fluid/UI/SettingsView.swift (1)
484-502: Consider extracting device selection sync logic to reduce duplication.

The same device selection priority logic (preference → system default → first available) is duplicated four times for input/output devices in both `onChange` handlers and `onAppear`. This could be extracted to a helper function.

🔎 Proposed helper function

```swift
private func syncDeviceSelection(
    currentUID: Binding<String>,
    devices: [AudioDevice.Device],
    preferredUID: String?,
    getDefaultUID: () -> String?
) {
    guard !devices.isEmpty else { return }
    let currentValid = devices.contains { $0.uid == currentUID.wrappedValue }
    guard !currentValid || currentUID.wrappedValue.isEmpty else { return }
    if let prefUID = preferredUID, devices.contains(where: { $0.uid == prefUID }) {
        currentUID.wrappedValue = prefUID
    } else if let defaultUID = getDefaultUID(), devices.contains(where: { $0.uid == defaultUID }) {
        currentUID.wrappedValue = defaultUID
    } else {
        currentUID.wrappedValue = devices.first?.uid ?? ""
    }
}
```

Also applies to: 527-544, 675-710
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- Sources/Fluid/AppDelegate.swift (3 hunks)
- Sources/Fluid/ContentView.swift (6 hunks)
- Sources/Fluid/Networking/ModelDownloader.swift (5 hunks)
- Sources/Fluid/Persistence/SettingsStore.swift (3 hunks)
- Sources/Fluid/Services/ASRService.swift (4 hunks)
- Sources/Fluid/Services/CommandModeService.swift (1 hunks)
- Sources/Fluid/Services/DebugLogger.swift (1 hunks)
- Sources/Fluid/Services/GlobalHotkeyManager.swift (7 hunks)
- Sources/Fluid/Services/LLMClient.swift (5 hunks)
- Sources/Fluid/Services/MenuBarManager.swift (2 hunks)
- Sources/Fluid/Services/RewriteModeService.swift (1 hunks)
- Sources/Fluid/Services/SimpleUpdater.swift (4 hunks)
- Sources/Fluid/Services/ThinkingParsers.swift (2 hunks)
- Sources/Fluid/Services/TypingService.swift (1 hunks)
- Sources/Fluid/UI/FeedbackView.swift (1 hunks)
- Sources/Fluid/UI/MeetingTranscriptionView.swift (3 hunks)
- Sources/Fluid/UI/SettingsView.swift (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (14)

Sources/Fluid/Services/DebugLogger.swift (1)
- Sources/Fluid/Services/FileLogger.swift (1): append (27-45)

Sources/Fluid/Services/GlobalHotkeyManager.swift (2)
- Sources/Fluid/Services/DebugLogger.swift (4): debug (153-155), error (149-151), info (141-143), warning (145-147)
- Sources/Fluid/Services/ASRService.swift (1): stopWithoutTranscription (423-445)

Sources/Fluid/Services/ThinkingParsers.swift (2)
- Sources/Fluid/Services/DebugLogger.swift (1): debug (153-155)
- Sources/Fluid/Persistence/SettingsStore.swift (1): isReasoningModel (586-595)

Sources/Fluid/Services/SimpleUpdater.swift (1)
- Sources/Fluid/Services/DebugLogger.swift (2): error (149-151), info (141-143)

Sources/Fluid/Persistence/SettingsStore.swift (1)
- Sources/Fluid/Services/DebugLogger.swift (1): refreshLoggingEnabled (71-73)

Sources/Fluid/UI/MeetingTranscriptionView.swift (1)
- Sources/Fluid/Services/DebugLogger.swift (2): error (149-151), info (141-143)

Sources/Fluid/Services/MenuBarManager.swift (1)
- Sources/Fluid/Services/DebugLogger.swift (2): error (149-151), info (141-143)

Sources/Fluid/ContentView.swift (3)
- Sources/Fluid/Persistence/TranscriptionHistoryStore.swift (1): addEntry (95-119)
- Sources/Fluid/Services/DebugLogger.swift (1): debug (153-155)
- Sources/Fluid/Services/GlobalHotkeyManager.swift (1): validateEventTapHealth (562-570)

Sources/Fluid/Services/CommandModeService.swift (1)
- Sources/Fluid/Persistence/SettingsStore.swift (1): isReasoningModel (586-595)

Sources/Fluid/Networking/ModelDownloader.swift (1)
- Sources/Fluid/Services/DebugLogger.swift (2): info (141-143), debug (153-155)

Sources/Fluid/AppDelegate.swift (1)
- Sources/Fluid/Services/DebugLogger.swift (2): info (141-143), debug (153-155)

Sources/Fluid/Services/TypingService.swift (1)
- Sources/Fluid/Services/DebugLogger.swift (1): debug (153-155)

Sources/Fluid/Services/RewriteModeService.swift (1)
- Sources/Fluid/Persistence/SettingsStore.swift (1): isReasoningModel (586-595)

Sources/Fluid/UI/SettingsView.swift (3)
- Sources/Fluid/Services/AudioDeviceService.swift (4): setDefaultInputDevice (88-92), getDefaultInputDevice (78-81), setDefaultOutputDevice (94-98), getDefaultOutputDevice (83-86)
- Sources/Fluid/Services/ASRService.swift (1): stopWithoutTranscription (423-445)
- Sources/Fluid/ContentView.swift (1): startRecording (1474-1494)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build FluidVoice
🔇 Additional comments (34)
Sources/Fluid/UI/FeedbackView.swift (2)
140-150: LGTM! Improved placeholder interactivity.

The changes correctly restore hit-testing to the underlying TextEditor by marking the overlay Group as `.allowsHitTesting(false)`. The leading padding adjustment (4 points) helps visually align the placeholder with the editor's text input area.
277-309: LGTM! Consistent structured logging.

The migration to `DebugLogger` with appropriate log levels and source identifiers aligns with the PR-wide logging standardization.

Sources/Fluid/Persistence/SettingsStore.swift (2)
161-168: LGTM! Proper cache refresh.

The call to `DebugLogger.shared.refreshLoggingEnabled()` ensures the logger's cached setting is synchronized after the UserDefaults value is updated, preventing stale cache issues.
624-636: LGTM! Clean settings implementation.

The `saveTranscriptionHistory` property follows the established pattern for boolean settings, with a sensible default (true) that maintains backward compatibility.

Sources/Fluid/Services/DebugLogger.swift (2)
75-81: LGTM! Effective performance optimization.

The early exit prevents unnecessary processing when logging is disabled, while ensuring errors are always captured (lines 87-92) for debugging purposes. This is a good balance between performance and observability.
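As an aside, the early-exit-but-always-capture-errors shape can be sketched in isolation. This is a minimal illustrative sketch with made-up names (`MiniLogger`, `captured`), not the actual DebugLogger API:

```swift
import Foundation

// Illustrative sketch: errors are always recorded, other levels honor the
// enable flag, and the @autoclosure message is only evaluated when needed.
enum LogLevel { case debug, info, warning, error }

final class MiniLogger {
    var loggingEnabled = false
    private(set) var captured: [String] = []

    func log(_ level: LogLevel, _ message: @autoclosure () -> String) {
        // Early exit: skip all work when logging is disabled, except for errors.
        guard loggingEnabled || level == .error else { return }
        captured.append(message())
    }
}
```

The `@autoclosure` parameter gives the deferred-construction win for free: disabled log calls never build their message strings.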
83-116: LGTM! Clean separation of file/console vs UI logging.

The refactored flow correctly:
- Always writes to FileLogger and console (ensuring errors are captured even when UI logging is disabled)
- Gates UI log updates behind the `loggingEnabled` check (preventing UI clutter)
- Defers `LogEntry` construction until needed (minor performance win)

Sources/Fluid/Services/SimpleUpdater.swift (1)
227-371: LGTM! Clean logging migration.

All print statements have been replaced with appropriate `DebugLogger` calls using consistent source identifiers and proper log levels (error for failures, info for status updates).

Sources/Fluid/Services/RewriteModeService.swift (1)
209-219: LGTM! Proper token budget for reasoning models.

Setting `maxTokens` to 4096 for reasoning models (o1, o3, gpt-5, etc.) provides an explicit budget for their thinking/reasoning process, while allowing non-reasoning models to use their defaults.

Sources/Fluid/Services/MenuBarManager.swift (1)
257-257: LGTM! Consistent structured logging.

The migration to `DebugLogger` with appropriate log levels (error for setup failures, info for user actions) aligns with the PR-wide logging standardization.

Also applies to: 427-427
Sources/Fluid/AppDelegate.swift (1)
36-151: LGTM! Comprehensive logging migration.

All print statements have been replaced with appropriate `DebugLogger` calls using consistent source identifiers and proper log levels throughout the update check and alert flows.

Sources/Fluid/UI/MeetingTranscriptionView.swift (3)
188-188: LGTM! Logging migration looks good.

The replacement with `DebugLogger` calls is consistent with the app-wide logging standardization. Log levels are appropriate (error for failures, info for success), and the source identifier is correctly set to "MeetingTranscriptionView".

Also applies to: 286-288, 323-323
Sources/Fluid/ContentView.swift (4)
337-337: Async stop is correctly wrapped in Task.

The fire-and-forget `Task { await self.asr.stopWithoutTranscription() }` pattern is acceptable here since the ASR service guards against redundant stops internally, and user-initiated cancellation via Escape should not block the UI thread.
482-482: Consistent async handling for mode shortcut disabling.

Both command mode and rewrite mode shortcut disable handlers correctly wrap the async stop in a Task. This ensures proper teardown when users disable shortcuts while recording is active.
Also applies to: 498-498
533-534: Fire-and-forget on view disappear is acceptable but has a subtle timing consideration.

The `onDisappear` closure fires a Task that may outlive the view. This is generally fine since `ASRService.stopWithoutTranscription()` guards with `isRunning` and the service persists beyond the view's lifecycle. However, be aware that if the app terminates quickly after the view disappears, the async stop may not complete.
1386-1395: Good addition of conditional transcription history persistence.

The new `SettingsStore.shared.saveTranscriptionHistory` check properly gates history persistence. This respects user preferences and avoids unnecessary storage when the feature is disabled.

Sources/Fluid/Services/GlobalHotkeyManager.swift (4)
157-157: LGTM! Logging migration in setup path is well-structured.

The DebugLogger calls appropriately use:
- `debug` for permission checks
- `error` for tap creation failures
- `info` for successful initialization

This provides good observability for troubleshooting hotkey setup issues.
Also applies to: 180-180, 186-186, 194-194, 199-199
234-239: Async stop correctly awaited in Escape key handler.

The `await self.asrService.stopWithoutTranscription()` call is properly executed within a `Task { @MainActor in ... }` block, ensuring proper async execution while not blocking the event tap callback.
535-535: Fallback stop path correctly uses async.

When no `stopAndProcessCallback` is set, the service now correctly awaits `stopWithoutTranscription()`. This aligns with the async signature change in ASRService.
573-573: Health check and lifecycle logging is appropriate.

The logging for reinitialization, health check failures/recoveries, and deinit provides good observability for debugging hotkey manager lifecycle issues in production.
Also applies to: 591-591, 595-595, 597-597, 611-611
Sources/Fluid/Services/LLMClient.swift (3)
125-125: Good addition of request logging for debugging.

The `logRequest` call provides valuable debugging information for API issues.
333-346: Good multi-field reasoning extraction.

The fallback chain for reasoning fields (`reasoning_content`, `reasoning`, `thought`, `thinking`) properly handles different LLM provider conventions (OpenAI, DeepSeek, etc.). This improves compatibility.
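A rough sketch of such a fallback chain (the key list mirrors the variants named above; the `[String: Any]` delta shape and the function name are assumptions for illustration, not the LLMClient implementation):

```swift
import Foundation

// Probe provider-specific reasoning keys in a fixed priority order.
func extractReasoning(from delta: [String: Any]) -> String? {
    for key in ["reasoning_content", "reasoning", "thought", "thinking"] {
        if let value = delta[key] as? String, !value.isEmpty {
            return value  // first non-empty variant wins
        }
    }
    return nil
}
```

Keeping the key list in one ordered array makes it trivial to share the exact same precedence between the streaming and non-streaming paths.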
486-491: Consistent multi-field reasoning extraction for non-streaming responses.

Good consistency with the streaming path. Both paths now check the same set of reasoning field variants.

Sources/Fluid/Services/ThinkingParsers.swift (1)
Sources/Fluid/Services/ThinkingParsers.swift (1)
295-316: LGTM!The
SeparateFieldThinkingParserimplementation is clean and correctly handles models that use separate fields for reasoning content. The parser appropriately treats all received content as final content without tag parsing.Sources/Fluid/Networking/ModelDownloader.swift (2)
106-149: LGTM!The logging migration is well-structured with appropriate log levels:
infofor significant progress milestones (files to download, overall progress) anddebugfor detailed per-file progress. Source labels are consistent.
292-349: LGTM!Model loading logging is appropriately categorized -
infofor key loading events anddebugfor detailed path information. This aligns with the project-wide DebugLogger migration.Sources/Fluid/Services/ASRService.swift (5)
17-28: Task type erasure could be avoided.The
currentOperationTaskusesTask<Any, Error>to erase the generic type fromTask<T, Error>. This works but loses type information. However, since it's only used for cancellation (not value retrieval), this is acceptable.
30-39: LGTM!The
cancelAndAwaitPending()implementation correctly handles the cancellation flow: cancels any in-flight operation, waits for the task chain to complete, then clears references. This prevents use-after-free issues when the buffer is cleared.
364-384: LGTM! Critical race condition fix.The reordering is correct: setting
isRunning = falsefirst signals in-flight chunks to abort, then stopping the engine prevents new audio, and finally awaiting ensures all pending transcription tasks complete before buffer access. This prevents the EXC_BAD_ACCESS crashes.
423-445: LGTM!The
stopWithoutTranscription()method follows the same safe shutdown pattern asstop(). The async conversion is properly handled by callers (ContentView wraps calls inTask).
802-816: LGTM!The helper correctly implements two-phase async cleanup: first awaits the streaming task completion, then awaits the transcription executor. This ensures no in-flight work remains before buffer operations.
Sources/Fluid/UI/SettingsView.swift (4)
370-380: LGTM!

The new "Save Transcription History" toggle is properly bound to `SettingsStore.shared.saveTranscriptionHistory`, following the same pattern as other settings toggles.
634-643: LGTM!

The "Enable Debug Logs" toggle is appropriately placed under Debug Settings and follows the established binding pattern.
477-481: LGTM!

The `Task` wrapper correctly handles the now-async `stopWithoutTranscription()` API, matching the pattern shown in the relevant code snippet from ContentView.
461-468: LGTM!

The "Loading..." placeholder when device lists are empty provides good UX feedback during device enumeration. The empty UID guard prevents invalid selections.
Also applies to: 510-517
```swift
DebugLogger.shared.debug("Initial hotkey manager health check: \(self.hotkeyManagerInitialized)", source: "ContentView")

// If still not initialized and accessibility is enabled, try reinitializing
if !self.hotkeyManagerInitialized && self.accessibilityEnabled {
    self.hotkeyManagerInitialized = self.hotkeyManager?.validateEventTapHealth() ?? false
    if UserDefaults.standard.bool(forKey: "enableDebugLogs") {
        print("[ContentView] Initial hotkey manager health check: \(self.hotkeyManagerInitialized)")
    }
    DebugLogger.shared.debug("Initial hotkey manager health check: \(self.hotkeyManagerInitialized)", source: "ContentView")

    // If still not initialized and accessibility is enabled, try reinitializing
    if !self.hotkeyManagerInitialized && self.accessibilityEnabled {
        if UserDefaults.standard.bool(forKey: "enableDebugLogs") {
            print("[ContentView] Hotkey manager not healthy, attempting reinitalization")
        }
        DebugLogger.shared.debug("Hotkey manager not healthy, attempting reinitalization", source: "ContentView")
```
Duplicate debug log statements detected.
Lines 1786 and 1791 emit identical log messages ("Initial hotkey manager health check: ..."). This appears to be unintentional duplication—the same check and log occur twice in quick succession.
🔎 Suggested fix to remove duplicate logging
await MainActor.run {
self.hotkeyManagerInitialized = self.hotkeyManager?.validateEventTapHealth() ?? false
DebugLogger.shared.debug("Initial hotkey manager health check: \(self.hotkeyManagerInitialized)", source: "ContentView")
// If still not initialized and accessibility is enabled, try reinitializing
if !self.hotkeyManagerInitialized && self.accessibilityEnabled {
- self.hotkeyManagerInitialized = self.hotkeyManager?.validateEventTapHealth() ?? false
- DebugLogger.shared.debug("Initial hotkey manager health check: \(self.hotkeyManagerInitialized)", source: "ContentView")
-
- // If still not initialized and accessibility is enabled, try reinitializing
- if !self.hotkeyManagerInitialized && self.accessibilityEnabled {
- DebugLogger.shared.debug("Hotkey manager not healthy, attempting reinitalization", source: "ContentView")
- self.hotkeyManager?.reinitialize()
- }
+ DebugLogger.shared.debug("Hotkey manager not healthy, attempting reinitialization", source: "ContentView")
+ self.hotkeyManager?.reinitialize()
}
    }

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In Sources/Fluid/ContentView.swift around lines 1786 to 1795 there are duplicate
debug logs that both print "Initial hotkey manager health check: ..."
back-to-back; remove the redundant log statement so the health check is only
logged once (preferably keep the log after you assign
self.hotkeyManagerInitialized from validateEventTapHealth() so you log the final
state), ensuring no other logic is changed.
```swift
streaming: enableStreaming,
tools: [TerminalService.toolDefinition],
temperature: isReasoningModel ? nil : 0.1,
maxTokens: isReasoningModel ? 4096 : nil,  // Reasoning models like o1 need a budget for thoughts
```
Increase maxTokens budget for reasoning models—current 4096 is significantly insufficient.
OpenAI recommends reserving at least 25,000 tokens for reasoning and outputs when experimenting with reasoning models, and you can adjust this buffer as you become familiar with token requirements. The current setting of 4096 is approximately 6x lower than this minimum recommendation. Depending on problem complexity, reasoning models may generate anywhere from a few hundred to tens of thousands of reasoning tokens, and reasoning tokens occupy space in the model's context window and are billed as output tokens. This budget leaves insufficient room for reasoning processes on moderately complex tasks and may result in incomplete responses with finish_reason='length'.
🤖 Prompt for AI Agents
In Sources/Fluid/Services/CommandModeService.swift around line 834, the
maxTokens for reasoning models is set to 4096 which is too low; update the code
to use a much larger token budget (e.g. at least 25000) for reasoning models
instead of 4096, and make this value configurable via a named constant or
configuration so it can be tuned per model; keep non-reasoning models using nil
(or a smaller default), and ensure any new constant respects model
context-window limits or falls back gracefully if the model cannot support the
requested maxTokens.
```swift
// If we were in thinking mode via a separate field (not tag-based),
// receiving "content" usually means the thinking phase is over.
if state == .inThinking && reasoningField == nil && tagDetectionBuffer.isEmpty {
    // This is a subtle heuristic: if we were thinking, didn't just get a reasoning field chunk,
    // and have no partial tags buffered, we should check if this content chunk
    // is the start of the final answer.
    // For safety with tag-based parsers, we let the parser decide unless it's a known separate-field model.
}
```
Empty if block with dead code comments.
The if block at lines 352-357 contains only comments and no executable code. This appears to be leftover from development. Either implement the heuristic described in the comments or remove the empty block.
🔎 Suggested cleanup
- // If we were in thinking mode via a separate field (not tag-based),
- // receiving "content" usually means the thinking phase is over.
- if state == .inThinking && reasoningField == nil && tagDetectionBuffer.isEmpty {
- // This is a subtle heuristic: if we were thinking, didn't just get a reasoning field chunk,
- // and have no partial tags buffered, we should check if this content chunk
- // is the start of the final answer.
- // For safety with tag-based parsers, we let the parser decide unless it's a known separate-field model.
- }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In Sources/Fluid/Services/LLMClient.swift around lines 350 to 357, there is an
empty if block (state == .inThinking && reasoningField == nil &&
tagDetectionBuffer.isEmpty) that only contains comments; remove the dead code or
implement the described heuristic. Either delete the entire conditional and its
comments if no action is needed, or add the intended logic to detect that a
content chunk likely starts the final answer (e.g., transition state from
.inThinking to .inAnswer or invoke the parser/handler for final-answer start)
and ensure any side effects (state change, buffer flush, parser call) are
covered and tested.
```swift
private func logRequest(_ request: URLRequest) {
    guard let url = request.url, let method = request.httpMethod else { return }

    var bodyString = ""
    if let body = request.httpBody {
        bodyString = String(data: body, encoding: .utf8) ?? ""
    }

    var curl = "curl -X \(method) \"\(url.absoluteString)\" \\\n"
    for (key, value) in request.allHTTPHeaderFields ?? [:] {
        let maskedValue = key.lowercased().contains("auth") ? "Bearer [REDACTED]" : value
        curl += "  -H \"\(key): \(maskedValue)\" \\\n"
    }
    curl += "  -d '\(bodyString)'"

    DebugLogger.shared.info("LLMClient: Full Request as cURL:\n\(curl)", source: "LLMClient")
}
```
Request logging masks auth tokens appropriately, but body may contain sensitive data.
The `logRequest` helper correctly redacts Authorization headers. However, the request body is logged in full (line 627), which could include user prompts or conversation history containing sensitive information. Consider truncating or omitting the body in production builds.
🔎 Suggested improvement for body logging
private func logRequest(_ request: URLRequest) {
guard let url = request.url, let method = request.httpMethod else { return }
var bodyString = ""
if let body = request.httpBody {
- bodyString = String(data: body, encoding: .utf8) ?? ""
+ let fullBody = String(data: body, encoding: .utf8) ?? ""
+ // Truncate body to avoid logging sensitive conversation content
+ bodyString = fullBody.count > 200 ? String(fullBody.prefix(200)) + "...[truncated]" : fullBody
}
var curl = "curl -X \(method) \"\(url.absoluteString)\" \\\n"
for (key, value) in request.allHTTPHeaderFields ?? [:] {
let maskedValue = key.lowercased().contains("auth") ? "Bearer [REDACTED]" : value
curl += " -H \"\(key): \(maskedValue)\" \\\n"
}
curl += " -d '\(bodyString)'"
    DebugLogger.shared.info("LLMClient: Full Request as cURL:\n\(curl)", source: "LLMClient")
}

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
In Sources/Fluid/Services/LLMClient.swift around lines 614 to 630, the
logRequest helper currently prints the full request body which can leak
sensitive user prompts or conversation history; update it to avoid logging full
bodies in non-debug builds by either omitting the body or truncating it to a
safe length and/or replacing it with a placeholder. Concretely: detect if the
build is non-DEBUG (or use a production flag) and in that case replace
bodyString with "[REDACTED]" (or for DEBUG keep only the first N characters and
append "…"), and ensure any JSON body keys likely to contain PII are masked if
you decide to log structured snippets; then log the resulting safe bodyString
instead of the raw body.
Description
Brief description of what this PR does.
Type of Change
Related Issues
Closes #(issue number)
Testing
- brew install swiftlint && swiftlint --strict --config .swiftlint.yml
- brew install swiftformat && swiftformat --config .swiftformat Sources

Screenshots / Video
Add screenshots or a video recording of the app after you have made your changes
Summary by CodeRabbit
New Features
Bug Fixes
Improvements