
Cache AI response in memory while inside the screen#323

Merged
pantelisss merged 5 commits into main from feature/fe-2050-cache-ai-response-in-memory-while-inside-the-screen on Nov 6, 2025

Conversation

@pantelisss
Collaborator

@pantelisss pantelisss commented Nov 5, 2025

Why?

Fixes to the AI support feature

How?

  • Added elevation to the AI button
  • Kept the support view model in memory

Testing

Ensure the elevation is rendered as expected and that the AI response is not fetched again once it has loaded.

Additional context

fixes fe-2050

Summary by CodeRabbit

  • Style

    • Added a shadow to the AI support button for improved visual polish.
  • Refactor

    • Station support now caches its view model to reuse between openings and reduce reloads.
    • Data loading is triggered by the view model on creation for more consistent state.
    • Removed the previous automatic refresh on view appearance to rely on the view-model-driven lifecycle.
  • Chores

    • Updated app display names across build configurations.

@pantelisss pantelisss requested a review from PavlosTze November 5, 2025 15:59
@coderabbitai

coderabbitai Bot commented Nov 5, 2025

Walkthrough

UI-only and lifecycle adjustments: added a shadow to the AI support button, OverviewViewModel now caches/reuses StationSupportViewModel for AI support taps, StationSupportView no longer auto-refreshes on appear, and StationSupportViewModel now calls refresh() in its initializer. Plus minor formatting and Info.plist display name updates.

Changes

Cohort / File(s) Summary
AI support UI
PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/Overview/OverviewView.swift
Added wxmShadow() modifier to aiSupportView after cornerRadius.
Station support view lifecycle
PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportView.swift, PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportViewModel.swift
Removed the view's .task that called viewModel.refresh(); added a call to refresh() inside StationSupportViewModel initializer (so refresh runs when VM is created).
ViewModel caching & tap handling
PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/Overview/OverviewViewModel.swift
Added a private cached stationSupportViewModel: StationSupportViewModel?; AI Support tap reuses cached VM or creates & caches a new one via ViewModelsFactory.
Styling & project metadata
PresentationLayer/UIComponents/Screens/RewardAnalyticsView.swift, PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/StationDetailsContainerView.swift, wxm-ios.xcodeproj/project.pbxproj
Minor formatting/whitespace tweaks and compacted expression in reward animation; updated CFBundleDisplayName values in project PBX/Info.plist entries across configurations.
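Taken together, the caching and eager-loading changes above can be sketched as follows. This is a simplified, self-contained sketch: the types below borrow the PR's names, but the real view models carry dependencies (factory, router, use cases) that are omitted here, and `refreshCount` is a stand-in for the actual network fetch.

```swift
import Foundation

// Stand-in for the real StationSupportViewModel: refresh() now runs in init.
final class StationSupportViewModel {
	let deviceName: String
	private(set) var refreshCount = 0

	init(deviceName: String) {
		self.deviceName = deviceName
		refresh() // eager load: the fetch happens once, when the VM is created
	}

	private func refresh() {
		refreshCount += 1 // stands in for the real AI-support request
	}
}

// Stand-in for OverviewViewModel: cache-or-create on each AI Support tap.
final class OverviewViewModel {
	private var stationSupportViewModel: StationSupportViewModel?
	var deviceName: String?

	// Reuses the cached VM when present, so reopening the support
	// sheet does not trigger another fetch.
	func supportViewModel() -> StationSupportViewModel? {
		guard let name = deviceName else { return nil }
		let vm = stationSupportViewModel ?? StationSupportViewModel(deviceName: name)
		stationSupportViewModel = vm
		return vm
	}
}
```

With this shape, two consecutive taps return the same instance and `refreshCount` stays at 1, which is the behavior the PR title describes.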

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant OverviewView
    participant OverviewVM as OverviewViewModel
    participant Factory as ViewModelsFactory
    participant SupportVM as StationSupportViewModel
    participant SupportView

    User->>OverviewView: Tap AI Support
    OverviewView->>OverviewVM: AISupport tapped

    alt cached StationSupportViewModel exists
        OverviewVM->>SupportVM: reuse cached VM
    else create new VM
        OverviewVM->>Factory: createStationSupportViewModel()
        Factory->>SupportVM: init()
        Note right of SupportVM: refresh() called in init()
        Factory-->>OverviewVM: return VM
        OverviewVM->>OverviewVM: cache VM
    end

    OverviewVM->>SupportView: present SupportView with VM
    SupportView-->>User: show support UI

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Review timing/race considerations from calling refresh() during VM init.
  • Verify cached VM lifecycle to avoid stale state or memory retention.
  • Confirm removal of .task aligns with desired refresh timing.
  • Check Info.plist display name changes for correctness across build configs.

Suggested reviewers

  • PavlosTze

Poem

🐰 A tiny shadow on the AI key,
VM wakes early, ready to be,
Cached in waiting, quick to show,
The view trims whitespace — a gentle glow,
Hop, refresh, and off we go.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
  • Description check (❓ Inconclusive): The description is incomplete: it omits the Screenshots section (marked as applicable in the template) and lacks technical detail about the view model caching mechanism. Add screenshots showing the elevation change and describe how caching prevents refetching (e.g., the specific nil checks or conditional logic used).

✅ Passed checks (1 passed)

  • Title check (✅ Passed): The title accurately describes the main objective, caching the AI response in memory while the user is on the screen, which is the core fix demonstrated by the OverviewViewModel changes that cache stationSupportViewModel.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feature/fe-2050-cache-ai-response-in-memory-while-inside-the-screen



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e3f9f16 and f0535cf.

📒 Files selected for processing (4)
  • PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/Overview/OverviewView.swift (1 hunks)
  • PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/Overview/OverviewViewModel.swift (2 hunks)
  • PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportView.swift (0 hunks)
  • PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportViewModel.swift (1 hunks)
💤 Files with no reviewable changes (1)
  • PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportView.swift
🧰 Additional context used
🧬 Code graph analysis (2)
PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportViewModel.swift (1)
PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/Overview/OverviewViewModel.swift (1)
  • refresh (56-61)
PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/Overview/OverviewViewModel.swift (2)
PresentationLayer/UIComponents/ViewModelsFactory.swift (1)
  • getStationSupportViewModel (400-403)
PresentationLayer/Navigation/Router.swift (1)
  • showBottomSheet (347-351)
🔇 Additional comments (3)
PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/Overview/OverviewView.swift (1)

150-150: LGTM! UI enhancement looks good.

The shadow modifier adds the requested elevation to the AI support button as described in the PR objectives.

PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/Overview/OverviewViewModel.swift (1)

19-21: Validate cached view model before reusing.

The caching approach is good, but the implementation needs safeguards to ensure the cached view model is still valid.

Consider these scenarios where the cache should be invalidated:

  1. Device changes: If refreshWithDevice (line 171) updates to a different device (different name), the cached VM contains data for the wrong station.
  2. Error state: If the cached VM is in .error mode, reusing it shows the same error without retry opportunity (especially problematic since refresh() is now private).

Verify whether refreshWithDevice can update to a different station, and if so, clear the cache when the device name changes.

PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportViewModel.swift (1)

23-24: LGTM! Eager loading aligns well with the caching strategy.

Calling refresh() in the initializer ensures data is fetched immediately when the view model is created, which works well with the caching approach in OverviewViewModel.

Comment on lines +97 to +99
let vm = stationSupportViewModel ?? ViewModelsFactory.getStationSupportViewModel(deviceName: name)
Router.shared.showBottomSheet(.stationSupport(vm), bgColor: .top)
stationSupportViewModel = vm


⚠️ Potential issue | 🟠 Major

Major issue: Cached view model is reused without validating its state or matching device.

The caching logic has two critical flaws:

  1. Reuses error states: If the cached VM entered .error mode, tapping the AI button again shows the same error. Since refresh() is now private (StationSupportViewModel.swift line 27), there's no way to retry the fetch. The user is stuck with the error until they navigate away from the station details entirely.

  2. Doesn't validate device match: If the device changes via refreshWithDevice (line 171), the cached VM may contain data for a different station, leading to incorrect information being displayed.

Apply this diff to validate state and device before reusing:

 func handleAISupportTap() {
     guard let name = device?.name else {
         return
     }
-    let vm = stationSupportViewModel ?? ViewModelsFactory.getStationSupportViewModel(deviceName: name)
+    
+    // Validate cached VM is for the same device and not in error state
+    let vm: StationSupportViewModel
+    if let cached = stationSupportViewModel,
+       case .error = cached.mode {
+        // Don't reuse if in error state - create fresh VM to retry
+        vm = ViewModelsFactory.getStationSupportViewModel(deviceName: name)
+    } else {
+        vm = stationSupportViewModel ?? ViewModelsFactory.getStationSupportViewModel(deviceName: name)
+    }
+    
     Router.shared.showBottomSheet(.stationSupport(vm), bgColor: .top)
     stationSupportViewModel = vm
 }

Additionally, clear the cache in refreshWithDevice when the device changes:

@MainActor
func refreshWithDevice(_ device: DeviceDetails?, followState: UserDeviceFollowState?, error: NetworkErrorResponse?) async {
+   // Clear cache if device changes to prevent showing wrong station's data
+   if self.device?.name != device?.name {
+       stationSupportViewModel = nil
+   }
+   
    self.device = device
    // ... rest of method
}
🤖 Prompt for AI Agents
In
PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/Overview/OverviewViewModel.swift
around lines 97-99, the code reuses a cached stationSupportViewModel
unconditionally which can reuse an .error state or a VM tied to a different
device; update the logic to validate the cached VM before reuse by ensuring its
device identity matches the current station name and its state is not an error
(create a fresh ViewModel via
ViewModelsFactory.getStationSupportViewModel(deviceName: name) when the cached
VM fails these checks), then assign and present that VM; additionally, in
refreshWithDevice (around line 171) clear stationSupportViewModel (set to nil)
whenever the incoming device name differs from the existing one to avoid showing
another station’s data.

Comment on line 27

 }

-func refresh() {
+private func refresh() {


⚠️ Potential issue | 🟠 Major

Major issue: Making refresh() private prevents retry functionality.

Changing refresh() to private eliminates the ability to retry after an error, which is particularly problematic with the new caching mechanism in OverviewViewModel.

Here's the problematic flow:

  1. User taps AI button → VM is created → refresh() called in init → API fails → mode is set to .error
  2. VM is cached in OverviewViewModel (line 99)
  3. User dismisses sheet and taps AI button again → cached VM is reused (still in .error state)
  4. Since refresh() is private, there's no way to trigger a retry
  5. User is stuck with the error until they navigate away from the entire station details view

Solution: Keep refresh() internal (not private) to allow retry functionality:

-private func refresh() {
+func refresh() {
     mode = .loading
     // ... rest of implementation
 }

Then update OverviewViewModel.handleAISupportTap() to call refresh() when reusing a cached VM in error state:

if let cached = stationSupportViewModel {
    if case .error = cached.mode {
        cached.refresh() // Retry the fetch
    }
    vm = cached
} else {
    vm = ViewModelsFactory.getStationSupportViewModel(deviceName: name)
}

Alternatively, if you want to keep the eager-loading-only pattern, ensure OverviewViewModel doesn't cache VMs that are in error state (see comment on OverviewViewModel.swift lines 97-99).

🤖 Prompt for AI Agents
In
PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportViewModel.swift
around line 27, the refresh() method was made private which prevents callers
from retrying a failed load when a cached VM is reused; change refresh() back to
internal (remove the private modifier) so external callers can invoke it, and
then update OverviewViewModel.handleAISupportTap() to call cached.refresh() when
the cached VM is in .error state (or alternatively avoid caching VMs that are in
.error) so the retry path is available to users.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportViewModel.swift (1)

27-48: Add retry logic when reusing cached StationSupportViewModel in error state.

The refresh() method is accessible as intended, but handleAISupportTap() reuses the cached stationSupportViewModel without triggering a retry. If the VM entered an .error state, tapping the AI button again displays the same stale error with no opportunity to refresh.

The cached VM must call refresh() when reused, or be cleared on screen dismissal to force a fresh instance:

func handleAISupportTap() {
    guard let name = device?.name else {
        return
    }
    let vm = stationSupportViewModel ?? ViewModelsFactory.getStationSupportViewModel(deviceName: name)
    if stationSupportViewModel === vm && vm.mode == .error {
        vm.refresh()  // Retry if reusing errored VM
    }
    Router.shared.showBottomSheet(.stationSupport(vm), bgColor: .top)
    stationSupportViewModel = vm
}
🧹 Nitpick comments (1)
PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportViewModel.swift (1)

27-48: Consider canceling in-flight requests when refresh() is called again.

If refresh() is invoked multiple times in quick succession (e.g., user rapidly dismisses and reopens the support sheet, or a retry mechanism calls it while a previous request is still pending), multiple publisher chains will be active simultaneously. While the cancellables set stores all subscriptions, there's no explicit cancellation of previous requests before starting a new one.

This could lead to:

  • Race conditions where responses arrive out of order
  • The mode being set by a stale response after a newer request has already completed
  • Unnecessary network requests

Apply this diff to cancel previous requests before starting a new refresh:

 func refresh() {
+    cancellables.removeAll() // Cancel any in-flight requests
     mode = .loading
     do {
         try useCase.getDeviceSupport(deviceName: stationName)
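A minimal, self-contained version of that suggestion follows. The publisher, the stored property, and the `SupportLoader` name are stand-ins invented for illustration; only the `cancellables.removeAll()` call reflects the actual proposal.

```swift
import Combine
import Foundation

// Sketch of cancelling in-flight work before starting a new refresh.
final class SupportLoader {
	private var cancellables = Set<AnyCancellable>()
	private(set) var latest: String?

	func refresh(with publisher: AnyPublisher<String, Never>) {
		// Dropping the old subscriptions cancels any still-pending
		// request, so a stale response can no longer overwrite a
		// newer one after it has already completed.
		cancellables.removeAll()
		publisher
			.sink { [weak self] value in self?.latest = value }
			.store(in: &cancellables)
	}
}
```

The same effect could be achieved by keeping a dedicated `AnyCancellable?` for the refresh request and reassigning it, which cancels the previous subscription implicitly; clearing the whole set is the simpler fit for the existing `cancellables` property.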
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 46e1413 and 1ed975c.

📒 Files selected for processing (2)
  • PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportViewModel.swift (1 hunks)
  • wxm-ios.xcodeproj/project.pbxproj (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportViewModel.swift (1)
PresentationLayer/UIComponents/Screens/WeatherStations/StationDetails/Overview/OverviewViewModel.swift (1)
  • refresh (56-61)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: WeatherXM | Unit tests | wxm-ios - iOS
🔇 Additional comments (2)
PresentationLayer/UIComponents/Screens/WeatherStations/StationSupport/StationSupportViewModel.swift (1)

20-25: LGTM! Eager loading pattern aligns well with the caching strategy.

Calling refresh() in the initializer ensures the AI response is fetched immediately when the view model is created. This works well with the caching mechanism introduced in OverviewViewModel, allowing the cached instance to retain the loaded data across multiple presentations of the support sheet.

wxm-ios.xcodeproj/project.pbxproj (1)

4968-4968: CFBundleDisplayName change verified — approved.

The mock configuration display name "WXM Mock" is consistent with the naming convention: Debug uses "WXM Dev", Mock uses "WXM Mock", and Release uses "WeatherXM". No conflicting entries in Info.plist, so all CFBundleDisplayName values are cleanly managed through build configuration overrides.

GENERATE_INFOPLIST_FILE = YES;
INFOPLIST_FILE = "wxm-ios/Info.plist";
INFOPLIST_KEY_CFBundleDisplayName = "WXM Dev";
INFOPLIST_KEY_CFBundleDisplayName = WeatherXM;


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Release display name → WeatherXM — LGTM.

Branding for production looks correct. Ensure there’s no conflicting CFBundleDisplayName in Info.plist at build time.


🏁 Script executed:

#!/bin/bash
# Double-check effective names across configs (Debug/Mock/Release)
rg -n 'INFOPLIST_KEY_CFBundleDisplayName' wxm-ios.xcodeproj/project.pbxproj -C1

Length of output: 1545


Fix missing quotes around WeatherXM at line 5195 for consistency.

The Release configuration is missing quotes that are present in Debug ("WXM Dev") and Mock ("WXM Mock") configurations. Line 5195 should be:

INFOPLIST_KEY_CFBundleDisplayName = "WeatherXM";

Without quotes, the value may be incorrectly parsed as an identifier rather than a string literal, causing potential build or runtime issues.

🤖 Prompt for AI Agents
In wxm-ios.xcodeproj/project.pbxproj around line 5195, the
INFOPLIST_KEY_CFBundleDisplayName value for the Release configuration is missing
quotes; update the line to wrap WeatherXM in double quotes so it reads as a
string literal (matching Debug/Mock entries) to avoid parsing issues and ensure
consistent formatting.

@weatherxmdev
Collaborator

Code Coverage Summary

| Framework | Source Files | Coverage |
| --- | --- | --- |
| DataLayer.framework | 49 | 66.82% |
| DomainLayer.framework | 67 | 83.79% |
| Toolkit.framework | 30 | 53.38% |
| WeatherXM.app | 406 | 14.17% |
| station-intent.appex | 2 | 83.33% |
| station-widgetExtension.appex | 45 | 33.98% |

@pantelisss pantelisss merged commit ed0618f into main Nov 6, 2025
4 checks passed
@pantelisss pantelisss deleted the feature/fe-2050-cache-ai-response-in-memory-while-inside-the-screen branch November 6, 2025 13:52