Conversation


@SylvainChevalier SylvainChevalier commented Jan 29, 2026

Track all key factor creation interactions:

  • Manual creation vs LLM-generated
  • "Ask an LLM" button clicks
  • Automatic LLM generation attempts (background loading)
  • LLM suggestion acceptance, editing, and rejection
  • Successful LLM generation with counts by type

Analytics events:

  • addKeyFactor: Enhanced with source (manual/llm_suggestion/mixed) and type breakdown
  • keyFactorGenerateClicked: When user requests LLM suggestions
  • keyFactorLLMGenerationAttempted: When automatic LLM generation starts (background)
  • keyFactorLLMGenerated: When backend successfully generates suggestions
  • keyFactorLLMSuggestionAccepted: When user accepts a suggestion
  • keyFactorLLMSuggestionEdited: When user edits a suggestion
  • keyFactorLLMSuggestionRejected: When user rejects a suggestion
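As a sketch of the `source` categorization, the value can be derived from how many drafts came from each origin at submit time (the standalone `resolveSource` helper below is illustrative, not the actual code in hooks.ts):

```typescript
type KeyFactorSource = "manual" | "llm_suggestion" | "mixed";

// Hypothetical helper: picks the `source` value sent with the
// addKeyFactor event from the counts of manually written drafts
// vs. drafts taken from LLM suggestions.
function resolveSource(
  manualCount: number,
  suggestedCount: number
): KeyFactorSource {
  if (suggestedCount > 0 && manualCount === 0) return "llm_suggestion";
  if (suggestedCount > 0 && manualCount > 0) return "mixed";
  return "manual"; // no suggestions involved (including the empty case)
}
```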

Fixes #4194

Generated with Claude Code

Summary by CodeRabbit

  • Analytics
    • Enhanced tracking for AI suggestion interactions (editing, accepting, rejecting)
    • Added telemetry for suggestion generation events and submission sources


coderabbitai bot commented Jan 29, 2026

📝 Walkthrough


This pull request adds comprehensive analytics tracking to the key factor creation lifecycle, instrumenting user interactions with LLM-generated suggestions, generation attempts and results, button clicks, and submission source categorization across three component files.

Changes

Cohort: Key Factor Analytics Tracking

Files:
  • front_end/src/app/(main)/questions/[id]/components/key_factors/add_in_comment/key_factors_add_in_comment_llm_suggestions.tsx
  • front_end/src/app/(main)/questions/[id]/components/key_factors/hooks.ts
  • front_end/src/app/(main)/questions/[id]/components/key_factors/key_factors_context.tsx

Summary: Adds analytics events across the key factor lifecycle: tracks generation attempts and results with per-type counts (driver/base_rate/news), captures user interactions (editing/accepting/rejecting LLM suggestions), logs generate button clicks with context (fromComment/fromList), and includes submission source categorization (manual/llm_suggestion/mixed) in creation events.
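The per-type counts mentioned above could be computed along these lines (the `Suggestion` shape and `countsByType` helper are illustrative; the actual code filters typed drafts with isDriverDraft/isBaseRateDraft/isNewsDraft):

```typescript
type KeyFactorType = "driver" | "base_rate" | "news";

interface Suggestion {
  type: KeyFactorType;
}

// Hypothetical sketch of the per-type breakdown attached to the
// keyFactorLLMGenerated event payload.
function countsByType(suggestions: Suggestion[]): Record<KeyFactorType, number> {
  const counts: Record<KeyFactorType, number> = { driver: 0, base_rate: 0, news: 0 };
  for (const s of suggestions) {
    counts[s.type] += 1; // tally each suggestion under its type
  }
  return counts;
}
```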

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 With whiskers twitching and eyes so bright,
We've tracked each click and edit right,
From generation spawns to suggestions rejected,
Every user action now reflected,
Analytics dancing, patterns take flight!

🚥 Pre-merge checks | ✅ 4 passed

  • Description Check: ✅ Passed (check skipped because CodeRabbit's high-level summary is enabled)
  • Title Check: ✅ Passed. The title accurately describes the main change: adding comprehensive analytics tracking for key factor creation across multiple interaction types.
  • Linked Issues Check: ✅ Passed. The pull request fully implements all requirements from issue #4194: manual creation tracking, automatic generation tracking, generate button clicks, editing of LLM suggestions, and acceptance of suggestions, plus additional relevant analytics.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly related to the linked issue #4194 objectives. Analytics tracking is added only for key factor creation interactions, with no unrelated modifications.




github-actions bot commented Jan 29, 2026

🚀 Preview Environment

Your preview environment is ready!

Resource Details

  • 🌐 Preview URL: https://metaculus-pr-4197-claude-issue-4194-20260129-141-preview.mtcl.cc
  • 📦 Docker Image: ghcr.io/metaculus/metaculus:claude-issue-4194-20260129-1414-47dc88d
  • 🗄️ PostgreSQL: NeonDB branch preview/pr-4197-claude-issue-4194-20260129-141
  • Redis: Fly Redis mtc-redis-pr-4197-claude-issue-4194-20260129-141

Details

  • Commit: 47dc88deceb08a885c2dcc4efdcbcb1b313aeaff
  • Branch: claude/issue-4194-20260129-1414
  • Fly App: metaculus-pr-4197-claude-issue-4194-20260129-141

ℹ️ Preview Environment Info

Isolation:

  • PostgreSQL and Redis are fully isolated from production
  • Each PR gets its own database branch and Redis instance
  • Changes pushed to this PR will trigger a new deployment

Limitations:

  • Background workers and cron jobs are not deployed in preview environments
  • If you need to test background jobs, use Heroku staging environments

Cleanup:

  • This preview will be automatically destroyed when the PR is closed

@SylvainChevalier (Contributor, Author) commented:

@claude also keep track of attempted generations when the user doesn't even click the button. I think the front end shows a discreet loading animation while it's working.


claude bot commented Jan 29, 2026

Claude finished @SylvainChevalier's task in 2m 43s.


Completed: Tracking automatic LLM generation attempts ✓

  • Read repository setup instructions (CLAUDE.md)
  • Understand current analytics implementation
  • Find where automatic LLM generation happens
  • Add analytics tracking for automatic generation attempts
  • Update PR description
  • Commit and push changes

What I did

Added tracking for automatic LLM generation attempts that happen in the background before users click any button.

New analytics event: keyFactorLLMGenerationAttempted

  • Location: hooks.ts:157-160
  • Triggers when: A new comment is created and the system automatically starts generating key factor suggestions
  • Category: "automatic" (to distinguish from manual "Ask an LLM" clicks)
  • Timing: Fires before the API call, so it captures attempts even if generation fails

This tracks the "discreet loading animation" phase you mentioned, where the spinner appears on the "Add Key Factor" button while the system works in the background.
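The once-per-comment guard described above can be sketched as a standalone function (illustrative only; the real implementation lives inside a React useEffect in hooks.ts and uses fetchedOnceRef):

```typescript
// Hypothetical standalone version of the fetchedOnceRef guard:
// fire keyFactorLLMGenerationAttempted at most once per comment.
const attemptedComments = new Set<number>();

function trackAutomaticAttempt(
  commentId: number,
  send: (event: string, params: Record<string, string>) => void
): void {
  if (attemptedComments.has(commentId)) return; // already tracked for this comment
  attemptedComments.add(commentId);
  send("keyFactorLLMGenerationAttempted", { event_category: "automatic" });
}
```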

Changes made

  • Updated hooks.ts to add the analytics event in the useEffect that triggers automatic generation
  • Updated PR description to document the new event

---

Add keyFactorLLMGenerationAttempted analytics event to track when
automatic background LLM generation is triggered for key factors.
This captures attempts before the user clicks any button, during
the discreet loading animation phase.

Co-authored-by: Sylvain <SylvainChevalier@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In front_end/src/app/(main)/questions/[id]/components/key_factors/hooks.ts:
- Around line 310-327: The source detection (variables submittedDrafts and
suggested) runs before empty drafts are filtered, so hasManualDrafts/suggested
can be wrong and sendAnalyticsEvent may report "mixed" incorrectly; update the
logic to determine source from the actual filtered results (e.g., the arrays
produced after filtering or from writePayloads by checking each payload's
origin), then set source ("manual", "llm_suggestion", "mixed") based on those
filtered arrays or payload flags and call sendAnalyticsEvent with that corrected
source.
🧹 Nitpick comments (2)
front_end/src/app/(main)/questions/[id]/components/key_factors/key_factors_context.tsx (1)

228-239: Analytics event fires before actual suggestion loading occurs.

The analytics event keyFactorGenerateClicked is tracked correctly when the user clicks the "Ask an LLM" button. The early return on line 229 prevents duplicate tracking during in-flight requests.

However, consider whether the early return should be placed after the analytics call to track all user click intents, even if a request is already in progress. This would provide more accurate data on user engagement patterns.

Optional: Track clicks even during loading
     loadSuggestions: (force?: boolean) => {
+      // Track when user clicks "Ask an LLM" button
+      sendAnalyticsEvent("keyFactorGenerateClicked", {
+        event_category: commentId ? "fromComment" : "fromList",
+        force: !!force,
+      });
+
       if (isLoadingSuggestedKeyFactors) return;
-
-      // Track when user clicks "Ask an LLM" button
-      sendAnalyticsEvent("keyFactorGenerateClicked", {
-        event_category: commentId ? "fromComment" : "fromList",
-        force: !!force,
-      });

       if (force) reloadSuggestions();
       else if (!shouldLoadSuggestions) setShouldLoadSuggestions(true);
     },
front_end/src/app/(main)/questions/[id]/components/key_factors/add_in_comment/key_factors_add_in_comment_llm_suggestions.tsx (1)

221-229: Clarify "Edited" event semantics and consider extracting type detection.

The keyFactorLLMSuggestionEdited event fires when the user clicks the edit button to start editing, not when they save their edits. This may or may not align with analytics requirements—verify this captures the intended user action.

Additionally, the type detection logic is duplicated across three locations (edit, accept, reject handlers). Consider extracting it to reduce repetition.

Extract type detection helper

Add this helper near the top of the component:

const getKeyFactorType = (kf: KeyFactorDraft): "driver" | "base_rate" | "news" =>
  isDriverDraft(kf) ? "driver" : isBaseRateDraft(kf) ? "base_rate" : "news";

Then use it in all three locations:

-    const keyFactorType = isDriverDraft(kf)
-      ? "driver"
-      : isBaseRateDraft(kf)
-        ? "base_rate"
-        : "news";
+    const keyFactorType = getKeyFactorType(kf);
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 45cbb61 and 8fd4803.

📒 Files selected for processing (3)
  • front_end/src/app/(main)/questions/[id]/components/key_factors/add_in_comment/key_factors_add_in_comment_llm_suggestions.tsx
  • front_end/src/app/(main)/questions/[id]/components/key_factors/hooks.ts
  • front_end/src/app/(main)/questions/[id]/components/key_factors/key_factors_context.tsx
🧰 Additional context used
🧬 Code graph analysis (3)
front_end/src/app/(main)/questions/[id]/components/key_factors/hooks.ts (2)
front_end/src/utils/analytics.ts (1)
  • sendAnalyticsEvent (11-17)
front_end/src/utils/key_factors.ts (3)
  • isDriverDraft (9-10)
  • isBaseRateDraft (11-12)
  • isNewsDraft (13-14)
front_end/src/app/(main)/questions/[id]/components/key_factors/key_factors_context.tsx (1)
front_end/src/utils/analytics.ts (1)
  • sendAnalyticsEvent (11-17)
front_end/src/app/(main)/questions/[id]/components/key_factors/add_in_comment/key_factors_add_in_comment_llm_suggestions.tsx (2)
front_end/src/utils/key_factors.ts (2)
  • isDriverDraft (9-10)
  • isBaseRateDraft (11-12)
front_end/src/utils/analytics.ts (1)
  • sendAnalyticsEvent (11-17)
🔇 Additional comments (6)
front_end/src/app/(main)/questions/[id]/components/key_factors/key_factors_context.tsx (1)

242-260: LGTM!

The addition of commentId to the dependency array is correct since it's now referenced in the loadSuggestions callback for the analytics payload.

front_end/src/app/(main)/questions/[id]/components/key_factors/add_in_comment/key_factors_add_in_comment_llm_suggestions.tsx (2)

721-740: Analytics tracks user intent, not outcome.

The keyFactorLLMSuggestionAccepted event fires before the async submission completes. If the submission fails (lines 733-738), the event is still recorded even though the acceptance didn't succeed. This is acceptable if the intent is to track user actions rather than outcomes—the actual successful submissions are tracked separately via addKeyFactor in hooks.ts.
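The intent-vs-outcome distinction can be illustrated with a small sketch (names are illustrative, not the PR's actual handler): tracking before the await records every attempt, while a success-only event would go after the await.

```typescript
// Hypothetical accept handler showing the two possible placements
// of an analytics call relative to an async submission.
async function acceptSuggestion(
  submit: () => Promise<void>,
  send: (event: string) => void
): Promise<void> {
  send("keyFactorLLMSuggestionAccepted"); // intent: fires even if submit() rejects
  try {
    await submit();
    // outcome: a success-only event would be sent here instead
  } catch {
    // submission failure is not reflected in the intent event above
  }
}
```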


747-762: LGTM!

The reject analytics event is correctly placed before the removal action, ensuring the key factor type can be determined before it's removed from the suggestions list.

front_end/src/app/(main)/questions/[id]/components/key_factors/hooks.ts (3)

130-140: LGTM!

The keyFactorLLMGenerated event provides useful analytics with counts broken down by type. The multiple .filter() calls are acceptable given the typically small array size.


147-167: ⚠️ Potential issue | 🟡 Minor

Analytics event may fire for non-existent comments.

The keyFactorLLMGenerationAttempted event fires at line 156, but if the comment is not found (line 152 returns early), the analytics event is still recorded even though no generation attempt actually occurs.

Consider moving the analytics call after the comment existence check:

Proposed fix
   useEffect(() => {
     if (!shouldLoadKeyFactors || !commentId) return;
     if (questionLinksCheckedRef.current.has(commentId)) return;

     const comment = comments.find((c) => c.id === commentId);
     if (!comment) return;

     // Track automatic generation attempt (before user clicks button)
-    if (!fetchedOnceRef.current.has(commentId)) {
+    const isFirstAttempt = !fetchedOnceRef.current.has(commentId);
+    if (isFirstAttempt) {
       sendAnalyticsEvent("keyFactorLLMGenerationAttempted", {
         event_category: "automatic",
       });
     }

     const ids = extractQuestionNumbersFromText(comment.text || "");

Actually, looking more closely, the current placement at lines 154-159 is after the comment existence check at line 152. The control flow is correct—my initial reading was wrong. The analytics only fires if a valid comment exists.


154-159: LGTM!

The keyFactorLLMGenerationAttempted event correctly tracks automatic/background generation attempts. The placement after the comment existence check ensures analytics only fires for valid scenarios, and the fetchedOnceRef check prevents duplicate tracking.


Comment on lines +310 to 327
     // Determine if this is a manual creation or from LLM suggestions
     const hasManualDrafts = submittedDrafts.length > 0;
     const hasSuggestedDrafts = suggested.length > 0;

     // Track the creation source
     let source = "manual";
     if (hasSuggestedDrafts && !hasManualDrafts) {
       source = "llm_suggestion";
     } else if (hasSuggestedDrafts && hasManualDrafts) {
       source = "mixed";
     }

     sendAnalyticsEvent("addKeyFactor", {
       event_label: isNil(commentId) ? "fromList" : "fromComment",
-      event_category: "submit",
+      event_category: submitType,
       source,
       count: writePayloads.length,
     });

⚠️ Potential issue | 🟡 Minor

Source detection may misclassify when manual drafts are empty.

The source detection at lines 311-312 checks submittedDrafts.length > 0 and suggested.length > 0, but this happens before filtering out empty drafts. For example, if a user has an empty driver draft (which gets filtered out at line 263), hasManualDrafts would still be true, potentially classifying the submission as "mixed" when it's actually "llm_suggestion" only.

Consider checking the actual filtered arrays or the final writePayloads:

Proposed fix using writePayloads
     const writePayloads = [
       ...driverPayloads,
       ...baseRatePayloads,
       ...newsPayloads,
     ];

-    // Determine if this is a manual creation or from LLM suggestions
-    const hasManualDrafts = submittedDrafts.length > 0;
-    const hasSuggestedDrafts = suggested.length > 0;
-
-    // Track the creation source
-    let source = "manual";
-    if (hasSuggestedDrafts && !hasManualDrafts) {
-      source = "llm_suggestion";
-    } else if (hasSuggestedDrafts && hasManualDrafts) {
-      source = "mixed";
-    }
+    // Determine if this is a manual creation or from LLM suggestions
+    // Count based on filtered arrays to exclude empty drafts
+    const manualCount =
+      (submitType === "driver" ? driverDrafts.filter((d) => d.driver.text.trim() !== "").length : 0) +
+      (submitType === "base_rate" ? baseRateDrafts.length : 0) +
+      (submitType === "news" ? newsDrafts.length : 0);
+    const suggestedCount =
+      (submitType === "driver" ? suggestedDriverDrafts.filter((d) => d.driver.text.trim() !== "").length : 0) +
+      (submitType === "base_rate" ? suggestedBaseRateDrafts.length : 0) +
+      (submitType === "news" ? suggestedNewsDrafts.length : 0);
+
+    // Track the creation source
+    let source: "manual" | "llm_suggestion" | "mixed" = "manual";
+    if (suggestedCount > 0 && manualCount === 0) {
+      source = "llm_suggestion";
+    } else if (suggestedCount > 0 && manualCount > 0) {
+      source = "mixed";
+    }
🤖 Prompt for AI Agents
In front_end/src/app/(main)/questions/[id]/components/key_factors/hooks.ts
around lines 310-327: The source detection (variables submittedDrafts and
suggested) runs before empty drafts are filtered, so hasManualDrafts/suggested
can be wrong and sendAnalyticsEvent may report "mixed" incorrectly; update the
logic to determine source from the actual filtered results (e.g., the arrays
produced after filtering or from writePayloads by checking each payload's
origin), then set source ("manual", "llm_suggestion", "mixed") based on those
filtered arrays or payload flags and call sendAnalyticsEvent with that corrected
source.


Development: successfully merging this pull request may close issue "Collect more analytics on key factor creation" (#4194).