Updated Organization feature and allowed guests to view the atcoder leaderboard #201
Arpit-Mahajan09 wants to merge 8 commits into OpenLake:main from
Conversation
Walkthrough
Adds multiple Django migrations for organizations, user profile fields, achievements/votes, and schema shifts (username → user_id); adjusts Codeforces last_activity defaults. Updates backend Celery lookup to use user_id. Frontend refactors layout, routing, scrollbar styling, navbar ordering, and trend-analysis token handling (localStorage).
Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 8
🧹 Nitpick comments (2)
api/leaderboard/management/commands/update_db.py (1)
314-316: Remove redundant `import re` statement.
`re` is already imported at the module level (line 3), making this local import unnecessary.
🧹 Proposed fix
  def atcoder_user_update(self):
-     import re
      ac_users = AtcoderUser.objects.all()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/leaderboard/management/commands/update_db.py` around lines 314 - 316, The local import of the regex module inside the atcoder_user_update function is redundant because re is already imported at module level; remove the line "import re" from the atcoder_user_update function so the function uses the module-level re import (refer to function atcoder_user_update and the module-level re import at top of the file).
app/src/components/PlatformStreakFetcher.jsx (1)
119-128: Consider adding explicit error handling for `JSON.parse`.
If `calendar_data` contains malformed JSON, `JSON.parse` at line 125 will throw. While the outer `catch` block handles this gracefully, adding explicit handling would improve debugging and provide a clearer error message.
🛡️ Proposed improvement
  // The backend returns calendar_data as a JSON string of the heatmap array (userDailySubmissionsStats)
  if (data && data.calendar_data) {
-   const parsedCalendar = JSON.parse(data.calendar_data);
+   let parsedCalendar;
+   try {
+     parsedCalendar = JSON.parse(data.calendar_data);
+   } catch (e) {
+     console.error("Failed parsing CodeChef calendar_data", e);
+     updateStreak(platform, 0);
+     return;
+   }
    const sortedEntries = parsedCalendar
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/PlatformStreakFetcher.jsx` around lines 119 - 128, Wrap the JSON.parse call for data.calendar_data in its own try/catch to provide a clearer, contextual error message and safe fallback: inside the block that reads data.calendar_data, replace direct JSON.parse(data.calendar_data) with a guarded parse (try { parsedCalendar = JSON.parse(data.calendar_data) } catch (err) { processLogger?.error or console.error with a message including the username and the raw calendar_data; set parsedCalendar = [] or rethrow depending on desired behavior }) so subsequent code that computes sortedEntries (the .filter/.sort chain) won't crash on malformed JSON and the log includes useful debugging info.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/leaderboard/views.py`:
- Around line 396-412: When user.is_outdated and
get_codechef_data(user.username) returns "NOT_FOUND", do not fall through to
serializer-cached response; instead return a 404 response (or raise Http404)
immediately. Modify the branch in the outdated-user logic (the block using
get_codechef_data, user.save and CC_Serializer) to check for user_data ==
"NOT_FOUND" and return Response({"detail": "CodeChef user not found"},
status=404) (or raise Http404()) before the serializer/Response call so
deleted/renamed handles don’t continue to serve stale cached records.
- Around line 940-946: The organization creation can be left orphaned if
serializer.save() succeeds but OrganizationMember.objects.create() fails; wrap
both writes in a single atomic transaction: in perform_create, import and use
transaction.atomic() to call serializer.save(admin=self.request.user) and
OrganizationMember.objects.create(organization=organization,
user=self.request.user) inside the same atomic block so both succeed or both
rollback; reference perform_create, serializer.save, and
OrganizationMember.objects.create to locate the change.
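The all-or-nothing behavior that `transaction.atomic()` provides for the `perform_create` fix can be illustrated outside Django with stdlib sqlite3. This is a minimal sketch under stated assumptions: the table names and the simulated failure are hypothetical stand-ins, not the project's schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE organization (name TEXT)")
conn.execute("CREATE TABLE member (org TEXT)")
conn.commit()

try:
    with conn:  # one transaction: both inserts commit together, or neither does
        conn.execute("INSERT INTO organization VALUES ('acme')")
        raise RuntimeError("member creation failed")  # simulated second-write failure
except RuntimeError:
    pass  # the context manager already rolled the transaction back

# no orphaned organization row remains after the rollback
orphans = conn.execute("SELECT COUNT(*) FROM organization").fetchone()[0]
print(orphans)  # → 0
```

In the Django view, wrapping both `serializer.save(admin=self.request.user)` and `OrganizationMember.objects.create(...)` inside one `with transaction.atomic():` block gives the same guarantee.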
In `@app/src/App.css`:
- Around line 11-16: The .App CSS rule is now orphaned and GoToTop.jsx's
document.querySelector(".App") returns null; either add the App class to the new
wrapper component (use SidebarInset: give it className="App" or include "App" in
its class list where SidebarInset is rendered) or change GoToTop.jsx to select
the new scrollable element (e.g., querySelector("main") or the specific
container used) and update any associated references to use that element for
scrolling; modify whichever file you choose (App.jsx/SidebarInset rendering or
GoToTop.jsx selector) so the selector matches the actual DOM element at runtime.
In `@app/src/components/GoToTop.jsx`:
- Around line 5-8: The GoToTop button currently queries
document.querySelector(".App") (appElement) but the actual scrollable container
is the page's main element; update the selector in GoToTop.jsx to target the
scrolling container (e.g., document.querySelector("main") or a specific
scrollable container ID/class), use that element (e.g., containerElement) for
containerElement.scrollTo({ left: 0, top: 0, behavior: "smooth" }), and keep the
existing null check so the button works when the main overflow-y-auto container
is present.
In `@app/src/components/Navbar.jsx`:
- Around line 126-132: The spacing check currently uses a hardcoded index (isOrg
= index === 4) which targets the wrong menu item; update the logic in the
SidebarMenuItem rendering to identify Organizations reliably (either change the
index to index === 5 or, preferably, use a semantic check like isOrg =
item.title === "Organizations") so the mt-8 class is applied to the
Organizations item; adjust the isOrg variable and keep the SidebarMenuItem
className usage unchanged.
In `@app/src/components/OrganizationLeaderboard.jsx`:
- Around line 126-128: The icon-only Buttons lack accessible names; update the
Button that wraps ChevronLeft (the back button with onClick={() =>
navigate("/organizations")}) and the copy Button (the icon-only copy action in
the block around the copy handler) to include explicit aria-label attributes
(e.g., aria-label="Go back to organizations" and aria-label="Copy organization
ID" or similar) so screen readers receive a clear action name; ensure labels are
descriptive and unique to the action.
In `@app/src/components/Organizations.jsx`:
- Around line 60-67: loadPublicOrganizations doesn't validate the API response
before calling setPublicOrganizations, unlike loadOrganizations; update
loadPublicOrganizations to check that the fetched data from
fetchPublicOrganizations is an array (e.g., using Array.isArray(data)) and only
call setPublicOrganizations(data) when it is an array, otherwise log a warning
and setPublicOrganizations([]) (or leave existing state) to avoid runtime errors
when rendering publicOrganizations.
In `@app/src/components/OrganizationsModals.jsx`:
- Around line 21-25: The submit handler currently calls onCreate and immediately
closes the modal; make handleSubmit asynchronous, await the onCreate(...) call
and only call onClose() (and any form resets) after the awaited promise resolves
successfully; wrap the await in try/catch to preserve the modal state on failure
(and surface or rethrow the error) so failed create/join requests do not dismiss
the dialog or discard entered data.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: eba11bfd-bc20-4377-92b8-ec812c160250
⛔ Files ignored due to path filters (4)
app/src/icons/atcoder1.svg is excluded by !**/*.svg
app/src/icons/atcoderDarkTheme.svg is excluded by !**/*.svg
app/src/icons/atcoderLight.svg is excluded by !**/*.svg
app/src/icons/atcoderLightTheme.svg is excluded by !**/*.svg
📒 Files selected for processing (23)
api/leaderboard/management/commands/update_db.py
api/leaderboard/migrations/0019_codechefuser_calendar_data.py
api/leaderboard/migrations/0020_organization_organizationmember.py
api/leaderboard/migrations/0021_leetcodeuser_calendar_data_usernames_ac_uname_and_more.py
api/leaderboard/migrations/0022_alter_organization_join_code.py
api/leaderboard/models.py
api/leaderboard/serializers.py
api/leaderboard/urls.py
api/leaderboard/views.py
app/src/App.css
app/src/App.jsx
app/src/components/AtcoderTable.jsx
app/src/components/Footer.jsx
app/src/components/Friends.jsx
app/src/components/GoToTop.jsx
app/src/components/NavMenu.jsx
app/src/components/Navbar.jsx
app/src/components/OrganizationLeaderboard.jsx
app/src/components/Organizations.jsx
app/src/components/OrganizationsModals.jsx
app/src/components/PlatformStreakFetcher.jsx
app/src/index.css
app/src/utils/api_organizations.js
if user.is_outdated:
    user_data = self.get_codechef_data(user.username)
    if isinstance(user_data, dict):
        user.rating = user_data["rating"]
        user.max_rating = user_data["highest_rating"]
        user.Global_rank = user_data["global_rank"]
        user.Country_rank = user_data["country_rank"]
        user.avatar = user_data["avatar"]
        user.calendar_data = user_data["calendar_data"]
        user.last_updated = datetime.now()
        user.save()
    elif user_data == "TRANSIENT_ERROR":
        # For an existing outdated user, we can either return the old data
        # or provide a warning. Let's return the old data for now but log.
        logger.warning(f"Transient error while refreshing {user.username}, serving cached data.")
serializer = CC_Serializer(user)
return Response(serializer.data)
Don't serve stale cached data after a confirmed NOT_FOUND.
When an existing row is outdated and get_codechef_data() returns NOT_FOUND, this branch falls through and still serializes the cached record. Deleted or renamed CodeChef handles will keep looking valid indefinitely instead of returning 404.
Proposed fix
  if user.is_outdated:
      user_data = self.get_codechef_data(user.username)
      if isinstance(user_data, dict):
          user.rating = user_data["rating"]
          user.max_rating = user_data["highest_rating"]
          user.Global_rank = user_data["global_rank"]
          user.Country_rank = user_data["country_rank"]
          user.avatar = user_data["avatar"]
          user.calendar_data = user_data["calendar_data"]
          user.last_updated = datetime.now()
          user.save()
      elif user_data == "TRANSIENT_ERROR":
          # For an existing outdated user, we can either return the old data
          # or provide a warning. Let's return the old data for now but log.
          logger.warning(f"Transient error while refreshing {user.username}, serving cached data.")
+     elif user_data == "NOT_FOUND":
+         return Response(
+             {"error": "User not found"},
+             status=status.HTTP_404_NOT_FOUND,
+         )
  serializer = CC_Serializer(user)
  return Response(serializer.data)
const appElement = document.querySelector(".App");
if (appElement) {
  appElement.scrollTo({ left: 0, top: 0, behavior: "smooth" });
}
Scroll target is incorrect, causing button no-op.
Lines 5-8 target .App, but the active scroll container is main with overflow-y-auto (see app/src/App.jsx, Line 81). In this layout, the button can fail to scroll anything.
💡 Proposed fix
const GoToTop = () => {
  const gotop = () => {
-   const appElement = document.querySelector(".App");
-   if (appElement) {
-     appElement.scrollTo({ left: 0, top: 0, behavior: "smooth" });
-   }
+   const scrollContainer =
+     document.querySelector("main.overflow-y-auto") ?? document.scrollingElement;
+   if (scrollContainer) {
+     scrollContainer.scrollTo({ left: 0, top: 0, behavior: "smooth" });
+   }
  };
<Button variant="ghost" size="icon" onClick={() => navigate("/organizations")}>
  <ChevronLeft className="h-5 w-5" />
</Button>
Add accessible names to the icon-only buttons.
The back button and copy button render only icons, so screen readers don't get a usable label for either action. Give both buttons explicit aria-labels.
Proposed fix
- <Button variant="ghost" size="icon" onClick={() => navigate("/organizations")}>
+ <Button
+   variant="ghost"
+   size="icon"
+   aria-label="Back to groups"
+   onClick={() => navigate("/organizations")}
+ >
    <ChevronLeft className="h-5 w-5" />
  </Button>
...
  <Button
    variant="outline"
    size="icon"
    className="h-8 w-8"
+   aria-label={copied ? "Join code copied" : "Copy join code"}
    onClick={() => {
      navigator.clipboard.writeText(orgDetails.join_code);
      setCopied(true);
Also applies to: 147-158
const loadPublicOrganizations = async () => {
  try {
    const data = await fetchPublicOrganizations(authTokens);
    setPublicOrganizations(data);
  } catch (error) {
    console.error("Error loading public organizations:", error);
  }
};
Add array validation for publicOrganizations data.
loadOrganizations at line 51 validates the response is an array before setting state, but loadPublicOrganizations at line 63 does not. If the API returns a non-array response, this could cause runtime errors when rendering.
🛡️ Proposed fix
  const loadPublicOrganizations = async () => {
    try {
      const data = await fetchPublicOrganizations(authTokens);
-     setPublicOrganizations(data);
+     setPublicOrganizations(Array.isArray(data) ? data : []);
    } catch (error) {
      console.error("Error loading public organizations:", error);
+     setPublicOrganizations([]);
    }
  };
const handleSubmit = (e) => {
  e.preventDefault();
  onCreate({ name, description, is_private: isPrivate });
  onClose();
};
🧩 Analysis chain
🏁 Script executed:
# First, locate and read the OrganizationsModals.jsx file
find . -name "OrganizationsModals.jsx" -type f
Repository: OpenLake/Leaderboard-Pro
Length of output: 112
🏁 Script executed:
# List the app/src/components directory
ls -la app/src/components/ 2>/dev/null | head -20
Repository: OpenLake/Leaderboard-Pro
Length of output: 1305
🏁 Script executed:
# Get more context about the file size
wc -l app/src/components/OrganizationsModals.jsx 2>/dev/null
Repository: OpenLake/Leaderboard-Pro
Length of output: 114
🏁 Script executed:
# Read the entire OrganizationsModals.jsx file
cat -n app/src/components/OrganizationsModals.jsx
Repository: OpenLake/Leaderboard-Pro
Length of output: 4375
🏁 Script executed:
# Find where CreateGroupModal and JoinGroupModal are used
rg "CreateGroupModal|JoinGroupModal" --type jsx --type ts -B 2 -A 2
Repository: OpenLake/Leaderboard-Pro
Length of output: 95
🏁 Script executed:
# Search for CreateGroupModal and JoinGroupModal usage without type restriction
rg "CreateGroupModal|JoinGroupModal" -B 2 -A 2
Repository: OpenLake/Leaderboard-Pro
Length of output: 1920
🏁 Script executed:
# Now find the Organizations.jsx file and look for handleCreate and onJoin implementations
cat -n app/src/components/Organizations.jsx
Repository: OpenLake/Leaderboard-Pro
Length of output: 12674
🏁 Script executed:
# Let me verify the API functions to confirm they are async/return promises
find . -path ./node_modules -prune -o -name "*api_organizations*" -type f -print
Repository: OpenLake/Leaderboard-Pro
Length of output: 104
🏁 Script executed:
# Read the API file to see the function implementations
cat -n app/src/utils/api_organizations.js
Repository: OpenLake/Leaderboard-Pro
Length of output: 5036
Await the submit callback before closing these dialogs.
These handlers fire the parent callback and immediately close/reset the modal. The callbacks are async functions making network requests, so a failed create/join still dismisses the dialog and, for joins, throws away the entered code. Keep the modal open until the promise resolves successfully.
Proposed fix
- const handleSubmit = (e) => {
+ const handleSubmit = async (e) => {
    e.preventDefault();
-   onCreate({ name, description, is_private: isPrivate });
+   await onCreate({ name, description, is_private: isPrivate });
+   setName("");
+   setDescription("");
+   setIsPrivate(true);
    onClose();
  };
...
- const handleSubmit = (e) => {
+ const handleSubmit = async (e) => {
    e.preventDefault();
-   onJoin(joinCode);
+   await onJoin(joinCode.trim());
    setJoinCode("");
    onClose();
  };
…eaderboard with a few bug fixes
Actionable comments posted: 7
🧹 Nitpick comments (5)
api/leaderboard/models.py (2)
269-272: Add blank line before the `UnifiedScoreHistory` class.
PEP 8 recommends two blank lines between top-level class definitions.
♻️ Proposed fix
      def __str__(self):
          return f"{self.user.username} in {self.organization.name}"
+
+
  # ── NEW: stores daily unified score snapshots per user ────────────────────────
  class UnifiedScoreHistory(models.Model):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/leaderboard/models.py` around lines 269 - 272, Add a single blank line between the end of the __str__ method and the class declaration for UnifiedScoreHistory so there are two blank lines separating top-level classes per PEP 8; locate the __str__ method that returns f"{self.user.username} in {self.organization.name}" and insert one empty line before the line starting with "class UnifiedScoreHistory(models.Model):".
Verify each finding against the current code and only fix it if needed. In `@api/leaderboard/models.py` around lines 269 - 272, Add a single blank line between the end of the __str__ method and the class declaration for UnifiedScoreHistory so there are two blank lines separating top-level classes per PEP 8; locate the __str__ method that returns f"{self.user.username} in {self.organization.name}" and insert one empty line before the line starting with "class UnifiedScoreHistory(models.Model):".
284-284: Replace en-dash with hyphen-minus in comment.
The comment uses an en-dash (–) which can cause issues with text searching/grepping. Use a standard hyphen-minus (-).
♻️ Proposed fix
- # per-platform normalized scores (0–1 each, same as analytics.py)
+ # per-platform normalized scores (0-1 each, same as analytics.py)
Verify each finding against the current code and only fix it if needed. In `@api/leaderboard/models.py` at line 284, Replace the en-dash in the comment inside leaderboard.models that reads "per-platform normalized scores (0–1 each, same as analytics.py)" with a standard hyphen-minus so it becomes "(0-1 each, ...)" to ensure consistent ASCII hyphens for searching/grepping; locate the comment in the module (the line containing "per-platform normalized scores") and update the punctuation only.
api/leaderboard/analytics.py (2)
187-192: Consider using `method="dense"` for ranking.
The current `method="min"` will leave gaps in ranks when there are ties (e.g., two users with rank 1 means the next user gets rank 3). For leaderboards, `method="dense"` is often preferred to produce consecutive ranks (1, 1, 2, 3...).
♻️ Proposed change
- df["rank"] = df["total_score"].rank(ascending=False, method="min")
+ df["rank"] = df["total_score"].rank(ascending=False, method="dense")
Verify each finding against the current code and only fix it if needed. In `@api/leaderboard/analytics.py` around lines 187 - 192, The ranking currently uses df["total_score"].rank(ascending=False, method="min") which leaves gaps for tied scores; change it to use method="dense" instead (i.e., df["rank"] = df["total_score"].rank(ascending=False, method="dense")) so ranks are consecutive for ties; update the block that computes df["rank"] and keep the subsequent return df.sort_values("rank") unchanged.
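The gap-vs-consecutive behavior this comment describes can be seen with a tiny pandas example (the scores below are made up for illustration):

```python
import pandas as pd

scores = pd.Series([90, 90, 80, 70])

# method="min": tied scores share the lowest rank, leaving a gap afterwards
min_ranks = scores.rank(ascending=False, method="min").tolist()
print(min_ranks)    # [1.0, 1.0, 3.0, 4.0]

# method="dense": tied scores share a rank and the next rank is consecutive
dense_ranks = scores.rank(ascending=False, method="dense").tolist()
print(dense_ranks)  # [1.0, 1.0, 2.0, 3.0]
```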
162-183: AtCoder platform is missing from the unified ranking.
The `build_unified_ranking()` function includes GitHub, Codeforces, CodeChef, and LeetCode, but excludes AtCoder despite it being one of the six platforms mentioned in the PR objectives. The `UserNames` model has an `ac_uname` field (per `api/leaderboard/models.py:147`), but it's not used here.
Consider adding AtCoder to the unified ranking for consistency:
♻️ Proposed fix to include AtCoder
+ from .models import (
+     githubUser,
+     codeforcesUser,
+     codechefUser,
+     LeetcodeUser,
+     AtcoderUser,
+ )
+
+ def atcoder_score(df):
+     if df.empty:
+         return pd.DataFrame(columns=["username", "ac_score"])
+     df = df.fillna(0)
+     df["rating_norm"] = safe_normalize(numeric_series(df, "rating"))
+     df["highest_norm"] = safe_normalize(numeric_series(df, "highest_rating"))
+     df["ac_score"] = 0.6 * df["rating_norm"] + 0.4 * df["highest_norm"]
+     return df[["username", "ac_score"]]
Then update `build_unified_ranking()` to include AtCoder in the username fallback and score computation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/leaderboard/analytics.py` around lines 162 - 183, The unified ranking is missing AtCoder: update build_unified_ranking to use mapping.ac_uname in the username fallback (add mapping.ac_uname to the or-chain), add an "ac_uname": mapping.ac_uname entry, compute "ac_score" (e.g., float(a.loc[mapping.ac_uname, "ac_score"]) if mapping.ac_uname in a.index else 0.0 using the AtCoder DataFrame variable used elsewhere), and include "ac_score" in the total_score aggregation alongside github/cf/cc/lt scores; reference mapping.ac_uname and the build_unified_ranking function to locate the change.
api/leaderboard/trends_views.py (1)
79-79: Use timezone-aware `fromtimestamp` instead of deprecated `utcfromtimestamp`.
`datetime.utcfromtimestamp()` is deprecated since Python 3.12 and returns a naive datetime. This pattern also appears on lines 154, 211, 275, and 304.
♻️ Proposed fix
- d = datetime.utcfromtimestamp(int(ts_str)).date()
+ d = datetime.fromtimestamp(int(ts_str), tz=timezone.utc).date()
Apply similar changes to lines 154, 211, 275, and 304.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/leaderboard/trends_views.py` at line 79, Replace deprecated naive calls to datetime.utcfromtimestamp(...) with timezone-aware datetime.fromtimestamp(..., tz=timezone.utc) and then call .date(); update the import in trends_views.py to include timezone from datetime if not present. Specifically change occurrences like d = datetime.utcfromtimestamp(int(ts_str)).date() (and the similar occurrences at the other spots) to use datetime.fromtimestamp(int(ts_str), tz=timezone.utc).date() so all timestamps are timezone-aware; ensure any tests or code that rely on naive datetimes are adjusted accordingly.
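A quick check shows the replacement yields the same calendar date while staying timezone-aware (the timestamp 1700000000 is an arbitrary example, not from the codebase):

```python
from datetime import datetime, timezone

ts = 1700000000  # arbitrary example epoch timestamp

# deprecated form (emits a DeprecationWarning on Python 3.12+, returns naive datetime)
naive_date = datetime.utcfromtimestamp(ts).date()
# timezone-aware replacement suggested in the review
aware_date = datetime.fromtimestamp(ts, tz=timezone.utc).date()

print(aware_date, naive_date == aware_date)  # 2023-11-14 True
```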
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/leaderboard/celery.py`:
- Around line 437-443: The code is using the transient platform handle stored in
username to key snapshots, which can collide or change; switch the persistence
key to a stable identifier (preferably user_id). In the block where
build_unified_ranking() yields row and you assign username =
row.get("username"), instead extract a stable id (e.g., user_id =
row.get("user_id") or row.get("id") if available), and call
UnifiedScoreHistory.objects.update_or_create(user_id=user_id, date=today,
defaults={...}) while still storing/updating username/display name inside
defaults so the human-readable handle is preserved; update any model field names
used by UnifiedScoreHistory accordingly.
- Around line 432-452: The try/except around build_unified_ranking() and the
loop catches all exceptions and only prints, causing Celery to treat the task as
successful even when UnifiedScoreHistory writes fail; remove the broad exception
swallowing and let errors propagate (or log with logger.exception and then
re-raise) so Celery will record a failure and trigger retries; specifically
update the block around build_unified_ranking(), the DataFrame iteration, and
UnifiedScoreHistory.objects.update_or_create to either not catch Exception or to
re-raise after logging, ensuring task failures are visible to Celery.
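The log-then-re-raise pattern this prompt asks for can be sketched in plain Python; `build_ranking` below is a hypothetical stand-in for `build_unified_ranking()`, and no Celery is needed to see the behavior:

```python
import logging

logger = logging.getLogger("celery_sketch")

def build_ranking():
    # hypothetical stand-in for build_unified_ranking(); fails to exercise the error path
    raise ValueError("upstream data error")

def snapshot_unified_scores():
    try:
        rows = build_ranking()
    except Exception:
        # log the full traceback, then re-raise so the task runner
        # (Celery in the real code) records a failure and can retry
        logger.exception("unified score snapshot failed")
        raise

try:
    snapshot_unified_scores()
    outcome = "success"
except ValueError:
    outcome = "failed"
print(outcome)  # → failed
```

With a bare `except Exception: print(...)` instead, `outcome` would be "success" and Celery would never see the failure.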
In `@api/leaderboard/migrations/0023_merge_20260329_1109.py`:
- Around line 8-10: The merge migration 0023_merge_20260329_1109 currently
depends on the wrong head ("0002_alter_codeforcesuser_last_activity_and_more");
update the dependencies list in the migrations file (the dependencies variable)
so it depends on the two branch heads being merged (include
"0022_alter_organization_join_code" and the other branch head
"0019_codechefuser_calendar_data") instead of the 0002 entry, i.e. list both
("leaderboard", "0022_alter_organization_join_code") and ("leaderboard",
"0019_codechefuser_calendar_data") as dependencies.
In `@api/leaderboard/trends_views.py`:
- Around line 296-297: Replace the silent except block that currently reads
"except Exception: pass" (the one guarding the live Codeforces API call in
trends_views.py) with a logged exception so we still fall back to stored data
but capture the failure; call the module/class logger (or logging.exception)
inside that except to record the error and context (e.g., "failed to fetch live
Codeforces trends"), then continue with the existing fallback behavior
unchanged.
In `@app/src/components/TrendAnalysis.jsx`:
- Around line 648-659: The loadLine useCallback (function name: loadLine)
captures the outer state variable range via its default parameter r = range but
range is not listed in the dependency array, causing a stale closure; update the
callback so its dependencies include range (or remove the default param and read
range directly inside the function) and add range to the dependency array along
with token so setLineLoading/setLineError/setLineData behave with the current
range value.
In `@app/src/components/UnifiedLeaderboard.jsx`:
- Around line 225-295: The tooltip texts in UnifiedLeaderboard.jsx for the
columns with accessorKey "github_score", "cf_score", "cc_score", and "lt_score"
are inconsistent with the backend in api/leaderboard/analytics.py; update the
header TooltipContent for the corresponding column definitions (the blocks that
render the header for "github_score", "cf_score", "cc_score", "lt_score" which
currently render ScoreBar cells) so the messages match the actual formulas: for
"github_score" say "stars · followers" (or similar short text indicating stars
and followers only), for "cf_score" and "cc_score" say "rating only", and for
"lt_score" say "weighted solved counts only"; keep the existing
Tooltip/TooltipTrigger structure and the ScoreBar cell rendering unchanged.
- Around line 309-316: The root div in UnifiedLeaderboard is forcing viewport
sizing and causing overflow when rendered inside TabsContent; remove the inline
style that sets width to "100vw" / "calc(100vw - var(--sidebar-width))" and
instead let the container control width by using a full-width layout (e.g., set
the root wrapper to w-full or style width: "100%"). Update the div in the
UnifiedLeaderboard component (the element with className "h-full px-1.5 py-1")
to stop applying the open && !isMobile width logic and use a neutral full-width
approach so the parent TabsContent can manage viewport sizing.
---
Nitpick comments:
In `@api/leaderboard/analytics.py`:
- Around line 187-192: The ranking currently uses
df["total_score"].rank(ascending=False, method="min") which leaves gaps for tied
scores; change it to use method="dense" instead (i.e., df["rank"] =
df["total_score"].rank(ascending=False, method="dense")) so ranks are
consecutive for ties; update the block that computes df["rank"] and keep the
subsequent return df.sort_values("rank") unchanged.
- Around line 162-183: The unified ranking is missing AtCoder: update
build_unified_ranking to use mapping.ac_uname in the username fallback (add
mapping.ac_uname to the or-chain), add an "ac_uname": mapping.ac_uname entry,
compute "ac_score" (e.g., float(a.loc[mapping.ac_uname, "ac_score"]) if
mapping.ac_uname in a.index else 0.0 using the AtCoder DataFrame variable used
elsewhere), and include "ac_score" in the total_score aggregation alongside
github/cf/cc/lt scores; reference mapping.ac_uname and the build_unified_ranking
function to locate the change.
In `@api/leaderboard/models.py`:
- Around line 269-272: Add a single blank line between the end of the __str__
method and the class declaration for UnifiedScoreHistory so there are two blank
lines separating top-level classes per PEP 8; locate the __str__ method that
returns f"{self.user.username} in {self.organization.name}" and insert one empty
line before the line starting with "class UnifiedScoreHistory(models.Model):".
- Line 284: Replace the en-dash in the comment inside leaderboard.models that
reads "per-platform normalized scores (0–1 each, same as analytics.py)" with a
standard hyphen-minus so it becomes "(0-1 each, ...)" to ensure consistent ASCII
hyphens for searching/grepping; locate the comment in the module (the line
containing "per-platform normalized scores") and update the punctuation only.
In `@api/leaderboard/trends_views.py`:
- Line 79: Replace deprecated naive calls to datetime.utcfromtimestamp(...) with
timezone-aware datetime.fromtimestamp(..., tz=timezone.utc) and then call
.date(); update the import in trends_views.py to include timezone from datetime
if not present. Specifically change occurrences like d =
datetime.utcfromtimestamp(int(ts_str)).date() (and the similar occurrences at
the other spots) to use datetime.fromtimestamp(int(ts_str),
tz=timezone.utc).date() so all timestamps are timezone-aware; ensure any tests
or code that rely on naive datetimes are adjusted accordingly.
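The `method="dense"` suggestion above is easy to see on a small series (pandas assumed available, as in analytics.py):

```python
import pandas as pd

scores = pd.Series([90.0, 90.0, 75.0, 60.0])

# method="min" leaves a gap after the tie: 1, 1, 3, 4
min_ranks = scores.rank(ascending=False, method="min")

# method="dense" keeps ranks consecutive: 1, 1, 2, 3
dense_ranks = scores.rank(ascending=False, method="dense")

print(min_ranks.tolist())    # → [1.0, 1.0, 3.0, 4.0]
print(dense_ranks.tolist())  # → [1.0, 1.0, 2.0, 3.0]
```

Which behavior is "correct" is a product choice: competition-style standings usually use `min` (a two-way tie for first means the next rank is 3rd), while `dense` avoids gaps on the displayed leaderboard.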
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9c2e8387-d382-48b6-bab1-f83b06895ebb
📒 Files selected for processing (21)
api/leaderboard/analytics.pyapi/leaderboard/analytics_views.pyapi/leaderboard/celery.pyapi/leaderboard/migrations/0001_initial.pyapi/leaderboard/migrations/0002_alter_codeforcesuser_last_activity_and_more.pyapi/leaderboard/migrations/0003_alter_codeforcesuser_last_activity.pyapi/leaderboard/migrations/0004_alter_codeforcesuser_last_activity_alter_user_uid.pyapi/leaderboard/migrations/0005_alter_codeforcesuser_last_activity.pyapi/leaderboard/migrations/0006_atcoderuser_alter_codeforcesuser_last_activity.pyapi/leaderboard/migrations/0019_codechefuser_calendar_data.pyapi/leaderboard/migrations/0023_merge_20260329_1109.pyapi/leaderboard/migrations/0024_alter_codeforcesuser_last_activity.pyapi/leaderboard/models.pyapi/leaderboard/settings.pyapi/leaderboard/trends_views.pyapi/leaderboard/urls.pyapp/src/App.jsxapp/src/components/HomePage.jsxapp/src/components/Navbar.jsxapp/src/components/TrendAnalysis.jsxapp/src/components/UnifiedLeaderboard.jsx
💤 Files with no reviewable changes (4)
- api/leaderboard/migrations/0005_alter_codeforcesuser_last_activity.py
- api/leaderboard/migrations/0003_alter_codeforcesuser_last_activity.py
- api/leaderboard/migrations/0004_alter_codeforcesuser_last_activity_alter_user_uid.py
- api/leaderboard/migrations/0006_atcoderuser_alter_codeforcesuser_last_activity.py
🚧 Files skipped from review as they are similar to previous changes (3)
- app/src/components/Navbar.jsx
- app/src/App.jsx
- api/leaderboard/urls.py
```python
try:
    df = build_unified_ranking()
    if df.empty:
        return
    for _, row in df.iterrows():
        username = row.get("username")
        if not username:
            continue
        UnifiedScoreHistory.objects.update_or_create(
            username=username,
            date=today,
            defaults={
                "total_score": float(row.get("total_score", 0)),
                "github_score": float(row.get("github_score", 0)),
                "cf_score": float(row.get("cf_score", 0)),
                "cc_score": float(row.get("cc_score", 0)),
                "lt_score": float(row.get("lt_score", 0)),
            },
        )
except Exception as e:
    print(f"unified_score_snapshot error: {e}")
```
Let Celery record snapshot failures instead of printing and returning success.
This except swallows every write failure, so beat will mark the run successful even when no history row was saved. That makes missed daily snapshots silent and non-retryable.
🛠️ Suggested fix
```diff
-        except Exception as e:
-            print(f"unified_score_snapshot error: {e}")
+        except Exception:
+            logger.exception("unified_score_snapshot failed")
+            raise
```

🧰 Tools
+ raise🧰 Tools
🪛 Ruff (0.15.7)
[warning] 451-451: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/leaderboard/celery.py` around lines 432 - 452, The try/except around
build_unified_ranking() and the loop catches all exceptions and only prints,
causing Celery to treat the task as successful even when UnifiedScoreHistory
writes fail; remove the broad exception swallowing and let errors propagate (or
log with logger.exception and then re-raise) so Celery will record a failure and
trigger retries; specifically update the block around build_unified_ranking(),
the DataFrame iteration, and UnifiedScoreHistory.objects.update_or_create to
either not catch Exception or to re-raise after logging, ensuring task failures
are visible to Celery.
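The log-and-re-raise pattern recommended above can be sketched without Celery; `build_ranking` and `save_row` stand in for `build_unified_ranking()` and the `UnifiedScoreHistory` write, so the names are illustrative only:

```python
import logging

logger = logging.getLogger(__name__)

def snapshot_task(build_ranking, save_row):
    """Log the traceback, then re-raise so the task runner sees the failure."""
    try:
        for row in build_ranking():
            save_row(row)
    except Exception:
        # logger.exception records the stack trace; re-raising lets Celery
        # mark the run as failed (and retry, if the task is configured to).
        logger.exception("unified_score_snapshot failed")
        raise
```

With the old print-and-swallow version, the failure path returns normally and beat records a successful run; here the exception still reaches the caller after being logged.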
```python
        except Exception:
            pass
```
Log the exception before falling back to stored data.
The silent pass makes debugging difficult when the live Codeforces API fails. Adding a log statement preserves the fallback behavior while improving observability.
🔧 Proposed fix
```diff
         except Exception:
-            pass
+            logger.warning("Live Codeforces API failed, falling back to stored data", exc_info=True)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
         except Exception:
-            pass
+            logger.warning("Live Codeforces API failed, falling back to stored data", exc_info=True)
```
🧰 Tools
🪛 Ruff (0.15.7)
[error] 296-297: try-except-pass detected, consider logging the exception
(S110)
[warning] 296-296: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/leaderboard/trends_views.py` around lines 296 - 297, Replace the silent
except block that currently reads "except Exception: pass" (the one guarding the
live Codeforces API call in trends_views.py) with a logged exception so we still
fall back to stored data but capture the failure; call the module/class logger
(or logging.exception) inside that except to record the error and context (e.g.,
"failed to fetch live Codeforces trends"), then continue with the existing
fallback behavior unchanged.
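The fallback behavior the prompt preserves looks like this in miniature; `fetch_live` and `load_stored` are placeholders for the live Codeforces call and the DB-backed fallback, not the project's actual API:

```python
import logging

logger = logging.getLogger(__name__)

def codeforces_trends(fetch_live, load_stored):
    """Try the live API first; on any failure, log it and use stored data."""
    try:
        return fetch_live()
    except Exception:
        # exc_info=True attaches the traceback, so the outage is visible
        # in logs while the caller still gets the stored fallback.
        logger.warning("Live Codeforces API failed, using stored data", exc_info=True)
    return load_stored()
```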
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/src/components/TrendAnalysis.jsx (1)
644-662: ⚠️ Potential issue | 🟠 Major

Range changes are being overwritten back to `all`.

`handleRangeChange(r)` loads the selected range, but `setRange(r)` also recreates `loadLine` because of `[range]`. That retriggers the effect on Line 662, which immediately calls `loadLine("all")` and refetches the heatmap, so the chart can drift from the active range button.

🛠️ Suggested fix
```diff
-  const loadLine = useCallback(async (r = range) => {
+  const loadLine = useCallback(async (r) => {
     setLineLoading(true);
     setLineError(null);
     try {
       const data = await fetchTrend(`/trends/codeforces/linechart/?range=${r}`);
       setLineData(data);
     } catch {
       setLineError("Failed to load Codeforces rating history.");
     } finally {
       setLineLoading(false);
     }
-  }, [range]);
+  }, []);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/TrendAnalysis.jsx` around lines 644 - 662, The effect currently always re-calls loadLine("all") because loadLine is recreated when setRange runs; change the useEffect so it respects the current selected range instead of forcing "all": call loadLine(range) (or loadLine() without a hardcoded "all") and update the dependency array to include range and loadHeatmap (remove loadLine from deps) so the effect uses the current range rather than a recreated function; reference loadLine, handleRangeChange and the useEffect that currently does loadHeatmap(); loadLine("all").
♻️ Duplicate comments (1)
app/src/components/Navbar.jsx (1)
126-132: ⚠️ Potential issue | 🟡 Minor

The `isOrg` index check targets "Friends" (index 4), not "Organizations" (index 5).

The variable name `isOrg` and the comment "Add distance before Blogs" are inconsistent with the actual behavior. After reordering, index 4 is now "Friends". Use a semantic check instead of a fragile index.

💡 Proposed fix

```diff
-  const isOrg = index === 4;
+  const isOrg = item.title === "Organizations";
   return (
     <SidebarMenuItem
       key={item.title}
       className={cn(
         "relative",
-        isOrg && "mt-8" // Add distance before Blogs
+        isOrg && "mt-8" // Add distance before Organizations
       )}
     >
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/Navbar.jsx` around lines 126 - 132, The current check uses a fragile index (isOrg = index === 4) which now targets "Friends" instead of "Organizations"; update the Navbar rendering logic to use a semantic check against the menu item (e.g., item.title === "Organizations" or a stable item.id/key) instead of index, and adjust the conditional/className usage in the SidebarMenuItem rendering (where isOrg is used) so spacing applies to the actual Organizations item; also fix the inline comment to reflect the correct intent and remove the misleading "Add distance before Blogs" text.
🧹 Nitpick comments (1)
api/leaderboard/migrations/0022_alter_organization_join_code.py (1)
12-18: Redundant migration, but harmless.

This `AlterField` operation specifies the same field definition (`CharField(blank=True, max_length=10, null=True, unique=True)`) that was already created in migration `0020`. The migration is a no-op at the database level but won't cause issues.

🤖 Prompt for AI Agents
AlterFieldoperation specifies the same field definition (CharField(blank=True, max_length=10, null=True, unique=True)) that was already created in migration0020. The migration is a no-op at the database level but won't cause issues.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/leaderboard/migrations/0022_alter_organization_join_code.py` around lines 12 - 18, The migration file contains a redundant migrations.AlterField for model_name "organization" and name "join_code" that duplicates the existing definition from migration 0020; remove this no-op by deleting this migration file (0022_alter_organization_join_code.py) or replace its operations list with an empty list so Django has no-op migration to skip, ensuring migrations.Remove the redundant AlterField operation referencing the "join_code" field to avoid cluttering migration history.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/components/TrendAnalysis.jsx`:
- Around line 557-581: LeetCodeTab is calling setters and rendering state that
aren't declared; add the missing useState hooks near the top of the LeetCodeTab
component: declare heatmapData, setHeatmapData, heatmapLoading,
setHeatmapLoading, heatmapError, setHeatmapError, and lineData, setLineData,
lineLoading, setLineLoading, lineError, setLineError using React's useState so
loadHeatmap, loadLine and the render logic can read/update these values (ensure
initial values match expected types: null/[] for data and false/null for
loading/error).
---
Outside diff comments:
In `@app/src/components/TrendAnalysis.jsx`:
- Around line 644-662: The effect currently always re-calls loadLine("all")
because loadLine is recreated when setRange runs; change the useEffect so it
respects the current selected range instead of forcing "all": call
loadLine(range) (or loadLine() without a hardcoded "all") and update the
dependency array to include range and loadHeatmap (remove loadLine from deps) so
the effect uses the current range rather than a recreated function; reference
loadLine, handleRangeChange and the useEffect that currently does loadHeatmap();
loadLine("all").
---
Duplicate comments:
In `@app/src/components/Navbar.jsx`:
- Around line 126-132: The current check uses a fragile index (isOrg = index ===
4) which now targets "Friends" instead of "Organizations"; update the Navbar
rendering logic to use a semantic check against the menu item (e.g., item.title
=== "Organizations" or a stable item.id/key) instead of index, and adjust the
conditional/className usage in the SidebarMenuItem rendering (where isOrg is
used) so spacing applies to the actual Organizations item; also fix the inline
comment to reflect the correct intent and remove the misleading "Add distance
before Blogs" text.
---
Nitpick comments:
In `@api/leaderboard/migrations/0022_alter_organization_join_code.py`:
- Around line 12-18: The migration file contains a redundant
migrations.AlterField for model_name "organization" and name "join_code" that
duplicates the existing definition from migration 0020; remove this no-op by
deleting this migration file (0022_alter_organization_join_code.py) or replace
its operations list with an empty list so Django has no-op migration to skip,
ensuring migrations.Remove the redundant AlterField operation referencing the
"join_code" field to avoid cluttering migration history.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d0f22796-6588-4ffd-8fbe-68db89f7229c
⛔ Files ignored due to path filters (4)
app/src/icons/atcoder1.svgis excluded by!**/*.svgapp/src/icons/atcoderDarkTheme.svgis excluded by!**/*.svgapp/src/icons/atcoderLight.svgis excluded by!**/*.svgapp/src/icons/atcoderLightTheme.svgis excluded by!**/*.svg
📒 Files selected for processing (15)
api/leaderboard/migrations/0019_codechefuser_calendar_data.pyapi/leaderboard/migrations/0020_organization_organizationmember.pyapi/leaderboard/migrations/0021_leetcodeuser_calendar_data_usernames_ac_uname_and_more.pyapi/leaderboard/migrations/0022_alter_organization_join_code.pyapi/leaderboard/migrations/0023_merge_20260329_1109.pyapi/leaderboard/migrations/0024_alter_codeforcesuser_last_activity.pyapp/src/App.cssapp/src/App.jsxapp/src/components/Footer.jsxapp/src/components/HomePage.jsxapp/src/components/NavMenu.jsxapp/src/components/Navbar.jsxapp/src/components/TrendAnalysis.jsxapp/src/components/UnifiedLeaderboard.jsxapp/src/index.css
💤 Files with no reviewable changes (1)
- app/src/components/UnifiedLeaderboard.jsx
✅ Files skipped from review due to trivial changes (2)
- app/src/components/Footer.jsx
- app/src/components/NavMenu.jsx
🚧 Files skipped from review as they are similar to previous changes (3)
- app/src/index.css
- app/src/App.css
- app/src/App.jsx
```diff
   const loadHeatmap = useCallback(async () => {
     setHeatmapLoading(true);
     setHeatmapError(null);
     try {
-      const data = await fetchTrend("/trends/leetcode/heatmap/", token);
+      const data = await fetchTrend("/trends/leetcode/heatmap/");
       setHeatmapData(data);
     } catch {
       setHeatmapError("Failed to load LeetCode heatmap.");
     } finally {
       setHeatmapLoading(false);
     }
-  }, [token]);
+  }, []);

   const loadLine = useCallback(async () => {
     setLineLoading(true);
     setLineError(null);
     try {
-      const data = await fetchTrend("/trends/leetcode/linechart/", token);
+      const data = await fetchTrend("/trends/leetcode/linechart/");
       setLineData(data);
     } catch {
       setLineError("Failed to load LeetCode chart.");
     } finally {
       setLineLoading(false);
     }
-  }, [token]);
+  }, []);
```
Restore the missing useState hooks in LeetCodeTab.
Line 557 starts calling setHeatmapLoading, and Lines 593-614 render heatmapData, lineData, and the related loading/error state, but none of those bindings are declared in this component. The tab will throw as soon as it renders.
🛠️ Suggested fix
```diff
 function LeetCodeTab({ darkmode }) {
+  const [heatmapData, setHeatmapData] = useState(null);
+  const [lineData, setLineData] = useState(null);
+  const [heatmapLoading, setHeatmapLoading] = useState(true);
+  const [lineLoading, setLineLoading] = useState(true);
+  const [heatmapError, setHeatmapError] = useState(null);
+  const [lineError, setLineError] = useState(null);
+
   const loadHeatmap = useCallback(async () => {
```

Also applies to: 593-614
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/components/TrendAnalysis.jsx` around lines 557 - 581, LeetCodeTab is
calling setters and rendering state that aren't declared; add the missing
useState hooks near the top of the LeetCodeTab component: declare heatmapData,
setHeatmapData, heatmapLoading, setHeatmapLoading, heatmapError,
setHeatmapError, and lineData, setLineData, lineLoading, setLineLoading,
lineError, setLineError using React's useState so loadHeatmap, loadLine and the
render logic can read/update these values (ensure initial values match expected
types: null/[] for data and false/null for loading/error).
@check-spelling-bot Report

🔴 Please review

See the 📂 files view, the 📜 action log, or 📝 job summary for details.

Unrecognized words (55)

These words are not needed and should be removed: CRA, elems, leetcoderankingccps, scrollbars, signup, sqlite, UUID

Some files were automatically ignored 🙈. These sample patterns would exclude them: You should consider adding them to: File matching is via Perl regular expressions. To check these files, more of their words need to be in the dictionary than not. You can use

To accept these unrecognized words as correct, update file exclusions, and remove the previously acknowledged and now absent words, you could run the following commands... in a clone of the git@github.com:Arpit-Mahajan09/Leaderboard-Pro.git repository:

```
curl -s -S -L 'https://raw.githubusercontent.com/check-spelling/check-spelling/main/apply.pl' |
perl - 'https://github.com/OpenLake/Leaderboard-Pro/actions/runs/23709389687/attempts/1' &&
git commit -m 'Update check-spelling metadata'
```

Available 📚 dictionaries could cover words (expected and unrecognized) not in the 📘 dictionary

This includes both expected items (592) from .github/actions/spelling/expect.txt and unrecognized words (55).

Consider adding them (in with:):

```
extra_dictionaries: |
  cspell:django/dict/django.txt
  cspell:software-terms/dict/softwareTerms.txt
  cspell:python/src/common/extra.txt
  cspell:npm/dict/npm.txt
  cspell:html/dict/html.txt
```

To stop checking additional dictionaries, add (in with:):

```
check_extra_dictionaries: ""
```

Warnings
| | Count |
|---|---|
| | 3 |
| | 1 |
See
If you see a bunch of garbage
If it relates to a ...
well-formed pattern
See if there's a pattern that would match it.
If not, try writing one and adding it to the patterns.txt file.
Patterns are Perl 5 Regular Expressions - you can test yours before committing to verify it will match your lines.
Note that patterns can't match multiline strings.
binary-ish string
Please add a file path to the excludes.txt file instead of just accepting the garbage.
File paths are Perl 5 Regular Expressions - you can test yours before committing to verify it will match your files.
^ refers to the file's path from the root of the repository, so ^README\.md$ would exclude README.md (on whichever branch you're using).
Actionable comments posted: 3
🧹 Nitpick comments (1)
api/leaderboard/models.py (1)
278-278: Consider using `ForeignKey` instead of `IntegerField` for referential integrity.

Using `models.IntegerField` for `user_id` bypasses database-level foreign key constraints, allowing orphaned records if users are deleted. If historical data should be preserved even after user deletion, this is acceptable; otherwise, consider: `user = models.ForeignKey(User, on_delete=models.CASCADE, db_index=True)`

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/leaderboard/models.py` at line 278, The user_id field is defined as an IntegerField (user_id = models.IntegerField(...)) which lacks DB-level referential integrity; replace it with a ForeignKey to your User model (e.g., user = models.ForeignKey(User, on_delete=models.CASCADE, db_index=True) or another on_delete policy you prefer) so the ORM and DB enforce relations and prevent orphaned leaderboard records — update the field name, imports, and any code that references user_id accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@api/leaderboard/migrations/0025_alter_unifiedscorehistory_unique_together_and_more.py`:
- Around line 12-26: Migration will fail because existing UnifiedScoreHistory
rows get default user_id=0 and the new unique_together (user_id, date) will
conflict; add a RunPython data migration between the AddField for user_id and
the final AlterUniqueTogether to backfill user_id from usernames. Implement a
backfill function (e.g., backfill_user_id) that uses
apps.get_model('leaderboard', 'UnifiedScoreHistory') and
apps.get_model('leaderboard', 'UserNames'), build a mapping from username to
user_id using the same logic as build_unified_ranking (username = gh_uname or
cf_uname or cc_uname or lt_uname) and then iterate UnifiedScoreHistory objects
setting record.user_id from that map and saving with update_fields=['user_id'];
register it as migrations.RunPython(backfill_user_id, migrations.RunPython.noop)
placed after AddField and before the AlterUniqueTogether that enforces
{('user_id','date')}.
In `@api/leaderboard/models.py`:
- Around line 293-296: UnifiedScoreHistory records can collide when different
users share the same platform handle because unified_heatmap() and
unified_linechart() build score_map = {r.date.isoformat(): r.total_score} while
querying by username, allowing multiple records with the same date to overwrite
each other; fix by changing the views unified_heatmap() and unified_linechart()
to query/filter by user_id (e.g.,
UnifiedScoreHistory.objects.filter(user_id=..., date__gte=cutoff)) instead of
username so each user’s records are isolated, and in the model add db_index=True
to the username field and either add unique_together = ("username", "date") in
UnifiedScoreHistory.Meta or keep just the user_id-based queries (recommended) to
prevent future collisions.
- Line 278: UnifiedScoreHistory migration 0025 adds
user_id=IntegerField(default=0) and changes unique_together to
("user_id","date") but doesn't backfill existing rows, which will cause
integrity errors; modify migration 0025 to include a data migration that looks
up the correct user.pk for each existing UnifiedScoreHistory.username (using the
User model or your canonical user lookup) and sets user_id accordingly before
altering unique_together, then remove the default=0 or set null=False only after
backfill; also update trends_views.py queries that currently filter by username
(the filters around the username usage) to filter by user_id (or to join/match
both username and user_id) so query logic aligns with the schema change and
avoids returning incorrect results.
---
Nitpick comments:
In `@api/leaderboard/models.py`:
- Line 278: The user_id field is defined as an IntegerField (user_id =
models.IntegerField(...)) which lacks DB-level referential integrity; replace it
with a ForeignKey to your User model (e.g., user = models.ForeignKey(User,
on_delete=models.CASCADE, db_index=True) or another on_delete policy you prefer)
so the ORM and DB enforce relations and prevent orphaned leaderboard records —
update the field name, imports, and any code that references user_id
accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f63aad1a-0fb9-480e-9cfc-fbabff159c1c
📒 Files selected for processing (7)
api/leaderboard/celery.pyapi/leaderboard/migrations/0023_merge_20260329_1109.pyapi/leaderboard/migrations/0025_alter_unifiedscorehistory_unique_together_and_more.pyapi/leaderboard/models.pyapi/leaderboard/views.pyapp/src/App.jsxapp/src/components/UnifiedLeaderboard.jsx
🚧 Files skipped from review as they are similar to previous changes (2)
- app/src/components/UnifiedLeaderboard.jsx
- app/src/App.jsx
```python
operations = [
    migrations.AlterUniqueTogether(
        name="unifiedscorehistory",
        unique_together=set(),
    ),
    migrations.AddField(
        model_name="unifiedscorehistory",
        name="user_id",
        field=models.IntegerField(db_index=True, default=0),
    ),
    migrations.AlterUniqueTogether(
        name="unifiedscorehistory",
        unique_together={("user_id", "date")},
    ),
]
```
Migration will fail on existing data: missing data migration step.

The migration sequence:

1. Removes the old `(username, date)` constraint
2. Adds the `user_id` field with `default=0`
3. Applies the new `(user_id, date)` constraint

If any existing UnifiedScoreHistory rows share the same date, step 3 will fail with a unique constraint violation because they all have `user_id=0`.

Insert a RunPython data migration between steps 2 and 3 to backfill `user_id` from the `username` field by looking up the corresponding `UserNames.user_id`.
🛠️ Suggested migration structure
```python
from django.db import migrations, models


def backfill_user_id(apps, schema_editor):
    UnifiedScoreHistory = apps.get_model('leaderboard', 'UnifiedScoreHistory')
    UserNames = apps.get_model('leaderboard', 'UserNames')

    # Build username->user_id mapping
    user_map = {}
    for un in UserNames.objects.select_related('user').all():
        # Use same logic as build_unified_ranking to find username
        username = un.gh_uname or un.cf_uname or un.cc_uname or un.lt_uname
        if username and un.user_id:
            user_map[username] = un.user_id

    for record in UnifiedScoreHistory.objects.all():
        if record.username in user_map:
            record.user_id = user_map[record.username]
            record.save(update_fields=['user_id'])


class Migration(migrations.Migration):

    dependencies = [
        ("leaderboard", "0024_alter_codeforcesuser_last_activity"),
    ]

    operations = [
        migrations.AlterUniqueTogether(
            name="unifiedscorehistory",
            unique_together=set(),
        ),
        migrations.AddField(
            model_name="unifiedscorehistory",
            name="user_id",
            field=models.IntegerField(db_index=True, default=0),
        ),
        migrations.RunPython(backfill_user_id, migrations.RunPython.noop),
        migrations.AlterUniqueTogether(
            name="unifiedscorehistory",
            unique_together={("user_id", "date")},
        ),
    ]
```
]🧰 Tools
🪛 Ruff (0.15.7)
[warning] 12-26: Mutable default value for class attribute
(RUF012)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@api/leaderboard/migrations/0025_alter_unifiedscorehistory_unique_together_and_more.py`
around lines 12 - 26, Migration will fail because existing UnifiedScoreHistory
rows get default user_id=0 and the new unique_together (user_id, date) will
conflict; add a RunPython data migration between the AddField for user_id and
the final AlterUniqueTogether to backfill user_id from usernames. Implement a
backfill function (e.g., backfill_user_id) that uses
apps.get_model('leaderboard', 'UnifiedScoreHistory') and
apps.get_model('leaderboard', 'UserNames'), build a mapping from username to
user_id using the same logic as build_unified_ranking (username = gh_uname or
cf_uname or cc_uname or lt_uname) and then iterate UnifiedScoreHistory objects
setting record.user_id from that map and saving with update_fields=['user_id'];
register it as migrations.RunPython(backfill_user_id, migrations.RunPython.noop)
placed after AddField and before the AlterUniqueTogether that enforces
{('user_id','date')}.
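The effect of keying history rows by a stable id rather than the mutable platform handle can be shown with a plain dict standing in for `update_or_create` — an illustration only, not the ORM call:

```python
def record_snapshot(store, user_id, username, date, total_score):
    # Upsert keyed by (user_id, date); the handle is just payload, so a
    # rename between runs updates the same row instead of creating a new one.
    store[(user_id, date)] = {"username": username, "total_score": total_score}

store = {}
record_snapshot(store, 7, "old_handle", "2026-03-01", 41.5)
record_snapshot(store, 7, "new_handle", "2026-03-01", 43.0)  # same user, renamed

print(len(store))                            # → 1
print(store[(7, "2026-03-01")]["username"])  # → new_handle
```

Keyed by username instead, the rename above would have produced two rows for one user-day, which is exactly the collision the backfill-plus-unique-constraint change is meant to prevent.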
```python
    per-platform breakdown. Written by a Celery beat task every day.
    Used by the Unified trend analysis heatmap and line chart.
    """
    user_id = models.IntegerField(db_index=True, default=0)
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check for existing UnifiedScoreHistory rows and potential conflicts
ast-grep --pattern 'class UnifiedScoreHistory($$$):
  $$$
'
# Look for any existing data migration that backfills user_id
rg -n 'UnifiedScoreHistory' --glob '**/migrations/*.py' -A 5
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 2658

🏁 Script executed:

```shell
# Get the full migration file to see if UnifiedScoreHistory is being created or altered
cat -n api/leaderboard/migrations/0002_alter_codeforcesuser_last_activity_and_more.py | head -100
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 1963

🏁 Script executed:

```shell
# Check if there's a data migration that backfills user_id
rg -l 'RunPython|RunSQL' api/leaderboard/migrations/ | xargs -I {} grep -l 'user_id\|UnifiedScoreHistory' {} 2>/dev/null
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 50

🏁 Script executed:

```shell
# Check trends_views.py for how it queries UnifiedScoreHistory
rg -n 'UnifiedScoreHistory' api/ -A 3 -B 1 | grep -E '(trends_views|\.filter|\.objects)'
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 1872

🏁 Script executed:

```shell
# List all migrations in order
ls -la api/leaderboard/migrations/*.py | head -20
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 1222

🏁 Script executed:

```shell
# Find migrations that modify UnifiedScoreHistory after 0002
rg -l 'UnifiedScoreHistory' api/leaderboard/migrations/*.py | sort
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 146

🏁 Script executed:

```shell
# Check the celery task that updates UnifiedScoreHistory
rg -n 'update_or_create' api/leaderboard/celery.py -B 5 -A 10 | grep -A 10 'UnifiedScoreHistory'
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 658

🏁 Script executed:

```shell
# Read migration 0025 that alters UnifiedScoreHistory
cat -n api/leaderboard/migrations/0025_alter_unifiedscorehistory_unique_together_and_more.py
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 949

🏁 Script executed:

```shell
# Check if there are any migrations between 0002 and 0025 that might touch UnifiedScoreHistory
rg -n 'UnifiedScoreHistory' api/leaderboard/migrations/000*.py
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 189

🏁 Script executed:

```shell
# Search for any data migrations (numbered migrations with RunPython) that might backfill user_id
find api/leaderboard/migrations -name "*.py" -type f -exec grep -l "RunPython\|raw_sql" {} \; | xargs grep -l "user_id\|UnifiedScoreHistory" 2>/dev/null
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 52

🏁 Script executed:

```shell
# Check if there are any migrations after 0025 that might fix the user_id values
ls api/leaderboard/migrations/002*.py
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 499

🏁 Script executed:

```shell
# Look at the celery task more broadly to understand the context
rg -n 'def.*unified' api/leaderboard/celery.py -i -A 20 | head -50
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 912
Migration 0025 adds a user_id field with default=0 but provides no data migration to backfill values from existing records.
Migration 0002 created UnifiedScoreHistory with unique_together=("username", "date"). Migration 0025 changes this to unique_together=("user_id", "date") and adds user_id=IntegerField(default=0).
Any existing records in the table will have user_id=0 after this migration. If multiple records share the same date (which is possible under the old unique constraint), the new constraint will fail because they'll all have (user_id=0, date=<same>).
A data migration must backfill user_id before the new constraint is applied, or the migration will cause integrity violations.
Additionally, trends_views.py filters by username only (lines 348, 396), but the new constraint is on user_id. Ensure the query logic aligns with this schema change to avoid returning incorrect results if data consistency issues exist.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/leaderboard/models.py` at line 278, UnifiedScoreHistory migration 0025
adds user_id=IntegerField(default=0) and changes unique_together to
("user_id","date") but doesn't backfill existing rows, which will cause
integrity errors; modify migration 0025 to include a data migration that looks
up the correct user.pk for each existing UnifiedScoreHistory.username (using the
User model or your canonical user lookup) and sets user_id accordingly before
altering unique_together, then remove the default=0 or set null=False only after
backfill; also update trends_views.py queries that currently filter by username
(the filters around the username usage) to filter by user_id (or to join/match
both username and user_id) so query logic aligns with the schema change and
avoids returning incorrect results.
```diff
     class Meta:
         # one record per user per day
-        unique_together = ("username", "date")
+        unique_together = ("user_id", "date")
         ordering = ["date"]
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check how trends_views.py queries UnifiedScoreHistory
rg -n 'UnifiedScoreHistory' --type py -B 2 -A 10 | head -100
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 7785

🏁 Script executed:

```shell
# See the complete query and score_map logic in trends_views.py around the UnifiedScoreHistory filter
sed -n '330,420p' api/leaderboard/trends_views.py
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 3617

🏁 Script executed:

```shell
# Check if username can be duplicate for different user_ids in the unified ranking
grep -n 'username' api/leaderboard/analytics.py | head -30
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 2036

🏁 Script executed:

```shell
# Search for score_map usage in trends_views.py
rg -n 'score_map' api/leaderboard/trends_views.py -B 2 -A 2
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 514

🏁 Script executed:

```shell
# Check the exact iteration logic in build_unified_ranking to confirm one row per User
sed -n '160,195p' api/leaderboard/analytics.py
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 1712

🏁 Script executed:

```shell
# Check if username field constraint exists or if it's truly only unique on (user_id, date)
grep -A 5 'unique_together' api/leaderboard/models.py | grep -A 5 'UnifiedScoreHistory'
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 210

🏁 Script executed:

```shell
# Verify the User model to understand if user.username is unique
rg -A 10 'class User' api/leaderboard/models.py
```

Repository: OpenLake/Leaderboard-Pro
Length of output: 1795
Data loss risk: score_map overwrites scores when multiple users share the same username.

In unified_heatmap() and unified_linechart() (trends_views.py lines 348–352, 396–408), queries filter UnifiedScoreHistory by username only:

```python
records = UnifiedScoreHistory.objects.filter(username=username, date__gte=cutoff)
score_map = {r.date.isoformat(): r.total_score for r in records}
```

The constraint unique_together = ("user_id", "date") allows multiple records with the same username on the same date (if two users have identical usernames). Because the dict is keyed by date alone, duplicate dates cause the last record to overwrite earlier ones, losing score data.

This occurs because username in UnifiedScoreHistory is derived as a platform handle (gh_uname, cf_uname, cc_uname, lt_uname), falling back to user.username. While user.username is globally unique, platform handles are not, enabling collisions across users.

Add db_index=True to the username field and either:

- add unique_together = ("username", "date") to prevent simultaneous records, or
- update the views to filter by user_id instead of username (recommended).
🧰 Tools
🪛 Ruff (0.15.7)
[warning] 296-296: Mutable default value for class attribute (RUF012)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/leaderboard/models.py` around lines 293 - 296, UnifiedScoreHistory
records can collide when different users share the same platform handle because
unified_heatmap() and unified_linechart() build score_map = {r.date.isoformat():
r.total_score} while querying by username, allowing multiple records with the
same date to overwrite each other; fix by changing the views unified_heatmap()
and unified_linechart() to query/filter by user_id (e.g.,
UnifiedScoreHistory.objects.filter(user_id=..., date__gte=cutoff)) instead of
username so each user’s records are isolated, and in the model add db_index=True
to the username field and either add unique_together = ("username", "date") in
UnifiedScoreHistory.Meta or keep just the user_id-based queries (recommended) to
prevent future collisions.
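The overwrite hazard flagged above can be demonstrated without Django at all. The sketch below uses plain tuples as hypothetical stand-ins for UnifiedScoreHistory rows (the names Record and score_map_for_user are illustrative, not part of the codebase): keying the map by date alone silently drops one user's score when two users share a platform handle, while filtering by user_id keeps each history isolated.

```python
from collections import namedtuple
from datetime import date

# Hypothetical in-memory stand-in for UnifiedScoreHistory rows; the field
# names mirror the model but this is not the real Django ORM.
Record = namedtuple("Record", ["user_id", "username", "date", "total_score"])

records = [
    Record(1, "tourist", date(2024, 1, 5), 120),
    Record(2, "tourist", date(2024, 1, 5), 300),  # different user, same handle
]

# Buggy pattern: filtering by username and keying by date alone means the
# second record with the same date overwrites the first.
score_map_by_username = {
    r.date.isoformat(): r.total_score
    for r in records
    if r.username == "tourist"
}

def score_map_for_user(rows, user_id):
    """Fixed pattern: isolate each user's records by user_id first."""
    return {r.date.isoformat(): r.total_score for r in rows if r.user_id == user_id}
```

Here score_map_by_username ends up as {"2024-01-05": 300}, losing user 1's score of 120, whereas score_map_for_user(records, 1) preserves it.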
Key Features

1. Organization/Campus Leaderboards

Backend Infrastructure:
- Implemented Organization and OrganizationMember models with automatic 10-character alphanumeric join-code generation for private groups.
- Added an OrganizationViewSet with actions for join, leave, discover, and member management.
- Implemented automatic admin transfer: if an admin leaves a group, leadership passes to the next oldest member.
- Added custom leaderboard filtering to show rankings only for members of a specific organization.

Frontend Dashboard:
- Developed a new Organizations Dashboard with discovery and management tabs.
- Implemented private and public groups: users can join public groups directly or enter a code for private ones.
- Created a dedicated Group Leaderboard view with support for all 6 platforms (Codeforces, LeetCode, etc.).

Member Management:
- Added a Members tab for group admins to manage their community and remove users.
- Added a "Leave Group" feature for all users.
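Two of the behaviors described above can be sketched framework-free: the 10-character alphanumeric join code and the admin-transfer rule. The function names generate_join_code and next_admin are illustrative assumptions, not the PR's actual implementation, and the exact alphabet used by the real code may differ.

```python
import secrets
import string
from datetime import datetime

# Assumed alphabet: uppercase letters plus digits ("alphanumeric").
ALPHABET = string.ascii_uppercase + string.digits

def generate_join_code(length: int = 10) -> str:
    """Generate a random alphanumeric join code using a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def next_admin(members):
    """Admin-transfer rule: when the admin leaves, promote the oldest
    remaining member by join date. `members` is a list of
    (username, joined_at) tuples standing in for OrganizationMember rows."""
    if not members:
        return None
    return min(members, key=lambda m: m[1])[0]
```

Using secrets rather than random avoids predictable codes for private groups; picking the member with the minimum joined_at implements "next oldest member".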
2. AtCoder Guest Access
- Public Visibility: Modified routing so unauthenticated users (guests) can view the AtCoder leaderboard.
- Hardened UI: Guests now see a friendly login/register prompt instead of the "Add User" input field, preventing unauthorized modification attempts.

3. State & Layout Integration
- Full compatibility with the Sidebar navigation and Guest Mode protections introduced in PR #196.
- Optimized API calls by disabling unnecessary pagination for small data sets such as organization lists and members.

Technical Changes
API: Added organizations/ endpoints and updated
Summary by CodeRabbit
New Features
Bug Fixes
Style