
🚀 Duet game result: move to enum #18

Open
asaf-kali wants to merge 1 commit into main from game-result-enum

Conversation


@asaf-kali asaf-kali commented Feb 16, 2025

Summary by CodeRabbit

  • Refactor

    • Standardized game outcome messages for more consistent feedback during gameplay, ensuring clear indications of events like win, mistake limit reached, and game quit.
  • Chores

    • Enhanced the release configuration by expanding the list of tags for minor version updates.
  • Tests

    • Updated validations to align with the new, unified approach to game result handling.


coderabbitai bot commented Feb 16, 2025

Walkthrough

This pull request refactors the handling of game results by converting the GameResult from a Pydantic-based model into an Enum with tuple values, representing the win status and reason. The changes update constant references across various modules—state and test files now reference the enum members instead of legacy constants. Additionally, the import statements have been streamlined, and a new minor tag ("🏞️") has been added to the pyproject.toml file.
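The refactor described above can be sketched as follows. This is a minimal illustration of the tuple-valued enum pattern, not the project's actual code: the member names `TARGET_REACHED` and `GAME_QUIT` appear in the review below, but the `reason` strings here are assumptions.

```python
from enum import Enum


class GameResult(Enum):
    """Each member's value is a (win, reason) tuple."""

    TARGET_REACHED = (True, "Target reached")
    GAME_QUIT = (False, "Game quit")

    def __init__(self, win: bool, reason: str):
        # Enum calls __init__ with the unpacked value tuple,
        # so `win` and `reason` become member attributes.
        self.win = win
        self.reason = reason
```

Because `value` remains the full tuple, members can still be looked up by value, e.g. `GameResult((True, "Target reached")) is GameResult.TARGET_REACHED`.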

Changes

| File(s) | Change Summary |
| --- | --- |
| codenames/duet/score.py | Converted GameResult from a BaseModel to an Enum; redefined win and reason as tuple elements; updated enum member values and adjusted related import statements (removed ConfigDict, added Enum). |
| codenames/duet/state.py, codenames/mini/state.py | Removed constant imports and updated all game result assignments to use the new GameResult enum members in both state classes. |
| tests/duet/test_game_runner.py, tests/duet/test_game_state.py, tests/duet/test_side_state.py, tests/mini/test_game_runner.py | Refactored import statements and assertions to use the GameResult enum instead of the legacy constants; adjusted a method parameter in one test (card_amount updated from 2 to 3). |
| pyproject.toml | Added a new minor tag "🏞️" in the [tool.semantic_release.commit_parser_options] section. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runner as GameRunner
    participant DS as DuetSideState / MiniGameState
    participant DG as DuetGameState
    Note over DS,DG: Game state processing with updated GameResult enum
    Runner->>DG: Trigger game event (e.g., clue, quit or error)
    DG->>DS: Delegate event handling (e.g., _quit, _update_score)
    DS->>DS: Set game_result = GameResult.[EVENT_TYPE] (e.g., GAME_QUIT, ASSASSIN_HIT)
    DS-->>DG: Return updated side state with enum result
    DG-->>Runner: Return complete game state with GameResult enum
```

Possibly related PRs

  • 🏖️ Abstraction fixes #17: Addresses similar updates by transitioning game result handling to use the GameResult enum, particularly affecting the DuetSideState class.

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b83dafb and de6f58e.

📒 Files selected for processing (8)
  • codenames/duet/score.py (2 hunks)
  • codenames/duet/state.py (5 hunks)
  • codenames/mini/state.py (2 hunks)
  • pyproject.toml (1 hunks)
  • tests/duet/test_game_runner.py (4 hunks)
  • tests/duet/test_game_state.py (3 hunks)
  • tests/duet/test_side_state.py (6 hunks)
  • tests/mini/test_game_runner.py (4 hunks)
🔇 Additional comments (18)
codenames/duet/score.py (4)

3-3: Good import of Enum from the standard library.

This import is necessary for the new GameResult enum approach. No concerns here.


5-5: Confirm continued need for BaseModel.

Score still inherits from BaseModel, so the import remains relevant. However, if future refactors remove the Pydantic dependency for Score, consider removing the import to keep dependencies minimal.


26-29: Praise for custom enum constructor.

Defining a custom __init__ in an enum to store extra attributes (win, reason) is a valid pattern. Everything looks correct and consistent.


31-35: Consider verifying comprehensive test coverage for each enum member.

Members like ASSASSIN_HIT and GAME_QUIT may need coverage in the test suite. Ensure that all scenarios referencing these members are tested to maintain code quality and correctness.

codenames/mini/state.py (2)

3-3: Appropriate import for the new Enum-based game results.

Switching to GameResult centralizes game outcome definitions and promotes consistency across modules.


38-45: Enum-based assignments confirm alignment with the new approach.

Reassigning self.game_result to GameResult.TIMER_TOKENS_DEPLETED and GameResult.MISTAKE_LIMIT_REACHED is consistent with the Enum usage, reducing the risk of mismatched constants.

tests/mini/test_game_runner.py (4)

2-2: Streamlined import of GameResult.

Removing the old constants in favor of the new enum clarifies the domain model for game outcomes.


24-24: Good assertion for TARGET_REACHED.

This verifies the correct result when the target gets fully revealed.


45-45: Accurate check for timer depletion result.

Ensuring the test references GameResult.TIMER_TOKENS_DEPLETED maintains consistency with the new enum pattern.


66-66: Correct usage of the new MISTAKE_LIMIT_REACHED member.

Test coverage for this result ensures consistent behavior under error-limiting conditions.

codenames/duet/state.py (2)

98-104: LGTM! Field validator ensures proper enum deserialization.

The validator correctly handles both None values and tuple-based enum initialization.
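A before-mode validator of the kind described here might look like the sketch below. This is an assumption-laden illustration, not the project's actual validator: the model name `SideState`, the field name, and the enum members are stand-ins, and the key idea is that JSON has no tuples, so a serialized `(win, reason)` pair comes back as a list and must be converted before enum lookup.

```python
from enum import Enum
from typing import Optional

from pydantic import BaseModel, field_validator


class GameResult(Enum):
    TARGET_REACHED = (True, "Target reached")
    GAME_QUIT = (False, "Game quit")

    def __init__(self, win: bool, reason: str):
        self.win = win
        self.reason = reason


class SideState(BaseModel):
    game_result: Optional[GameResult] = None

    @field_validator("game_result", mode="before")
    @classmethod
    def parse_game_result(cls, value):
        # None passes through; a serialized [win, reason] list
        # is converted back to a tuple for enum-by-value lookup.
        if value is None or isinstance(value, GameResult):
            return value
        return GameResult(tuple(value))


state = SideState(game_result=GameResult.GAME_QUIT)
restored = SideState.model_validate_json(state.model_dump_json())
```

With this in place, a JSON round-trip recovers the identical enum member (`restored.game_result is GameResult.GAME_QUIT`).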


199-199: LGTM! Consistent use of GameResult enum.

The refactoring consistently replaces all constant references with their corresponding enum values, improving type safety and maintainability.

Also applies to: 206-206, 210-210, 246-246, 248-248, 263-263

tests/duet/test_game_runner.py (1)

101-101: LGTM! Test assertions correctly updated to use GameResult enum.

The test cases have been properly updated to use the new enum values in assertions.

Also applies to: 120-120, 143-143

tests/duet/test_game_state.py (1)

114-114: LGTM! Test assertions correctly updated to use GameResult enum.

The test cases have been properly updated to use the new enum values in assertions.

Also applies to: 139-139

tests/duet/test_side_state.py (3)

16-16: Verify the change in card amount.

The card amount in the test has been changed from 2 to 3. Please confirm if this change is intentional and aligns with the test's objectives.


24-31: LGTM! Enhanced test coverage for JSON serialization.

The test has been expanded to verify JSON serialization after game quit, improving test coverage.


89-89: LGTM! Test assertions correctly updated to use GameResult enum.

The test cases have been properly updated to use the new enum values in assertions.

Also applies to: 132-132, 154-154, 167-167

pyproject.toml (1)

74-80: New Minor Tag Addition in Semantic Release Configuration

The addition of the "🏞️" tag to the minor_tags list is clear and well-integrated. This new tag helps expand the versioning triggers in line with the semantic release strategy and supports the broader refactoring changes (e.g., moving game result handling to an Enum). Ensure that any related documentation or tests referencing minor tags are updated accordingly if needed.
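A sketch of what the configuration change might look like. Only the "🏞️" tag is confirmed by this PR; the neighboring tags shown here are hypothetical placeholders.

```toml
[tool.semantic_release.commit_parser_options]
# Commits prefixed with any of these emoji trigger a minor version bump;
# "🏞️" is the tag added in this PR.
minor_tags = ["✨", "🚀", "🏞️"]
```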


codecov-commenter commented Feb 16, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 83.24%. Comparing base (b8e159f) to head (de6f58e).
⚠️ Report is 3 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff             @@
##             main      #18       +/-   ##
===========================================
- Coverage   93.42%   83.24%   -10.18%     
===========================================
  Files          70       70               
  Lines        3073     3086       +13     
===========================================
- Hits         2871     2569      -302     
- Misses        202      517      +315     


@asaf-kali force-pushed the main branch 2 times, most recently from b06a308 to 55b3c23 on April 12, 2026.