feat(project-manager): add project-manager command (#94) #104

Open

trixy-the-ai-bot wants to merge 11 commits into main from feat/project-manager-94

feat(project-manager): add project-manager command (#94)#104
trixy-the-ai-bot wants to merge 11 commits into
mainfrom
feat/project-manager-94

Conversation


@trixy-the-ai-bot trixy-the-ai-bot commented Apr 14, 2026

Summary

  • Adds the loony-dev project-manager command that acts as a higher-level orchestration layer on top of the existing worker pipeline
  • Uses two-phase prioritisation: Phase 1 heuristic scoring (milestone alignment, labels, age) narrows to a shortlist; Phase 2 Claude CLI AI ranking picks the winner
  • Ensures exactly N issues are always in progress by promoting candidates when slots are free
  • Auto-merges PRs once CI passes and a configurable merge delay elapses (default 24 h), with optional deployment verification
  • Circuit-breaker: checks main branch health before every action — pauses the pipeline on CI/deploy failures
  • Adds list_milestones(), list_issues_all(), find_pr_for_issue(), merge_pull_request(), get_deployment_run(), get_pr_merged_at(), and related caching helpers to loony_dev/github/repo.py
  • Adds loony_dev/prioritiser.py with Prioritiser class and full heuristic + AI ranking logic
  • Adds loony_dev/project_manager.py with ProjectManager polling loop
  • Extends loony_dev/cli.py with the project-manager sub-command and all configuration options
  • Fixes find_pr_for_issue to search --state all (not just open PRs) so merged PRs are found during deployment verification
  • Fixes two log-message references from self.github.repo to self.github.name
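
The two-phase flow above can be sketched in miniature. This is an illustrative outline only: the field names (milestone_due, labels, created_at) and the weights below are invented for the example, not the actual loony_dev/prioritiser.py implementation.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of Phase 1 heuristic scoring; field names and
# weights are invented here, not taken from loony_dev/prioritiser.py.
def heuristic_score(issue: dict, soon_days: int = 14) -> float:
    now = datetime.now(timezone.utc)
    score = 0.0
    due = issue.get("milestone_due")
    if due is not None and (due - now) <= timedelta(days=soon_days):
        score += 50.0                      # milestone due soon
    if "high-priority" in issue.get("labels", []):
        score += 20.0                      # explicit priority label
    age_days = (now - issue["created_at"]).days
    score += min(age_days, 30) * 0.5       # age bonus, capped at 30 days
    return score

def shortlist(issues: list[dict], size: int = 5) -> list[dict]:
    # Highest-scoring candidates first; Phase 2 (AI ranking) then
    # picks the winner from this shortlist.
    return sorted(issues, key=heuristic_score, reverse=True)[:size]
```

Phase 2 then hands this shortlist to the AI ranker, falling back to the Phase 1 winner if ranking fails.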

Closes #94

Test plan

  • loony-dev project-manager --help shows all options
  • python -c "from loony_dev.project_manager import ProjectManager; print('OK')" passes
  • python -c "from loony_dev.prioritiser import Prioritiser; print('OK')" passes
  • Run against a real repo with --n 1 --interval 30 --skip-merge and verify an issue is labelled ready-for-planning or ready-for-development
  • Verify pipeline pauses when a CI check is failing on the default branch

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • New Features

    • Added project-manager CLI command for automated GitHub repository project orchestration with configurable polling intervals and concurrency controls.
    • Introduced intelligent issue prioritization with optional AI-powered ranking and filtering based on dependencies and pipeline status.
    • Added automated pull request merging with configurable delays and deployment workflow verification.
    • Added GitHub Actions workflow tracking and deployment completion detection.
  • Bug Fixes

    • Fixed label extraction in pull request data processing.

Loony-dev and others added 3 commits April 12, 2026 22:58
Introduces a higher-level orchestration layer on top of the existing
worker pipeline.  While the `worker` command reactively handles issues
as they move through labelled states, the `project-manager` command
proactively selects, prioritises, and promotes issues through the
pipeline — ensuring that exactly N issues are always in progress.

New files:
- loony_dev/prioritiser.py — two-phase prioritisation: heuristic filter
  (Phase 1) narrows the backlog, then a Claude agent ranks the shortlist
  by strategic value (Phase 2) with fallback to Phase-1 top pick.
- loony_dev/project_manager.py — ProjectManager class: polling loop,
  pipeline-stage detection, main-branch health-check / circuit-breaker,
  configurable merge delay, deployment verification, and action
  deduplication across ticks.

Modified files:
- loony_dev/github.py — new methods: list_milestones(), list_issues_all()
  (with caching + invalidation), get_open_pr_issue_numbers(),
  get_branch_check_runs(), get_default_branch_sha(), get_deployment_run(),
  get_pr_merged_at(), merge_pull_request().
- loony_dev/cli.py — new `project-manager` sub-command wired to
  ProjectManager with full --n, --interval, --skip-planning, --skip-merge,
  --merge-delay, --deploy-workflow, --milestone-soon-days, --shortlist-size,
  --ai-model, --verbose, --log-file options.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Use `github.name` instead of `github.repo` in log messages inside
  ProjectManager (Repo exposes `.name`, not `.repo`)
- Search PRs with `--state all` in `find_pr_for_issue` so merged PRs
  are found during post-merge deployment verification

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

@TMaYaD TMaYaD left a comment


The github Repo class is turning into a god class again. Follow the existing patterns (active record) in the repo.

Comment thread loony_dev/github/repo.py Outdated

# --- Milestones ---

def list_milestones(self, ttl: float = 3600.0) -> dict[str, datetime | None]:

should be repo.milestones.
Caching should be handled through decorators.

Comment thread loony_dev/github/repo.py Outdated
logger.debug("list_milestones() fetched %d milestone(s)", len(result))
return result

def invalidate_milestones_cache(self) -> None:

is this the responsibility of repo?

Comment thread loony_dev/github/repo.py Outdated

# --- All-issues cache ---

def list_issues_all(self, ttl: float = 120.0) -> list[dict]:

repo.issues.open. Follow the active record pattern already established.

Comment thread loony_dev/github/repo.py Outdated

# --- Open PR → issue number mapping ---

def get_open_pr_issue_numbers(self) -> set[int]:

[pr.issues for pr in repo.pull_requests].flatten

Comment thread loony_dev/github/repo.py Outdated

# --- Branch health ---

def get_branch_check_runs(self, sha: str) -> list[dict]:

branch.check_runs

Comment thread loony_dev/github/repo.py Outdated
logger.debug("get_branch_check_runs(%r) returned %d run(s)", sha, len(runs))
return runs

def get_default_branch_sha(self) -> str:

repo.default_branch.sha

Comment thread loony_dev/github/repo.py Outdated

# --- Deployment workflow ---

def get_deployment_run(self, workflow: str, after_timestamp: datetime) -> bool:

workflow.runs.where(name="deployment", timestamp_is_gt=t)

Comment thread loony_dev/github/repo.py Outdated

# --- PR merging ---

def get_pr_merged_at(self, pr_number: int) -> datetime | None:

pr.merged_at

Comment thread loony_dev/github/repo.py Outdated
logger.warning("Failed to fetch merge state for PR #%d", pr_number)
return None

def merge_pull_request(self, pr_number: int, merge_method: str = "squash") -> bool:

pr.merge(method)

Comment thread loony_dev/github/repo.py Outdated

# --- Compat helpers for project_manager ---

def list_open_prs(self) -> list[dict]:

repo.pull_requests.open

…odels

- Add Branch, Milestone, Workflow, WorkflowRun models in dedicated files
- Add IssueCollection (repo.issues.open) and PullRequestCollection
  (repo.pull_requests.open) collection proxies
- Add tick_cached and ttl_cached decorators; remove inline cache boilerplate
- Add repo.default_branch, repo.milestones as cached properties
- Add pr.issues (referenced issue numbers), pr.merged_at, pr.merge(),
  pr.is_draft to PullRequest
- Update project_manager to use model API; update prioritiser to use
  repo.milestones property

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
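
The cache decorators mentioned in the commit above could be sketched roughly like this; the key scheme and storage location are assumptions for illustration, not the branch's actual tick_cached/ttl_cached code.

```python
import time
from functools import wraps

# Rough sketch of a ttl_cached decorator: skip the underlying call while
# a cached value is younger than `ttl` seconds. The real decorators in
# the branch may key and store the cache differently.
def ttl_cached(ttl: float):
    def decorator(func):
        cache: dict = {}  # (instance id, args) -> (timestamp, value)

        @wraps(func)
        def wrapper(self, *args):
            key = (id(self), args)
            hit = cache.get(key)
            if hit is not None and time.monotonic() - hit[0] < ttl:
                return hit[1]              # still fresh: skip the API call
            value = func(self, *args)
            cache[key] = (time.monotonic(), value)
            return value
        return wrapper
    return decorator
```

With a decorator like this, an accessor such as a milestones fetcher can drop its inline timestamp bookkeeping entirely.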
@trixy-the-ai-bot
Collaborator Author

Review comments addressed.

The refactor replaced a 280-line god-class Repo with active record models (Branch, Milestone, Workflow, IssueCollection, PullRequestCollection) that encapsulate their own data fetching and caching. Caching was centralized into tick_cached/ttl_cached decorators instead of inline boilerplate, and new properties were added to PullRequest (issues, merged_at, merge, is_draft). project_manager.py and prioritiser.py were updated throughout to use the new model API.


@TMaYaD TMaYaD left a comment


the return types are using primitives and generic dicts.
Use ActiveRecord patterns. Use ObjectClasses and CollectionClasses(inheriting from list may be a good idea) with chainable filters where required.
I have highlighted a few instances. But search across the codebase for similar patterns.

Comment thread loony_dev/github/issue.py Outdated
self._repo = repo

@property
def open(self) -> list[dict]:

must return list[Issue] or better yet IssueCollection.

Comment thread loony_dev/github/branch.py Outdated
return self._sha

@property
def check_runs(self) -> list[dict]:

don't return list of generic dicts. use more concrete list like list[CheckRun] or list[WorkflowRun]

Comment thread loony_dev/github/pull_request.py Outdated
return any(a.get("login", "") == username for a in self.assignees)

@property
def issues(self) -> list[int]:

return IssueCollection

…bjects

- IssueCollection(list): list subclass of Issue instances with .where()
  chainable filters and .numbers(); replaces list[dict] everywhere
- Issue gains created_at and milestone fields; _from_api supports
  optional content sanitization; fixes milestone due-date field (dueOn)
- Branch.check_runs returns list[CheckRun] via new CheckRun.list_all()
  (all runs, not just failing)
- pr.issues returns IssueCollection of stub Issue instances instead of
  list[int]
- WorkflowRunCollection(list): inherits from list; .where() returns
  WorkflowRunCollection instead of list[WorkflowRun]
- prioritiser and project_manager updated to use Issue attributes
  (.number, .labels, .milestone, .created_at) instead of dict access

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@trixy-the-ai-bot
Collaborator Author

Review comments addressed.

Replaced raw dict returns across branch.py, issue.py, and pull_request.py with typed model objects (CheckRun, IssueCollection, Issue). IssueCollection was promoted to a proper list subclass with chainable .where() filtering and a tick-cached fetch_open() classmethod, and Issue gained created_at and milestone fields with a fix for milestone due-date parsing. prioritiser.py and project_manager.py were updated throughout to use attribute access instead of dict lookups, and WorkflowRunCollection received the same list-subclass/chainable-.where() treatment.
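
As a miniature illustration of the list-subclass pattern described above (chainable .where() plus .numbers()): the filter semantics here, membership for list-valued attributes and equality otherwise, are an assumption rather than the branch's exact logic.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    number: int
    labels: list = field(default_factory=list)

class IssueCollection(list):
    """list subclass holding Issue instances, with chainable filters."""

    def where(self, **conditions) -> "IssueCollection":
        def matches(issue):
            for attr, expected in conditions.items():
                value = getattr(issue, attr)
                # List-valued attributes (e.g. labels): test membership.
                if isinstance(value, list):
                    if expected not in value:
                        return False
                elif value != expected:
                    return False
            return True
        # Returning an IssueCollection keeps the filters chainable.
        return IssueCollection(i for i in self if matches(i))

    def numbers(self) -> list:
        return [i.number for i in self]
```

Because where() returns another IssueCollection, calls compose naturally, e.g. issues.where(labels="bug").numbers().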

@coderabbitai

coderabbitai Bot commented Apr 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

A new project-manager CLI command is introduced alongside supporting GitHub models and business logic to orchestrate a long-running polling loop that manages issues through a pipeline: prioritizing open issues using milestone proximity and AI ranking, promoting selected issues through pipeline stages, auto-merging ready PRs with configurable delays, and verifying deployment completion before marking issues done.

Changes

  • CLI Command (loony_dev/cli.py): Adds project-manager Click command with DEBUG-capable logging, repository resolution, and ProjectManager instantiation; includes 12 new options for concurrency, pipeline toggles, delays, prioritization thresholds, and AI model selection.
  • GitHub Public API (loony_dev/github/__init__.py): Expands __all__ exports to include Branch, IssueCollection, Milestone, PullRequestCollection, Workflow, WorkflowRun, and WorkflowRunCollection for public consumption.
  • GitHub Data Models (loony_dev/github/branch.py, loony_dev/github/milestone.py, loony_dev/github/workflow.py): Introduces three new Active Record–style models: Branch (lazily caches commit SHA and check runs), Milestone (represents milestones with list_open), and Workflow/WorkflowRun/WorkflowRunCollection (GitHub Actions workflow tracking with chainable filtering).
  • GitHub API Extensions (loony_dev/github/check_run.py): Adds CheckRun.list_all() classmethod with per-SHA tick caching to fetch all check runs for a commit without filtering.
  • Issue Model Expansion (loony_dev/github/issue.py): Adds typed fields (created_at, milestone); introduces IssueCollection list subclass with fetch_open(), chainable where() filtering, and numbers() methods; adds _IssueQuery proxy with cached open accessor; extends _from_api to support content sanitization.
  • Pull Request Model Expansion (loony_dev/github/pull_request.py): Adds body and is_draft fields; introduces issues property extracting referenced issue numbers from PR text/branch; adds merged_at property and merge() method with squash support; introduces PullRequestCollection with tick-cached open accessor and invalidate() method; fixes label extraction bug.
  • Business Logic — Prioritisation (loony_dev/prioritiser.py): Implements two-phase issue selection: Phase 1 filters by disqualifying labels/dependencies and scores by milestone proximity, plan markers, and age; Phase 2 uses Claude to rank the shortlist, with fallback to the Phase 1 winner on AI failures.
  • Business Logic — Orchestration (loony_dev/project_manager.py): Implements ProjectManager with a polling loop that clears tick caches, gates on main-branch health, auto-merges ready PRs (with configurable delay), tracks in-flight issue capacity, promotes candidates via Prioritiser, and verifies deployment completion before labeling issues as done.

Sequence Diagram(s)

sequenceDiagram
    participant CLI as CLI/User
    participant PM as ProjectManager
    participant Cache as Tick Cache
    participant GitHub as GitHub API
    participant Prioritiser as Prioritiser
    participant Claude as Claude AI

    CLI->>PM: run()
    loop Every interval seconds
        PM->>Cache: Clear tick cache
        PM->>GitHub: Check main branch health
        GitHub-->>PM: Health status
        alt Branch unhealthy
            PM->>PM: Pause promotion/merge
        else Branch healthy
            PM->>GitHub: Fetch open issues & PRs
            GitHub-->>PM: Issue/PR snapshots
            PM->>PM: Auto-merge ready PRs (if not skipped)
            PM->>GitHub: Check deployment completion
            GitHub-->>PM: Deployment status
            PM->>PM: Count in-flight issues
            loop Until capacity reached
                PM->>Prioritiser: select_next(open_issues)
                Prioritiser->>Prioritiser: Phase 1: Filter & score candidates
                Prioritiser->>Claude: Phase 2: AI rank shortlist
                Claude-->>Prioritiser: Ranked issue number
                Prioritiser-->>PM: Selected issue
                PM->>GitHub: Promote to pipeline
                GitHub-->>PM: Confirmation
            end
        end
        PM->>PM: Sleep 1 second
    end
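
One tick of the loop in the diagram above can be condensed into a single function, sketched here with placeholder method names on repo and prioritiser (none of these are the actual ProjectManager API):

```python
# Condensed sketch of one tick of the orchestration loop shown above.
# All method names on `repo` and `prioritiser` are placeholders.
def tick(repo, prioritiser, n: int) -> list:
    """Run one tick; return the issue numbers promoted this tick."""
    repo.clear_tick_cache()
    if not repo.main_branch_healthy():
        return []                  # circuit breaker: pause on a red main
    repo.merge_ready_prs()         # auto-merge, honouring the merge delay
    promoted = []
    while repo.in_flight_count() + len(promoted) < n:
        issue = prioritiser.select_next(repo.open_issues())
        if issue is None:
            break                  # no eligible candidates left
        repo.promote(issue)        # label ready-for-planning/-development
        promoted.append(issue)
    return promoted
```

The outer run() loop would call tick() every interval seconds and sleep between checks, as in the diagram.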
sequenceDiagram
    participant Prioritiser
    participant Issues as Issue Data
    participant OpenPRs as Open PRs
    participant Body as Issue Body
    participant Claude as Claude CLI

    Prioritiser->>Issues: Get all open issues
    Issues-->>Prioritiser: List of issues
    loop For each issue
        Prioritiser->>OpenPRs: Check if issue has open PR
        alt Has open PR
            Prioritiser->>Prioritiser: Exclude (disqualifying)
        else No open PR
            Prioritiser->>Body: Extract blocking dependencies
            alt Has blocking dependencies
                Prioritiser->>Prioritiser: Exclude (disqualifying)
            else No blockers
                Prioritiser->>Prioritiser: Score by milestone, plan marker, labels, age
                Prioritiser->>Prioritiser: Add to shortlist
            end
        end
    end
    Prioritiser->>Prioritiser: Sort shortlist by score, truncate to size
    alt Shortlist has exactly 1 candidate
        Prioritiser-->>Prioritiser: Return top candidate, skip Phase 2
    else Shortlist has multiple candidates
        Prioritiser->>Claude: Send milestone/issue metadata + prompt
        Claude-->>Prioritiser: Parse JSON response for issue_number
        Prioritiser-->>Prioritiser: Return selected issue or fallback to Phase 1 top
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Suggested reviewers

  • TMaYaD

Poem

🐰 Hops with glee through issue queues
With priorities set and pipelines true,
Milestones near? Claude picks the best,
Merges and deploys put our code to the test!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 68.92%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check ✅ Passed: Check skipped — CodeRabbit’s high-level summary is enabled.
  • Title check ✅ Passed: The title 'feat(project-manager): add project-manager command (#94)' clearly and specifically identifies the main change: adding a new project-manager command. It aligns with the primary objective of implementing a project-manager click command.
  • Linked Issues check ✅ Passed: The pull request comprehensively implements all coding requirements from issue #94: it adds the project-manager command, implements two-phase prioritisation, manages N issues in-flight, auto-merges PRs with CI verification, pauses on branch health issues, and moves issues through pipeline stages.
  • Out of Scope Changes check ✅ Passed: All changes directly support the project-manager functionality. New modules (prioritiser.py, project_manager.py, milestone.py, workflow.py, branch.py) and GitHub API additions (CheckRun.list_all, Issue collections, PullRequest enhancements, Workflow support) are essential for pipeline orchestration and prioritisation.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@trixy-the-ai-bot
Collaborator Author

Merge conflicts resolved.

The merge conflict in issue.py was resolved by combining fields from both branches: main contributed assignees, is_assigned_to, and has_other_assignee, while the feature branch added created_at, milestone, IssueCollection(list), and _IssueQuery. Both JSON fetch calls were updated to request all fields (milestone,assignees). The resolved file was pushed to the remote branch.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (6)
loony_dev/github/pull_request.py (1)

164-178: Consider caching merged_at result.

The merged_at property makes a fresh API call on every access. If this property is accessed multiple times in a single tick (e.g., for logging and then for deployment verification), it will redundantly query GitHub.

💡 Option: Cache on instance after first fetch
+        self._merged_at: datetime | None = None
+        self._merged_at_fetched: bool = False
+
     @property
     def merged_at(self):
         """Return the merged-at datetime if this PR has been merged, else ``None``."""
+        if self._merged_at_fetched:
+            return self._merged_at
         try:
             output = self._repo.client.gh(
                 "pr", "view", str(self.number),
                 "--json", "state,merged,mergedAt",
                 "-q", ".",
             )
             data = _json.loads(output)
             if data.get("merged"):
-                return parse_datetime(data.get("mergedAt"))
+                self._merged_at = parse_datetime(data.get("mergedAt"))
         except (subprocess.CalledProcessError, ValueError):
             logger.warning("Failed to fetch merge state for PR #%d", self.number)
-        return None
+        self._merged_at_fetched = True
+        return self._merged_at
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `loony_dev/github/pull_request.py` around lines 164-178, the merged_at
property currently queries GitHub every access; change it to cache the fetched
value on the instance: add a private attribute (e.g., self._merged_at or
self._cached_merged_at) initialized to a sentinel (None or a separate _UNSET)
and have merged_at check that attribute first, performing the API call only if
the cache is unset, storing the parsed datetime (or None) into the cache before
returning; update any error-handling to still set the cache on failure so
subsequent accesses don't re-query repeatedly.
loony_dev/github/check_run.py (1)

100-105: Error results are tick-cached, unlike list_failing.

When CalledProcessError is raised, the code sets runs = [] and then caches it at line 104. This means failed API calls will return [] for the entire tick without retrying. The existing list_failing method (lines 71-73) returns immediately on error without caching.

If this is intentional (to avoid repeated failing calls within a tick), consider adding a brief comment. If not, move the cache write inside the try block.

💡 Option A: Don't cache errors (match list_failing behavior)
         except subprocess.CalledProcessError:
             logger.warning("Failed to fetch all check runs for SHA %r", head_sha)
-            runs = []
+            return []

         repo._tick_cache[cache_key] = runs
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `loony_dev/github/check_run.py` around lines 100-105, the current
CheckRun.list_all error path caches an empty result on
subprocess.CalledProcessError (assigning runs = [] then writing
repo._tick_cache[cache_key] = runs), causing failed API calls to be suppressed
for the whole tick; change this to match list_failing behavior by moving the
cache write (repo._tick_cache[cache_key] = runs) inside the try block so caching
only occurs on successful responses, and leave the except block to return
without writing to repo._tick_cache (or add a short comment if caching on error
was intentional). Ensure you update the logic in the list_all function and
reference cache_key, repo._tick_cache, and the CalledProcessError except handler
when making the change.
loony_dev/cli.py (1)

221-226: Repo.detect() called without cwd parameter.

The worker command (line 77) passes cwd=str(work_path) to Repo.detect(), but project-manager calls it without arguments. This means project-manager will always detect from the current working directory, which may be fine for this use case but differs from worker's behavior.

If project-manager should also support --work-dir, consider adding the option for consistency. Otherwise, this is acceptable as-is.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `loony_dev/cli.py` around lines 221-226, the code calls Repo.detect()
without a cwd like the worker command does; add a --work-dir click option to the
project-manager command and pass it to Repo.detect to make behavior consistent:
declare a work_dir parameter for the command (matching worker's name/type), call
Repo.detect(cwd=str(work_dir)) when repo is None, and use that detected path for
github = Repo(repo) so detection occurs relative to the provided work-dir; also
update the option help text accordingly.
loony_dev/github/branch.py (1)

23-36: Instance-level SHA cache persists across ticks.

The sha property caches the result on the Branch instance (self._sha). If a Branch object is reused across multiple ticks (e.g., stored in a long-lived Repo property), the SHA will become stale as the branch advances.

Consider either:

  1. Using tick-caching via repo._tick_cache (consistent with other models), or
  2. Documenting that Branch instances should be short-lived / recreated each tick.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `loony_dev/github/branch.py` around lines 23-36, the Branch.sha property
currently stores the commit SHA on the Branch instance (self._sha) which causes
stale values across ticks; update Branch.sha to use the repo tick cache instead
of an instance-level field: in the sha property's getter (method name: sha on
class Branch) read/write the SHA from self._repo._tick_cache under a stable key
(e.g. include the branch name) so the value is refreshed each tick, falling back
to calling self._repo.client.gh when the cache miss occurs and then storing the
result in _tick_cache; if you prefer the alternate approach, remove the instance
cache and document that Branch objects must be recreated per tick. Ensure
exception handling (subprocess.CalledProcessError) and the existing
logger.warning remain when populating the tick cache.
loony_dev/github/workflow.py (1)

52-79: runs property is not cached.

Each access to Workflow.runs makes a fresh API call. If deployment verification polls this property multiple times within a tick, consider adding tick-caching similar to other collection properties.

However, since deployment runs may complete during a tick, avoiding caching might be intentional for freshness. If so, this is fine as-is.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `loony_dev/github/workflow.py` around lines 52-79, the runs property
currently always calls the API; add tick-caching inside the Workflow.runs
property so repeated accesses within the same tick return a cached
WorkflowRunCollection. Implement by storing a small cache (e.g.
self._cached_runs and self._cached_runs_tick) on the Workflow instance, compare
the current tick value from the repository (e.g. self._repo.tick or equivalent
tick provider used by other collection properties) to _cached_runs_tick, and
only call self._repo.client.gh_api(...) to refresh and set
_cached_runs/_cached_runs_tick when the tick has advanced or cache is empty;
otherwise return the cached WorkflowRunCollection. Ensure you still parse
entries into WorkflowRun and log as before.
loony_dev/prioritiser.py (1)

302-311: Consider input validation and structured formatting for prompt injection defense.

The subprocess uses list arguments (mitigating shell injection), but the prompt variable is constructed from untrusted GitHub issue data (titles, bodies, labels). While the immediate risk is prompt injection rather than command execution, best practices recommend defense-in-depth:

  • Input validation: Filter or sanitize issue content for injection patterns before constructing the prompt.
  • Structured formatting: Use clear delimiters (e.g., XML tags) to separate system instructions from user-controlled data, signaling to Claude that issue content is data, not directives.
  • Output validation: Validate Claude's response against expected schema.

The static analysis flags (S607, S603) are acknowledged; partial path "claude" is standard for CLI tools, and untrusted input handling is the core concern here. Current implementation is functional but lacks these mitigations, which are straightforward to add.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `loony_dev/prioritiser.py` around lines 302-311, the prompt construction
currently passes untrusted GitHub issue content into subprocess.run (see prompt
variable used in the ["claude", ...] call) without sanitization; before building
prompt, add input validation/sanitization for issue titles, bodies and labels
(strip or escape control characters, remove unexpected injection tokens like
instruction separators), and wrap all user-controlled data in explicit,
machine-readable delimiters/tags (e.g., <ISSUE>...</ISSUE>) so system
instructions are separate from data; after subprocess.run returns, validate
result.stdout against the expected schema/format and reject or re-request
parsing if it doesn't conform (use result.returncode and result.stderr for error
handling as now).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `loony_dev/github/issue.py`:
- Line 149: The list comprehension using a one-letter variable `l`
(labels=[l["name"] for l in data.get("labels", [])]) triggers Ruff E741; change
the comprehension variable to a clear name like `label` (labels=[label["name"]
for label in data.get("labels", [])]) and update any adjacent references if
present so the codebase passes lint checks.

In `loony_dev/github/pull_request.py`:
- Around line 135-162: The return annotation IssueCollection on the issues
property is referenced before it's imported at module level, causing runtime
introspection (typing.get_type_hints) to fail; to fix, add IssueCollection (and
Issue if not already) to the module-level TYPE_CHECKING imports block (i.e.,
import IssueCollection from loony_dev.github.issue inside an if TYPE_CHECKING:
...), keep the existing runtime local import inside issues for behavior, and
ensure typing.TYPE_CHECKING is imported at top if missing so annotations can be
resolved during type-checking/introspection.

In `loony_dev/github/repo.py`:
- Around line 337-352: find_pr_for_issue currently returns the newest created PR
regardless of merge status which can hide an earlier merged PR; update the
method (find_pr_for_issue) to request "state" (and optionally "mergedAt") in the
gh_json --json call, then prefer and return a PR whose state is "MERGED" (or has
a non-null mergedAt) if any exist, otherwise fall back to the newest by
createdAt as before; ensure you still iterate the same search terms but
filter/sort the returned data to select merged PRs first.

In `@loony_dev/prioritiser.py`:
- Around line 54-60: The code that builds the regex after re.escape(pattern) is
incorrectly using escaped.rstrip(r"\#") which removes any trailing backslashes
or hashes individually; change this to explicitly remove the escaped sequence
for a literal hash: after escaped = re.escape(pattern) check if
escaped.endswith(r"\#") and if so set escaped = escaped[:-2] (or otherwise
remove the exact suffix), then append r"\s*#(\d+)" and compile the pattern (the
code around regex = re.compile(... ) in prioritiser.py). Ensure you preserve any
trailing whitespace handling (e.g., call .rstrip() only after removing the exact
'\#' suffix) so find_issue numbers still match as intended.
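One way to sketch the fix is to strip the literal trailing `#` from the raw pattern before escaping, which avoids peeling `\#` off the escaped string entirely (function name is illustrative):

```python
import re

def build_issue_regex(pattern: str) -> re.Pattern:
    # rstrip(r"\#") on the escaped string removes trailing '\' and '#'
    # characters independently; removing the literal '#' from the raw
    # pattern *before* re.escape sidesteps that bug.
    raw = pattern.rstrip()
    if raw.endswith("#"):
        raw = raw[:-1].rstrip()
    return re.compile(re.escape(raw) + r"\s*#(\d+)")
```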

In `@loony_dev/project_manager.py`:
- Around line 82-105: The constructor stores milestone_cache_ttl but it is never
used when reading milestones, so make the option actually apply by plumbing the
value into the Github/Repo milestone call: update ProjectManager.__init__ to
either pass milestone_cache_ttl into the Repo instance (or store it as
self.milestone_cache_ttl) and then call Repo.milestones(...,
ttl=self.milestone_cache_ttl) wherever milestones are fetched; alternatively add
an optional ttl parameter to Repo.milestones and ensure calls from
ProjectManager (and any methods like fetch_milestones or get_milestones in
project_manager.py) supply self.milestone_cache_ttl so the Repo.milestones cache
TTL overrides the hardcoded 3600s.
- Around line 298-327: The bug is that self._ci_passed_at is keyed only by PR
number so a new head commit inherits the old timer; change the logic to track CI
pass per head SHA (e.g. key by (pr.number, pr.head_sha) or maintain a mapping
from pr.number -> (head_sha, timestamp)) in the block handling successful
checks: when you see pr.head_sha not in the map or different from the stored
SHA, reset/set the timestamp for that head (use self._ci_passed_at with keys
including pr.head_sha or update an auxiliary self._ci_head_sha map), then
compute elapsed using the timestamp tied to the current pr.head_sha so a fresh
push restarts the merge-delay clock.
- Around line 192-202: The snapshot of open PRs is taken into
open_prs/open_pr_by_number before calling _maybe_merge_ready_prs(), causing
stale data when merges happen; move or re-create the snapshot after auto-merges
by calling _maybe_merge_ready_prs(open_prs) first (or call it, then reassign
open_prs = list(self.github.pull_requests.open) and recompute open_pr_by_number
and open_pr_issue_numbers) so that _check_merged_for_deployment(all_issues,
open_pr_by_number) operates on an up-to-date view; update references to
open_prs/open_pr_by_number/open_pr_issue_numbers accordingly.
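The per-head-SHA merge-delay tracking suggested above can be sketched as a small helper; class and method names are illustrative, not the PR's:

```python
class MergeDelayTracker:
    def __init__(self, merge_delay: float):
        self.merge_delay = merge_delay
        # pr_number -> (head_sha, ci_passed_at); a new SHA restarts the clock
        self._ci_passed_at: dict[int, tuple[str, float]] = {}

    def record_green(self, pr_number: int, head_sha: str, now: float) -> None:
        stored = self._ci_passed_at.get(pr_number)
        if stored is None or stored[0] != head_sha:
            self._ci_passed_at[pr_number] = (head_sha, now)  # fresh push: reset

    def record_regression(self, pr_number: int, head_sha: str) -> None:
        # Checks went red on the same SHA: forget the timestamp so a later
        # green run restarts the full merge delay.
        stored = self._ci_passed_at.get(pr_number)
        if stored and stored[0] == head_sha:
            del self._ci_passed_at[pr_number]

    def ready(self, pr_number: int, head_sha: str, now: float) -> bool:
        stored = self._ci_passed_at.get(pr_number)
        if stored is None or stored[0] != head_sha:
            return False
        return now - stored[1] >= self.merge_delay
```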

---

Nitpick comments:
In `@loony_dev/cli.py`:
- Around line 221-226: The code calls Repo.detect() without a cwd like the
worker command does; add a --work-dir click option to the project-manager
command and pass it to Repo.detect to make behavior consistent: declare a
work_dir parameter for the command (matching worker's name/type), call
Repo.detect(cwd=str(work_dir)) when repo is None, and use that detected path for
github = Repo(repo) so detection occurs relative to the provided work-dir; also
update the option help text accordingly.

In `@loony_dev/github/branch.py`:
- Around line 23-36: The Branch.sha property currently stores the commit SHA on
the Branch instance (self._sha) which causes stale values across ticks; update
Branch.sha to use the repo tick cache instead of an instance-level field: in the
sha property's getter (method name: sha on class Branch) read/write the SHA from
self._repo._tick_cache under a stable key (e.g. include the branch name) so the
value is refreshed each tick, falling back to calling self._repo.client.gh when
the cache miss occurs and then storing the result in _tick_cache; if you prefer
the alternate approach, remove the instance cache and document that Branch
objects must be recreated per tick. Ensure exception handling
(subprocess.CalledProcessError) and the existing logger.warning remain when
populating the tick cache.

In `@loony_dev/github/check_run.py`:
- Around line 100-105: The current CheckRun.list_all error path caches an empty
result on subprocess.CalledProcessError (assigning runs = [] then writing
repo._tick_cache[cache_key] = runs), causing failed API calls to be suppressed
for the whole tick; change this to match list_failing behavior by moving the
cache write (repo._tick_cache[cache_key] = runs) inside the try block so caching
only occurs on successful responses, and leave the except block to return
without writing to repo._tick_cache (or add a short comment if caching on error
was intentional). Ensure you update the logic in the list_all function and
reference cache_key, repo._tick_cache, and the CalledProcessError except handler
when making the change.

In `@loony_dev/github/pull_request.py`:
- Around line 164-178: The merged_at property currently queries GitHub every
access; change it to cache the fetched value on the instance: add a private
attribute (e.g., self._merged_at or self._cached_merged_at) initialized to a
sentinel (None or a separate _UNSET) and have merged_at check that attribute
first, performing the API call only if the cache is unset, storing the parsed
datetime (or None) into the cache before returning; update any error-handling to
still set the cache on failure so subsequent accesses don't re-query repeatedly.
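A sketch of the sentinel-cached property, with an injected `fetch` callable standing in for the gh API query (illustrative, not the PR's code):

```python
_UNSET = object()  # distinguishes "never fetched" from a legitimate None

class PullRequest:
    def __init__(self, number: int, fetch):
        self.number = number
        self._fetch = fetch        # stand-in for the gh API call
        self._merged_at = _UNSET

    @property
    def merged_at(self):
        if self._merged_at is _UNSET:
            try:
                self._merged_at = self._fetch(self.number)
            except Exception:
                # Cache the failure as None so we don't re-query every access.
                self._merged_at = None
        return self._merged_at
```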

In `@loony_dev/github/workflow.py`:
- Around line 52-79: The runs property currently always calls the API; add
tick-caching inside the Workflow.runs property so repeated accesses within the
same tick return a cached WorkflowRunCollection. Implement by storing a small
cache (e.g. self._cached_runs and self._cached_runs_tick) on the Workflow
instance, compare the current tick value from the repository (e.g.
self._repo.tick or equivalent tick provider used by other collection properties)
to _cached_runs_tick, and only call self._repo.client.gh_api(...) to refresh and
set _cached_runs/_cached_runs_tick when the tick has advanced or cache is empty;
otherwise return the cached WorkflowRunCollection. Ensure you still parse
entries into WorkflowRun and log as before.

In `@loony_dev/prioritiser.py`:
- Around line 302-311: The prompt construction currently passes untrusted GitHub
issue content into subprocess.run (see prompt variable used in the ["claude",
...] call) without sanitization; before building prompt, add input
validation/sanitization for issue titles, bodies and labels (strip or escape
control characters, remove unexpected injection tokens like instruction
separators), and wrap all user-controlled data in explicit, machine-readable
delimiters/tags (e.g., <ISSUE>...</ISSUE>) so system instructions are separate
from data; after subprocess.run returns, validate result.stdout against the
expected schema/format and reject or re-request parsing if it doesn't conform
(use result.returncode and result.stderr for error handling as now).
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: 442673a3-6471-410b-bda5-9376aae5d854

📥 Commits

Reviewing files that changed from the base of the PR and between d63f074 and df25c11.

📒 Files selected for processing (11)
  • loony_dev/cli.py
  • loony_dev/github/__init__.py
  • loony_dev/github/branch.py
  • loony_dev/github/check_run.py
  • loony_dev/github/issue.py
  • loony_dev/github/milestone.py
  • loony_dev/github/pull_request.py
  • loony_dev/github/repo.py
  • loony_dev/github/workflow.py
  • loony_dev/prioritiser.py
  • loony_dev/project_manager.py

Comment thread loony_dev/github/issue.py Outdated
Comment thread loony_dev/github/repo.py
Comment thread loony_dev/prioritiser.py
Comment thread loony_dev/project_manager.py
Comment thread loony_dev/project_manager.py
Comment thread loony_dev/project_manager.py
- fix(issue): rename `l` → `label` in _from_api to satisfy Ruff E741
- fix(pull_request): add IssueCollection to TYPE_CHECKING to avoid F821
- fix(repo): prefer merged PRs in find_pr_for_issue to prevent abandoned
  PRs from shadowing the actual merged PR
- fix(repo): honour milestone_cache_ttl instead of hardcoded 3600s;
  inline TTL check in milestones property using self.milestone_cache_ttl
- fix(prioritiser): strip trailing # before re.escape to fix rstrip logic
- fix(project_manager): wire milestone_cache_ttl into github.milestone_cache_ttl
- fix(project_manager): refresh open-PR snapshot after _maybe_merge_ready_prs
  so _check_merged_for_deployment sees post-merge state
- fix(project_manager): key _ci_passed_at by (head_sha, ts) so a fresh
  push resets the merge-delay clock

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@trixy-the-ai-bot
Collaborator Author

Review comments addressed.

The PR addressed six review comments across five files: fixing a Ruff linter warning (`l` → `label`), adding a missing TYPE_CHECKING import, improving find_pr_for_issue to prefer merged PRs, fixing a regex rstrip bug in the prioritiser, and propagating milestone_cache_ttl while also resetting the merge-delay clock on fresh pushes in the project manager.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@loony_dev/github/pull_request.py`:
- Around line 136-163: The issues property currently only scans title and
branch; update the issues property in pull_request.py to also scan the PR body
(self.body) for referenced issue numbers, including both bare "#123" and keyword
patterns like "closes #123", "fixes #123" (case-insensitive), while preserving
the existing deduplication via seen and appending into numbers; return the same
IssueCollection(Issue(number=n, _repo=self._repo) for n in numbers) so
downstream code (ProjectManager) sees issues referenced in the body as well.

In `@loony_dev/github/repo.py`:
- Around line 325-335: The add_label and remove_label helpers currently swallow
subprocess.CalledProcessError which makes callers like
ProjectManager._promote_to_pipeline and _check_merged_for_deployment believe
labeling succeeded; update add_label and remove_label (methods on the repo
client that call self.client.gh("issue","edit",...)) to not swallow
failures—either remove the try/except so CalledProcessError propagates, or log
and re-raise the exception (or change the signature to return a boolean success
and have the ProjectManager callers check it); ensure
ProjectManager._promote_to_pipeline and _check_merged_for_deployment are updated
to handle a raised exception or check the returned bool and retry/abort
accordingly so label failures are not treated as successes.

In `@loony_dev/project_manager.py`:
- Around line 316-344: The merge-delay timer isn't reset when checks regress on
the same SHA: in the branches that early-continue for entry.all_completed ==
False or check_runs (failing checks), if self._ci_passed_at contains an entry
for this pr and its SHA equals pr.head_sha you must clear that stored timestamp
so a later successful run of the same SHA will restart the merge_delay; update
the blocks handling entry.all_completed and check_runs to check
self._ci_passed_at.get(pr.number) and if stored and stored[0] == pr.head_sha
then del self._ci_passed_at[pr.number] and
self._actions_taken.discard(f"merge-delay:{pr.number}") before continuing (use
symbols _ci_passed_at, _actions_taken, pr.number, pr.head_sha,
entry.all_completed, check_runs, merge_delay to locate the code).
- Around line 263-281: In _count_in_flight, skip any issue that has the "done"
label so completed items don’t occupy pipeline slots: when computing labels for
each issue in _count_in_flight check if "done" is in labels and continue (do not
increment) before applying the existing _PIPELINE_LABELS logic; similarly, when
adding self._merged_issue_prs only count merged issues that are not labeled
"done" (e.g. filter self._merged_issue_prs by looking up the corresponding Issue
in all_issues and excluding those whose labels contain "done") so
merged-but-complete items aren’t counted as in-flight.
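The in-flight count described above can be sketched as follows; the pipeline label names are taken from the PR description (`ready-for-planning`, `ready-for-development`), and the exact membership of `_PIPELINE_LABELS` is an assumption:

```python
_PIPELINE_LABELS = {"ready-for-planning", "ready-for-development"}  # assumed set

def count_in_flight(issues: list[dict], open_pr_issue_numbers: set[int]) -> int:
    count = 0
    for issue in issues:
        labels = set(issue["labels"])
        if "done" in labels:
            continue  # completed issues never occupy a pipeline slot
        if labels & _PIPELINE_LABELS:
            count += 1
        elif issue["number"] in open_pr_issue_numbers:
            count += 1  # implementation submitted, awaiting merge
    return count
```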

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: 9e8dba46-3994-4844-b624-5cfec2acb0c2

📥 Commits

Reviewing files that changed from the base of the PR and between df25c11 and 938c350.

📒 Files selected for processing (5)
  • loony_dev/github/issue.py
  • loony_dev/github/pull_request.py
  • loony_dev/github/repo.py
  • loony_dev/prioritiser.py
  • loony_dev/project_manager.py

Comment thread loony_dev/github/pull_request.py
Comment thread loony_dev/github/repo.py Outdated
Comment thread loony_dev/project_manager.py
Comment thread loony_dev/project_manager.py
- fix(pull_request): scan PR body for issue references (Closes #N, Fixes
  #N, bare #N) so PRs using body-only links are visible to ProjectManager
- fix(pull_request): add body field to PullRequest model and JSON fetches;
  fix label comprehension variable l → label (Ruff E741)
- fix(repo): re-raise in add_label/remove_label so callers can detect
  transient failures and retry instead of silently treating them as success
- fix(project_manager): handle add_label failures in _promote_to_pipeline
  and _check_merged_for_deployment (retry next tick on error)
- fix(project_manager): skip done-labelled issues in _count_in_flight so
  completed issues don't occupy pipeline slots indefinitely
- fix(project_manager): reset _ci_passed_at when checks regress (not-yet-
  complete or failing) so the merge-delay clock restarts on re-green

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@trixy-the-ai-bot
Collaborator Author

Review comments addressed.

Four review comments were addressed across pull_request.py, repo.py, and project_manager.py. The changes improve correctness (PR body scanning for issues, `l` → `label` rename, done-label exclusion from in-flight counts) and error resilience (add_label/remove_label now propagate failures, label errors retry next tick, CI clock resets on non-green status).


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
loony_dev/project_manager.py (2)

279-281: Redundant condition on line 280.

The check not (labels & _PIPELINE_LABELS) is always true at this point because lines 276-278 already continue when pipeline labels are present.

Simplify
             # PR open with no pipeline label (implementation submitted, awaiting merge)
-            if issue.number in open_pr_issue_numbers and not (labels & _PIPELINE_LABELS):
+            if issue.number in open_pr_issue_numbers:
                 count += 1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@loony_dev/project_manager.py` around lines 279 - 281, Redundant boolean
check: inside the loop that builds the PR count (uses open_pr_issue_numbers,
labels, _PIPELINE_LABELS, count, issue.number) you can remove the unnecessary
"not (labels & _PIPELINE_LABELS)" clause because earlier code already continues
when pipeline labels are present; simplify the condition to just check "if
issue.number in open_pr_issue_numbers: count += 1" (or equivalent) so the branch
is clearer and the redundant bitwise label test is dropped.

404-416: Consider handling post_comment failure.

If add_label succeeds but post_comment raises, action_key is added but the confirmation comment is missing. The issue will still be marked done, so this is low-impact, but users won't see the confirmation message.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@loony_dev/project_manager.py` around lines 404 - 416, The code currently adds
the action key then calls self.github.post_comment, so if post_comment fails the
issue is marked done but the confirmation comment is missing; change the order
and error handling: call self.github.add_label(issue_number, _DONE_LABEL), then
attempt self.github.post_comment(...) inside a try/except (catch
subprocess.CalledProcessError or a broad Exception if post_comment can raise
others), only add self._actions_taken.add(action_key) after post_comment
succeeds, and on exception log a warning referencing PM_MARKER and issue_number
and leave the action key out so the operation will be retried on the next tick;
still call self.github.issues.invalidate() as appropriate (either on success or
failure) to refresh state.
loony_dev/github/repo.py (2)

339-342: Consider consistent error handling for post_comment.

Unlike add_label/remove_label, post_comment doesn't catch and log subprocess.CalledProcessError before propagating. If this inconsistency is intentional (comment failures should halt operations), it's fine—otherwise consider aligning the error handling pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@loony_dev/github/repo.py` around lines 339 - 342, post_comment currently
calls self.client.gh(...) without catching subprocess.CalledProcessError, unlike
add_label/remove_label; wrap the gh invocation inside a try/except that catches
subprocess.CalledProcessError, log an error via logger.error including the
exception and the truncated body (use truncate_for_log) and then either re-raise
or return/handle consistently with add_label/remove_label's behavior so error
handling is aligned with the other label methods.

91-111: ttl_cached decorator is defined but unused.

The milestones property manually implements TTL caching to support instance-level milestone_cache_ttl configuration. Consider removing this decorator if no other methods will use it, or document why it exists for future use.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@loony_dev/github/repo.py` around lines 91 - 111, The ttl_cached decorator is
dead code because the milestones property implements its own TTL using
milestone_cache_ttl and self._ttl_cache; either remove the unused function
ttl_cached or refactor milestones to reuse it: if removing, delete the
ttl_cached definition and any imports (functools, time) only used by it; if
reusing, change the milestones method to use
@ttl_cached(default_ttl=self.milestone_cache_ttl) or adapt ttl_cached to accept
a callable/default resolver so it reads milestone_cache_ttl from the instance at
call time, ensuring references to ttl_cached, milestones, milestone_cache_ttl
and self._ttl_cache are updated consistently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@loony_dev/github/repo.py`:
- Around line 79-87: The wrapper currently treats cached None as a cache miss
because it checks `if cached is not None`; change the logic to check presence of
the key in the cache instead. In the `wrapper` function that wraps `method`,
replace the `cached = self._tick_cache.get(key)` / `if cached is not None`
branch with a presence check like `if key in self._tick_cache: return
self._tick_cache[key]`, then call `result = method(self, *args)` and always
assign `self._tick_cache[key] = result` (so legitimate None results are cached).
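The corrected decorator can be sketched as a minimal standalone version, assuming `self._tick_cache` is a plain dict:

```python
import functools

def tick_cached(method):
    @functools.wraps(method)
    def wrapper(self, *args):
        key = (method.__name__, args)
        if key in self._tick_cache:      # presence check: a cached None is a hit
            return self._tick_cache[key]
        result = method(self, *args)
        self._tick_cache[key] = result   # legitimate None results are cached too
        return result
    return wrapper
```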

---

Nitpick comments:
In `@loony_dev/github/repo.py`:
- Around line 339-342: post_comment currently calls self.client.gh(...) without
catching subprocess.CalledProcessError, unlike add_label/remove_label; wrap the
gh invocation inside a try/except that catches subprocess.CalledProcessError,
log an error via logger.error including the exception and the truncated body
(use truncate_for_log) and then either re-raise or return/handle consistently
with add_label/remove_label's behavior so error handling is aligned with the
other label methods.
- Around line 91-111: The ttl_cached decorator is dead code because the
milestones property implements its own TTL using milestone_cache_ttl and
self._ttl_cache; either remove the unused function ttl_cached or refactor
milestones to reuse it: if removing, delete the ttl_cached definition and any
imports (functools, time) only used by it; if reusing, change the milestones
method to use @ttl_cached(default_ttl=self.milestone_cache_ttl) or adapt
ttl_cached to accept a callable/default resolver so it reads milestone_cache_ttl
from the instance at call time, ensuring references to ttl_cached, milestones,
milestone_cache_ttl and self._ttl_cache are updated consistently.

In `@loony_dev/project_manager.py`:
- Around line 279-281: Redundant boolean check: inside the loop that builds the
PR count (uses open_pr_issue_numbers, labels, _PIPELINE_LABELS, count,
issue.number) you can remove the unnecessary "not (labels & _PIPELINE_LABELS)"
clause because earlier code already continues when pipeline labels are present;
simplify the condition to just check "if issue.number in open_pr_issue_numbers:
count += 1" (or equivalent) so the branch is clearer and the redundant bitwise
label test is dropped.
- Around line 404-416: The code currently adds the action key then calls
self.github.post_comment, so if post_comment fails the issue is marked done but
the confirmation comment is missing; change the order and error handling: call
self.github.add_label(issue_number, _DONE_LABEL), then attempt
self.github.post_comment(...) inside a try/except (catch
subprocess.CalledProcessError or a broad Exception if post_comment can raise
others), only add self._actions_taken.add(action_key) after post_comment
succeeds, and on exception log a warning referencing PM_MARKER and issue_number
and leave the action key out so the operation will be retried on the next tick;
still call self.github.issues.invalidate() as appropriate (either on success or
failure) to refresh state.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: 0555a671-25a6-4a78-8e92-a1736f55334f

📥 Commits

Reviewing files that changed from the base of the PR and between 938c350 and 5bc55ec.

📒 Files selected for processing (3)
  • loony_dev/github/pull_request.py
  • loony_dev/github/repo.py
  • loony_dev/project_manager.py
✅ Files skipped from review due to trivial changes (1)
  • loony_dev/github/pull_request.py

Comment thread loony_dev/github/repo.py
Methods that legitimately return None were not being cached — the old
`if cached is not None` check treated them as a miss and re-invoked the
method on every access within the same tick. Switch to `key in self._tick_cache`
so any result, including None, is cached after the first call.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@trixy-the-ai-bot
Collaborator Author

Review comments addressed.

Fixed a bug in the tick_cached decorator by replacing the value-presence check (if cached is not None) with a key-presence check (if key in self._tick_cache). This ensures methods that legitimately return None are properly cached and not redundantly re-invoked on subsequent accesses within the same tick.


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
loony_dev/github/repo.py (1)

343-346: Consider adding a more specific return type hint.

The return type -> list could be more specific as -> list[Comment] for better IDE support and type safety. This requires importing Comment in a TYPE_CHECKING block to avoid circular imports at runtime.

♻️ Proposed fix

Add to imports at top of file:

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from loony_dev.github.comment import Comment

Then update the method signature:

-    def get_issue_comments(self, number: int) -> list:
+    def get_issue_comments(self, number: int) -> list[Comment]:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@loony_dev/github/repo.py` around lines 343 - 346, Add a precise return type
for get_issue_comments by changing its annotation to return list[Comment]; to
avoid runtime circular imports, import TYPE_CHECKING from typing and put "from
loony_dev.github.comment import Comment" inside an "if TYPE_CHECKING:" block at
the top of the module, then update the get_issue_comments signature to ->
list[Comment] and keep the runtime import inside the method as-is.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@loony_dev/github/repo.py`:
- Around line 343-346: Add a precise return type for get_issue_comments by
changing its annotation to return list[Comment]; to avoid runtime circular
imports, import TYPE_CHECKING from typing and put "from loony_dev.github.comment
import Comment" inside an "if TYPE_CHECKING:" block at the top of the module,
then update the get_issue_comments signature to -> list[Comment] and keep the
runtime import inside the method as-is.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: bb60130a-c836-4be1-b560-3733e1b5c42a

📥 Commits

Reviewing files that changed from the base of the PR and between 5bc55ec and 7759626.

📒 Files selected for processing (1)
  • loony_dev/github/repo.py

@trixy-the-ai-bot
Collaborator Author

Merge conflicts resolved.

Merge conflicts on the feat/project-manager-94 branch were resolved and pushed, preserving changes from both sides. In pull_request.py, the body field (added by this feature for PR body scanning) and author field (added on main) were combined in __init__, storage, and all JSON fetch fields. In repo.py, the owner property introduced on main was retained.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@loony_dev/cli.py`:
- Line 244: The deploy_workflow value is being forced to a string which converts
None to "None" and breaks ProjectManager.__init__'s expected str | None; change
the call that sets deploy_workflow (the expression deploying
deploy_workflow=str(pm_cfg.get("deploy_workflow",
config.settings.deploy_workflow))) to preserve None instead of coercing it—i.e.,
pass the raw value from pm_cfg.get (or explicitly check for None and only call
str() for non-None values) so ProjectManager receives either a real string or
None as intended.
- Around line 237-248: The ProjectManager constructor call is missing two
supported settings (milestone_cache_ttl and dependency_patterns) so values from
project_manager config are ignored; update the call that creates ProjectManager
to forward these two arguments from pm_cfg using the same pattern as the other
options (e.g. milestone_cache_ttl=int(pm_cfg.get("milestone_cache_ttl",
config.settings.milestone_cache_ttl)) and
dependency_patterns=pm_cfg.get("dependency_patterns",
config.settings.dependency_patterns)) so ProjectManager.__init__ receives both
values.
- Around line 225-228: Repo autodetection via Repo.detect() can raise
subprocess.CalledProcessError and currently bubbles up causing a traceback; wrap
the call that assigns repo when config.settings.repo is None (the Repo.detect()
call near repo = config.settings.repo) in a try/except that catches
subprocess.CalledProcessError (ensure subprocess is imported) and convert it
into a CLI-friendly error (e.g., raise click.ClickException or click.echo an
explanatory message and exit with non‑zero status) instructing the user to pass
--repo instead of letting the exception propagate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: a7e546d9-9c9b-4e88-89d1-493bc8fa808e

📥 Commits

Reviewing files that changed from the base of the PR and between 7759626 and 69e80b2.

📒 Files selected for processing (3)
  • loony_dev/cli.py
  • loony_dev/github/issue.py
  • loony_dev/github/pull_request.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • loony_dev/github/issue.py

Comment thread loony_dev/cli.py
Comment thread loony_dev/cli.py
Comment thread loony_dev/cli.py Outdated
- wrap Repo.detect() in try/except to produce a friendly CLI error
  instead of a traceback when not in a git checkout or gh auth fails
- forward milestone_cache_ttl and dependency_patterns to ProjectManager
  so config values aren't silently ignored
- preserve None for deploy_workflow instead of coercing to "None" string

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@trixy-the-ai-bot
Collaborator Author

Review comments addressed.

Three bug fixes were applied and pushed to the branch: Repo.detect() now handles errors gracefully with a friendly message and clean exit instead of a traceback, deploy_workflow correctly preserves None instead of coercing it to the string "None", and milestone_cache_ttl and dependency_patterns are now properly forwarded from config to ProjectManager.



Development

Successfully merging this pull request may close these issues.

Create a project manager role

3 participants