feat(project-manager): add project-manager command (#94)#104
feat(project-manager): add project-manager command (#94)#104trixy-the-ai-bot wants to merge 11 commits into
Conversation
Introduces a higher-level orchestration layer on top of the existing worker pipeline. While the `worker` command reactively handles issues as they move through labelled states, the `project-manager` command proactively selects, prioritises, and promotes issues through the pipeline — ensuring that exactly N issues are always in progress. New files: - loony_dev/prioritiser.py — two-phase prioritisation: heuristic filter (Phase 1) narrows the backlog, then a Claude agent ranks the shortlist by strategic value (Phase 2) with fallback to Phase-1 top pick. - loony_dev/project_manager.py — ProjectManager class: polling loop, pipeline-stage detection, main-branch health-check / circuit-breaker, configurable merge delay, deployment verification, and action deduplication across ticks. Modified files: - loony_dev/github.py — new methods: list_milestones(), list_issues_all() (with caching + invalidation), get_open_pr_issue_numbers(), get_branch_check_runs(), get_default_branch_sha(), get_deployment_run(), get_pr_merged_at(), merge_pull_request(). - loony_dev/cli.py — new `project-manager` sub-command wired to ProjectManager with full --n, --interval, --skip-planning, --skip-merge, --merge-delay, --deploy-workflow, --milestone-soon-days, --shortlist-size, --ai-model, --verbose, --log-file options. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Use `github.name` instead of `github.repo` in log messages inside ProjectManager (Repo exposes `.name`, not `.repo`) - Search PRs with `--state all` in `find_pr_for_issue` so merged PRs are found during post-merge deployment verification Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
TMaYaD
left a comment
There was a problem hiding this comment.
github Repo is turning into god class again. Follow the existing patterns (active record) in the repo.
|
|
||
| # --- Milestones --- | ||
|
|
||
| def list_milestones(self, ttl: float = 3600.0) -> dict[str, datetime | None]: |
There was a problem hiding this comment.
should be repo.milestones.
Caching should be handled through decorators.
| logger.debug("list_milestones() fetched %d milestone(s)", len(result)) | ||
| return result | ||
|
|
||
| def invalidate_milestones_cache(self) -> None: |
There was a problem hiding this comment.
is this the responsibility of repo?
|
|
||
| # --- All-issues cache --- | ||
|
|
||
| def list_issues_all(self, ttl: float = 120.0) -> list[dict]: |
There was a problem hiding this comment.
repo.issues.open. Follow the active record pattern already established.
|
|
||
| # --- Open PR → issue number mapping --- | ||
|
|
||
| def get_open_pr_issue_numbers(self) -> set[int]: |
There was a problem hiding this comment.
[pr.issues for pr in repo.pull_requests].flatten
|
|
||
| # --- Branch health --- | ||
|
|
||
| def get_branch_check_runs(self, sha: str) -> list[dict]: |
| logger.debug("get_branch_check_runs(%r) returned %d run(s)", sha, len(runs)) | ||
| return runs | ||
|
|
||
| def get_default_branch_sha(self) -> str: |
|
|
||
| # --- Deployment workflow --- | ||
|
|
||
| def get_deployment_run(self, workflow: str, after_timestamp: datetime) -> bool: |
There was a problem hiding this comment.
workflow.runs.where(name="deployment", timestamp_is_gt=t)
|
|
||
| # --- PR merging --- | ||
|
|
||
| def get_pr_merged_at(self, pr_number: int) -> datetime | None: |
| logger.warning("Failed to fetch merge state for PR #%d", pr_number) | ||
| return None | ||
|
|
||
| def merge_pull_request(self, pr_number: int, merge_method: str = "squash") -> bool: |
|
|
||
| # --- Compat helpers for project_manager --- | ||
|
|
||
| def list_open_prs(self) -> list[dict]: |
…odels - Add Branch, Milestone, Workflow, WorkflowRun models in dedicated files - Add IssueCollection (repo.issues.open) and PullRequestCollection (repo.pull_requests.open) collection proxies - Add tick_cached and ttl_cached decorators; remove inline cache boilerplate - Add repo.default_branch, repo.milestones as cached properties - Add pr.issues (referenced issue numbers), pr.merged_at, pr.merge(), pr.is_draft to PullRequest - Update project_manager to use model API; update prioritiser to use repo.milestones property Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
|
Review comments addressed. The refactor replaced a 280-line god-class |
TMaYaD
left a comment
There was a problem hiding this comment.
the return types are using primitives and generic dicts.
Use ActiveRecord patterns. Use ObjectClasses and CollectionClasses(inheriting from list may be a good idea) with chainable filters where required.
I have highlighted a few instances. But search across the codebase for similar patterns.
| self._repo = repo | ||
|
|
||
| @property | ||
| def open(self) -> list[dict]: |
There was a problem hiding this comment.
must return list[Issue] or better yet IssueCollection.
| return self._sha | ||
|
|
||
| @property | ||
| def check_runs(self) -> list[dict]: |
There was a problem hiding this comment.
don't return list of generic dicts. use more concrete list like list[CheckRun] or list[WorkflowRun]
| return any(a.get("login", "") == username for a in self.assignees) | ||
|
|
||
| @property | ||
| def issues(self) -> list[int]: |
…bjects - IssueCollection(list): list subclass of Issue instances with .where() chainable filters and .numbers(); replaces list[dict] everywhere - Issue gains created_at and milestone fields; _from_api supports optional content sanitization; fixes milestone due-date field (dueOn) - Branch.check_runs returns list[CheckRun] via new CheckRun.list_all() (all runs, not just failing) - pr.issues returns IssueCollection of stub Issue instances instead of list[int] - WorkflowRunCollection(list): inherits from list; .where() returns WorkflowRunCollection instead of list[WorkflowRun] - prioritiser and project_manager updated to use Issue attributes (.number, .labels, .milestone, .created_at) instead of dict access Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
|
Review comments addressed. Replaced raw |
|
Note Reviews pausedIt looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 WalkthroughWalkthroughA new Changes
Sequence Diagram(s)sequenceDiagram
participant CLI as CLI/User
participant PM as ProjectManager
participant Cache as Tick Cache
participant GitHub as GitHub API
participant Prioritiser as Prioritiser
participant Claude as Claude AI
CLI->>PM: run()
loop Every interval seconds
PM->>Cache: Clear tick cache
PM->>GitHub: Check main branch health
GitHub-->>PM: Health status
alt Branch unhealthy
PM->>PM: Pause promotion/merge
else Branch healthy
PM->>GitHub: Fetch open issues & PRs
GitHub-->>PM: Issue/PR snapshots
PM->>PM: Auto-merge ready PRs (if not skipped)
PM->>GitHub: Check deployment completion
GitHub-->>PM: Deployment status
PM->>PM: Count in-flight issues
loop Until capacity reached
PM->>Prioritiser: select_next(open_issues)
Prioritiser->>Prioritiser: Phase 1: Filter & score candidates
Prioritiser->>Claude: Phase 2: AI rank shortlist
Claude-->>Prioritiser: Ranked issue number
Prioritiser-->>PM: Selected issue
PM->>GitHub: Promote to pipeline
GitHub-->>PM: Confirmation
end
end
PM->>PM: Sleep 1 second
end
sequenceDiagram
participant Prioritiser
participant Issues as Issue Data
participant OpenPRs as Open PRs
participant Body as Issue Body
participant Claude as Claude CLI
Prioritiser->>Issues: Get all open issues
Issues-->>Prioritiser: List of issues
loop For each issue
Prioritiser->>OpenPRs: Check if issue has open PR
alt Has open PR
Prioritiser->>Prioritiser: Exclude (disqualifying)
else No open PR
Prioritiser->>Body: Extract blocking dependencies
alt Has blocking dependencies
Prioritiser->>Prioritiser: Exclude (disqualifying)
else No blockers
Prioritiser->>Prioritiser: Score by milestone, plan marker, labels, age
Prioritiser->>Prioritiser: Add to shortlist
end
end
end
Prioritiser->>Prioritiser: Sort shortlist by score, truncate to size
alt Shortlist has exactly 1 candidate
Prioritiser-->>Prioritiser: Return top candidate, skip Phase 2
else Shortlist has multiple candidates
Prioritiser->>Claude: Send milestone/issue metadata + prompt
Claude-->>Prioritiser: Parse JSON response for issue_number
Prioritiser-->>Prioritiser: Return selected issue or fallback to Phase 1 top
end
Estimated code review effort🎯 4 (Complex) | ⏱️ ~60 minutes Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 4 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches🧪 Generate unit tests (beta)
Comment |
|
Merge conflicts resolved. The merge conflict in |
There was a problem hiding this comment.
Actionable comments posted: 7
🧹 Nitpick comments (6)
loony_dev/github/pull_request.py (1)
164-178: Consider cachingmerged_atresult.The
merged_atproperty makes a fresh API call on every access. If this property is accessed multiple times in a single tick (e.g., for logging and then for deployment verification), it will redundantly query GitHub.💡 Option: Cache on instance after first fetch
+ self._merged_at: datetime | None = None + self._merged_at_fetched: bool = False + `@property` def merged_at(self): """Return the merged-at datetime if this PR has been merged, else ``None``.""" + if self._merged_at_fetched: + return self._merged_at try: output = self._repo.client.gh( "pr", "view", str(self.number), "--json", "state,merged,mergedAt", "-q", ".", ) data = _json.loads(output) if data.get("merged"): - return parse_datetime(data.get("mergedAt")) + self._merged_at = parse_datetime(data.get("mergedAt")) except (subprocess.CalledProcessError, ValueError): logger.warning("Failed to fetch merge state for PR #%d", self.number) - return None + self._merged_at_fetched = True + return self._merged_at🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/github/pull_request.py` around lines 164 - 178, The merged_at property currently queries GitHub every access; change it to cache the fetched value on the instance: add a private attribute (e.g., self._merged_at or self._cached_merged_at) initialized to a sentinel (None or a separate _UNSET) and have merged_at check that attribute first, performing the API call only if the cache is unset, storing the parsed datetime (or None) into the cache before returning; update any error-handling to still set the cache on failure so subsequent accesses don't re-query repeatedly.loony_dev/github/check_run.py (1)
100-105: Error results are tick-cached, unlikelist_failing.When
CalledProcessErroris raised, the code setsruns = []and then caches it at line 104. This means failed API calls will return[]for the entire tick without retrying. The existinglist_failingmethod (lines 71-73) returns immediately on error without caching.If this is intentional (to avoid repeated failing calls within a tick), consider adding a brief comment. If not, move the cache write inside the
tryblock.💡 Option A: Don't cache errors (match list_failing behavior)
except subprocess.CalledProcessError: logger.warning("Failed to fetch all check runs for SHA %r", head_sha) - runs = [] + return [] repo._tick_cache[cache_key] = runs🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/github/check_run.py` around lines 100 - 105, The current CheckRun.list_all error path caches an empty result on subprocess.CalledProcessError (assigning runs = [] then writing repo._tick_cache[cache_key] = runs), causing failed API calls to be suppressed for the whole tick; change this to match list_failing behavior by moving the cache write (repo._tick_cache[cache_key] = runs) inside the try block so caching only occurs on successful responses, and leave the except block to return without writing to repo._tick_cache (or add a short comment if caching on error was intentional). Ensure you update the logic in the list_all function and reference cache_key, repo._tick_cache, and the CalledProcessError except handler when making the change.loony_dev/cli.py (1)
221-226:Repo.detect()called withoutcwdparameter.The
workercommand (line 77) passescwd=str(work_path)toRepo.detect(), butproject-managercalls it without arguments. This meansproject-managerwill always detect from the current working directory, which may be fine for this use case but differs from worker's behavior.If
project-managershould also support--work-dir, consider adding the option for consistency. Otherwise, this is acceptable as-is.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/cli.py` around lines 221 - 226, The code calls Repo.detect() without a cwd like the worker command does; add a --work-dir click option to the project-manager command and pass it to Repo.detect to make behavior consistent: declare a work_dir parameter for the command (matching worker's name/type), call Repo.detect(cwd=str(work_dir)) when repo is None, and use that detected path for github = Repo(repo) so detection occurs relative to the provided work-dir; also update the option help text accordingly.loony_dev/github/branch.py (1)
23-36: Instance-level SHA cache persists across ticks.The
shaproperty caches the result on theBranchinstance (self._sha). If aBranchobject is reused across multiple ticks (e.g., stored in a long-livedRepoproperty), the SHA will become stale as the branch advances.Consider either:
- Using tick-caching via
repo._tick_cache(consistent with other models), or- Documenting that
Branchinstances should be short-lived / recreated each tick.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/github/branch.py` around lines 23 - 36, The Branch.sha property currently stores the commit SHA on the Branch instance (self._sha) which causes stale values across ticks; update Branch.sha to use the repo tick cache instead of an instance-level field: in the sha property's getter (method name: sha on class Branch) read/write the SHA from self._repo._tick_cache under a stable key (e.g. include the branch name) so the value is refreshed each tick, falling back to calling self._repo.client.gh when the cache miss occurs and then storing the result in _tick_cache; if you prefer the alternate approach, remove the instance cache and document that Branch objects must be recreated per tick. Ensure exception handling (subprocess.CalledProcessError) and the existing logger.warning remain when populating the tick cache.loony_dev/github/workflow.py (1)
52-79:runsproperty is not cached.Each access to
Workflow.runsmakes a fresh API call. If deployment verification polls this property multiple times within a tick, consider adding tick-caching similar to other collection properties.However, since deployment runs may complete during a tick, avoiding caching might be intentional for freshness. If so, this is fine as-is.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/github/workflow.py` around lines 52 - 79, The runs property currently always calls the API; add tick-caching inside the Workflow.runs property so repeated accesses within the same tick return a cached WorkflowRunCollection. Implement by storing a small cache (e.g. self._cached_runs and self._cached_runs_tick) on the Workflow instance, compare the current tick value from the repository (e.g. self._repo.tick or equivalent tick provider used by other collection properties) to _cached_runs_tick, and only call self._repo.client.gh_api(...) to refresh and set _cached_runs/_cached_runs_tick when the tick has advanced or cache is empty; otherwise return the cached WorkflowRunCollection. Ensure you still parse entries into WorkflowRun and log as before.loony_dev/prioritiser.py (1)
302-311: Consider input validation and structured formatting for prompt injection defense.The subprocess uses list arguments (mitigating shell injection), but the
promptvariable is constructed from untrusted GitHub issue data (titles, bodies, labels). While the immediate risk is prompt injection rather than command execution, best practices recommend defense-in-depth:
- Input validation: Filter or sanitize issue content for injection patterns before constructing the prompt.
- Structured formatting: Use clear delimiters (e.g., XML tags) to separate system instructions from user-controlled data, signaling to Claude that issue content is data, not directives.
- Output validation: Validate Claude's response against expected schema.
The static analysis flags (S607, S603) are acknowledged; partial path
"claude"is standard for CLI tools, and untrusted input handling is the core concern here. Current implementation is functional but lacks these mitigations, which are straightforward to add.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/prioritiser.py` around lines 302 - 311, The prompt construction currently passes untrusted GitHub issue content into subprocess.run (see prompt variable used in the ["claude", ...] call) without sanitization; before building prompt, add input validation/sanitization for issue titles, bodies and labels (strip or escape control characters, remove unexpected injection tokens like instruction separators), and wrap all user-controlled data in explicit, machine-readable delimiters/tags (e.g., <ISSUE>...</ISSUE>) so system instructions are separate from data; after subprocess.run returns, validate result.stdout against the expected schema/format and reject or re-request parsing if it doesn't conform (use result.returncode and result.stderr for error handling as now).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@loony_dev/github/issue.py`:
- Line 149: The list comprehension using a one-letter variable `l`
(labels=[l["name"] for l in data.get("labels", [])]) triggers Ruff E741; change
the comprehension variable to a clear name like `label` (labels=[label["name"]
for label in data.get("labels", [])]) and update any adjacent references if
present so the codebase passes lint checks.
In `@loony_dev/github/pull_request.py`:
- Around line 135-162: The return annotation IssueCollection on the issues
property is referenced before it's imported at module level, causing runtime
introspection (typing.get_type_hints) to fail; to fix, add IssueCollection (and
Issue if not already) to the module-level TYPE_CHECKING imports block (i.e.,
import IssueCollection from loony_dev.github.issue inside an if TYPE_CHECKING:
...), keep the existing runtime local import inside issues for behavior, and
ensure typing.TYPE_CHECKING is imported at top if missing so annotations can be
resolved during type-checking/introspection.
In `@loony_dev/github/repo.py`:
- Around line 337-352: find_pr_for_issue currently returns the newest created PR
regardless of merge status which can hide an earlier merged PR; update the
method (find_pr_for_issue) to request "state" (and optionally "mergedAt") in the
gh_json --json call, then prefer and return a PR whose state is "MERGED" (or has
a non-null mergedAt) if any exist, otherwise fall back to the newest by
createdAt as before; ensure you still iterate the same search terms but
filter/sort the returned data to select merged PRs first.
In `@loony_dev/prioritiser.py`:
- Around line 54-60: The code that builds the regex after re.escape(pattern) is
incorrectly using escaped.rstrip(r"\#") which removes any trailing backslashes
or hashes individually; change this to explicitly remove the escaped sequence
for a literal hash: after escaped = re.escape(pattern) check if
escaped.endswith(r"\#") and if so set escaped = escaped[:-2] (or otherwise
remove the exact suffix), then append r"\s*#(\d+)" and compile the pattern (the
code around regex = re.compile(... ) in prioritiser.py). Ensure you preserve any
trailing whitespace handling (e.g., call .rstrip() only after removing the exact
'\#' suffix) so find_issue numbers still match as intended.
In `@loony_dev/project_manager.py`:
- Around line 82-105: The constructor stores milestone_cache_ttl but it is never
used when reading milestones, so make the option actually apply by plumbing the
value into the Github/Repo milestone call: update ProjectManager.__init__ to
either pass milestone_cache_ttl into the Repo instance (or store it as
self.milestone_cache_ttl) and then call Repo.milestones(...,
ttl=self.milestone_cache_ttl) wherever milestones are fetched; alternatively add
an optional ttl parameter to Repo.milestones and ensure calls from
ProjectManager (and any methods like fetch_milestones or get_milestones in
project_manager.py) supply self.milestone_cache_ttl so the Repo.milestones cache
TTL overrides the hardcoded 3600s.
- Around line 298-327: The bug is that self._ci_passed_at is keyed only by PR
number so a new head commit inherits the old timer; change the logic to track CI
pass per head SHA (e.g. key by (pr.number, pr.head_sha) or maintain a mapping
from pr.number -> (head_sha, timestamp)) in the block handling successful
checks: when you see pr.head_sha not in the map or different from the stored
SHA, reset/set the timestamp for that head (use self._ci_passed_at with keys
including pr.head_sha or update an auxiliary self._ci_head_sha map), then
compute elapsed using the timestamp tied to the current pr.head_sha so a fresh
push restarts the merge-delay clock.
- Around line 192-202: The snapshot of open PRs is taken into
open_prs/open_pr_by_number before calling _maybe_merge_ready_prs(), causing
stale data when merges happen; move or re-create the snapshot after auto-merges
by calling _maybe_merge_ready_prs(open_prs) first (or call it, then reassign
open_prs = list(self.github.pull_requests.open) and recompute open_pr_by_number
and open_pr_issue_numbers) so that _check_merged_for_deployment(all_issues,
open_pr_by_number) operates on an up-to-date view; update references to
open_prs/open_pr_by_number/open_pr_issue_numbers accordingly.
---
Nitpick comments:
In `@loony_dev/cli.py`:
- Around line 221-226: The code calls Repo.detect() without a cwd like the
worker command does; add a --work-dir click option to the project-manager
command and pass it to Repo.detect to make behavior consistent: declare a
work_dir parameter for the command (matching worker's name/type), call
Repo.detect(cwd=str(work_dir)) when repo is None, and use that detected path for
github = Repo(repo) so detection occurs relative to the provided work-dir; also
update the option help text accordingly.
In `@loony_dev/github/branch.py`:
- Around line 23-36: The Branch.sha property currently stores the commit SHA on
the Branch instance (self._sha) which causes stale values across ticks; update
Branch.sha to use the repo tick cache instead of an instance-level field: in the
sha property's getter (method name: sha on class Branch) read/write the SHA from
self._repo._tick_cache under a stable key (e.g. include the branch name) so the
value is refreshed each tick, falling back to calling self._repo.client.gh when
the cache miss occurs and then storing the result in _tick_cache; if you prefer
the alternate approach, remove the instance cache and document that Branch
objects must be recreated per tick. Ensure exception handling
(subprocess.CalledProcessError) and the existing logger.warning remain when
populating the tick cache.
In `@loony_dev/github/check_run.py`:
- Around line 100-105: The current CheckRun.list_all error path caches an empty
result on subprocess.CalledProcessError (assigning runs = [] then writing
repo._tick_cache[cache_key] = runs), causing failed API calls to be suppressed
for the whole tick; change this to match list_failing behavior by moving the
cache write (repo._tick_cache[cache_key] = runs) inside the try block so caching
only occurs on successful responses, and leave the except block to return
without writing to repo._tick_cache (or add a short comment if caching on error
was intentional). Ensure you update the logic in the list_all function and
reference cache_key, repo._tick_cache, and the CalledProcessError except handler
when making the change.
In `@loony_dev/github/pull_request.py`:
- Around line 164-178: The merged_at property currently queries GitHub every
access; change it to cache the fetched value on the instance: add a private
attribute (e.g., self._merged_at or self._cached_merged_at) initialized to a
sentinel (None or a separate _UNSET) and have merged_at check that attribute
first, performing the API call only if the cache is unset, storing the parsed
datetime (or None) into the cache before returning; update any error-handling to
still set the cache on failure so subsequent accesses don't re-query repeatedly.
In `@loony_dev/github/workflow.py`:
- Around line 52-79: The runs property currently always calls the API; add
tick-caching inside the Workflow.runs property so repeated accesses within the
same tick return a cached WorkflowRunCollection. Implement by storing a small
cache (e.g. self._cached_runs and self._cached_runs_tick) on the Workflow
instance, compare the current tick value from the repository (e.g.
self._repo.tick or equivalent tick provider used by other collection properties)
to _cached_runs_tick, and only call self._repo.client.gh_api(...) to refresh and
set _cached_runs/_cached_runs_tick when the tick has advanced or cache is empty;
otherwise return the cached WorkflowRunCollection. Ensure you still parse
entries into WorkflowRun and log as before.
In `@loony_dev/prioritiser.py`:
- Around line 302-311: The prompt construction currently passes untrusted GitHub
issue content into subprocess.run (see prompt variable used in the ["claude",
...] call) without sanitization; before building prompt, add input
validation/sanitization for issue titles, bodies and labels (strip or escape
control characters, remove unexpected injection tokens like instruction
separators), and wrap all user-controlled data in explicit, machine-readable
delimiters/tags (e.g., <ISSUE>...</ISSUE>) so system instructions are separate
from data; after subprocess.run returns, validate result.stdout against the
expected schema/format and reject or re-request parsing if it doesn't conform
(use result.returncode and result.stderr for error handling as now).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 442673a3-6471-410b-bda5-9376aae5d854
📒 Files selected for processing (11)
loony_dev/cli.pyloony_dev/github/__init__.pyloony_dev/github/branch.pyloony_dev/github/check_run.pyloony_dev/github/issue.pyloony_dev/github/milestone.pyloony_dev/github/pull_request.pyloony_dev/github/repo.pyloony_dev/github/workflow.pyloony_dev/prioritiser.pyloony_dev/project_manager.py
- fix(issue): rename `l` → `label` in _from_api to satisfy Ruff E741 - fix(pull_request): add IssueCollection to TYPE_CHECKING to avoid F821 - fix(repo): prefer merged PRs in find_pr_for_issue to prevent abandoned PRs from shadowing the actual merged PR - fix(repo): honour milestone_cache_ttl instead of hardcoded 3600s; inline TTL check in milestones property using self.milestone_cache_ttl - fix(prioritiser): strip trailing # before re.escape to fix rstrip logic - fix(project_manager): wire milestone_cache_ttl into github.milestone_cache_ttl - fix(project_manager): refresh open-PR snapshot after _maybe_merge_ready_prs so _check_merged_for_deployment sees post-merge state - fix(project_manager): key _ci_passed_at by (head_sha, ts) so a fresh push resets the merge-delay clock Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
|
Review comments addressed. The PR addressed six review comments across five files: fixing a Ruff linter warning ( |
There was a problem hiding this comment.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@loony_dev/github/pull_request.py`:
- Around line 136-163: The issues property currently only scans title and
branch; update the issues property in pull_request.py to also scan the PR body
(self.body) for referenced issue numbers, including both bare "#123" and keyword
patterns like "closes `#123`", "fixes `#123`" (case-insensitive), while preserving
the existing deduplication via seen and appending into numbers; return the same
IssueCollection(Issue(number=n, _repo=self._repo) for n in numbers) so
downstream code (ProjectManager) sees issues referenced in the body as well.
In `@loony_dev/github/repo.py`:
- Around line 325-335: The add_label and remove_label helpers currently swallow
subprocess.CalledProcessError which makes callers like
ProjectManager._promote_to_pipeline and _check_merged_for_deployment believe
labeling succeeded; update add_label and remove_label (methods on the repo
client that call self.client.gh("issue","edit",...)) to not swallow
failures—either remove the try/except so CalledProcessError propagates, or log
and re-raise the exception (or change the signature to return a boolean success
and have the ProjectManager callers check it); ensure
ProjectManager._promote_to_pipeline and _check_merged_for_deployment are updated
to handle a raised exception or check the returned bool and retry/abort
accordingly so label failures are not treated as successes.
In `@loony_dev/project_manager.py`:
- Around line 316-344: The merge-delay timer isn't reset when checks regress on
the same SHA: in the branches that early-continue for entry.all_completed ==
False or check_runs (failing checks), if self._ci_passed_at contains an entry
for this pr and its SHA equals pr.head_sha you must clear that stored timestamp
so a later successful run of the same SHA will restart the merge_delay; update
the blocks handling entry.all_completed and check_runs to check
self._ci_passed_at.get(pr.number) and if stored and stored[0] == pr.head_sha
then del self._ci_passed_at[pr.number] and
self._actions_taken.discard(f"merge-delay:{pr.number}") before continuing (use
symbols _ci_passed_at, _actions_taken, pr.number, pr.head_sha,
entry.all_completed, check_runs, merge_delay to locate the code).
- Around line 263-281: In _count_in_flight, skip any issue that has the "done"
label so completed items don’t occupy pipeline slots: when computing labels for
each issue in _count_in_flight check if "done" is in labels and continue (do not
increment) before applying the existing _PIPELINE_LABELS logic; similarly, when
adding self._merged_issue_prs only count merged issues that are not labeled
"done" (e.g. filter self._merged_issue_prs by looking up the corresponding Issue
in all_issues and excluding those whose labels contain "done") so
merged-but-complete items aren’t counted as in-flight.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 9e8dba46-3994-4844-b624-5cfec2acb0c2
📒 Files selected for processing (5)
loony_dev/github/issue.pyloony_dev/github/pull_request.pyloony_dev/github/repo.pyloony_dev/prioritiser.pyloony_dev/project_manager.py
- fix(pull_request): scan PR body for issue references (Closes #N, Fixes #N, bare #N) so PRs using body-only links are visible to ProjectManager - fix(pull_request): add body field to PullRequest model and JSON fetches; fix label comprehension variable l → label (Ruff E741) - fix(repo): re-raise in add_label/remove_label so callers can detect transient failures and retry instead of silently treating them as success - fix(project_manager): handle add_label failures in _promote_to_pipeline and _check_merged_for_deployment (retry next tick on error) - fix(project_manager): skip done-labelled issues in _count_in_flight so completed issues don't occupy pipeline slots indefinitely - fix(project_manager): reset _ci_passed_at when checks regress (not-yet- complete or failing) so the merge-delay clock restarts on re-green Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
|
Review comments addressed. Four review comments were addressed across |
There was a problem hiding this comment.
Actionable comments posted: 1
🧹 Nitpick comments (4)
loony_dev/project_manager.py (2)
279-281: Redundant condition on line 280.The check
not (labels & _PIPELINE_LABELS)is always true at this point because line 276-278 alreadycontinues when pipeline labels are present.Simplify
# PR open with no pipeline label (implementation submitted, awaiting merge) - if issue.number in open_pr_issue_numbers and not (labels & _PIPELINE_LABELS): + if issue.number in open_pr_issue_numbers: count += 1🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/project_manager.py` around lines 279 - 281, Redundant boolean check: inside the loop that builds the PR count (uses open_pr_issue_numbers, labels, _PIPELINE_LABELS, count, issue.number) you can remove the unnecessary "not (labels & _PIPELINE_LABELS)" clause because earlier code already continues when pipeline labels are present; simplify the condition to just check "if issue.number in open_pr_issue_numbers: count += 1" (or equivalent) so the branch is clearer and the redundant bitwise label test is dropped.
404-416: Consider handlingpost_commentfailure.If
add_labelsucceeds butpost_commentraises,action_keyis added but the confirmation comment is missing. The issue will still be marked done, so this is low-impact, but users won't see the confirmation message.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/project_manager.py` around lines 404 - 416, The code currently adds the action key then calls self.github.post_comment, so if post_comment fails the issue is marked done but the confirmation comment is missing; change the order and error handling: call self.github.add_label(issue_number, _DONE_LABEL), then attempt self.github.post_comment(...) inside a try/except (catch subprocess.CalledProcessError or a broad Exception if post_comment can raise others), only add self._actions_taken.add(action_key) after post_comment succeeds, and on exception log a warning referencing PM_MARKER and issue_number and leave the action key out so the operation will be retried on the next tick; still call self.github.issues.invalidate() as appropriate (either on success or failure) to refresh state.loony_dev/github/repo.py (2)
339-342: Consider consistent error handling forpost_comment.Unlike
add_label/remove_label,post_commentdoesn't catch and logsubprocess.CalledProcessErrorbefore propagating. If this inconsistency is intentional (comment failures should halt operations), it's fine—otherwise consider aligning the error handling pattern.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/github/repo.py` around lines 339 - 342, post_comment currently calls self.client.gh(...) without catching subprocess.CalledProcessError, unlike add_label/remove_label; wrap the gh invocation inside a try/except that catches subprocess.CalledProcessError, log an error via logger.error including the exception and the truncated body (use truncate_for_log) and then either re-raise or return/handle consistently with add_label/remove_label's behavior so error handling is aligned with the other label methods.
91-111:ttl_cacheddecorator is defined but unused.The
milestonesproperty manually implements TTL caching to support instance-levelmilestone_cache_ttlconfiguration. Consider removing this decorator if no other methods will use it, or document why it exists for future use.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/github/repo.py` around lines 91 - 111, The ttl_cached decorator is dead code because the milestones property implements its own TTL using milestone_cache_ttl and self._ttl_cache; either remove the unused function ttl_cached or refactor milestones to reuse it: if removing, delete the ttl_cached definition and any imports (functools, time) only used by it; if reusing, change the milestones method to use `@ttl_cached`(default_ttl=self.milestone_cache_ttl) or adapt ttl_cached to accept a callable/default resolver so it reads milestone_cache_ttl from the instance at call time, ensuring references to ttl_cached, milestones, milestone_cache_ttl and self._ttl_cache are updated consistently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@loony_dev/github/repo.py`:
- Around line 79-87: The wrapper currently treats cached None as a cache miss
because it checks `if cached is not None`; change the logic to check presence of
the key in the cache instead. In the `wrapper` function that wraps `method`,
replace the `cached = self._tick_cache.get(key)` / `if cached is not None`
branch with a presence check like `if key in self._tick_cache: return
self._tick_cache[key]`, then call `result = method(self, *args)` and always
assign `self._tick_cache[key] = result` (so legitimate None results are cached).
---
Nitpick comments:
In `@loony_dev/github/repo.py`:
- Around line 339-342: post_comment currently calls self.client.gh(...) without
catching subprocess.CalledProcessError, unlike add_label/remove_label; wrap the
gh invocation inside a try/except that catches subprocess.CalledProcessError,
log an error via logger.error including the exception and the truncated body
(use truncate_for_log) and then either re-raise or return/handle consistently
with add_label/remove_label's behavior so error handling is aligned with the
other label methods.
- Around line 91-111: The ttl_cached decorator is dead code because the
milestones property implements its own TTL using milestone_cache_ttl and
self._ttl_cache; either remove the unused function ttl_cached or refactor
milestones to reuse it: if removing, delete the ttl_cached definition and any
imports (functools, time) only used by it; if reusing, change the milestones
method to use `@ttl_cached`(default_ttl=self.milestone_cache_ttl) or adapt
ttl_cached to accept a callable/default resolver so it reads milestone_cache_ttl
from the instance at call time, ensuring references to ttl_cached, milestones,
milestone_cache_ttl and self._ttl_cache are updated consistently.
In `@loony_dev/project_manager.py`:
- Around line 279-281: Redundant boolean check: inside the loop that builds the
PR count (uses open_pr_issue_numbers, labels, _PIPELINE_LABELS, count,
issue.number) you can remove the unnecessary "not (labels & _PIPELINE_LABELS)"
clause because earlier code already continues when pipeline labels are present;
simplify the condition to just check "if issue.number in open_pr_issue_numbers:
count += 1" (or equivalent) so the branch is clearer and the redundant bitwise
label test is dropped.
- Around line 404-416: The code currently adds the action key then calls
self.github.post_comment, so if post_comment fails the issue is marked done but
the confirmation comment is missing; change the order and error handling: call
self.github.add_label(issue_number, _DONE_LABEL), then attempt
self.github.post_comment(...) inside a try/except (catch
subprocess.CalledProcessError or a broad Exception if post_comment can raise
others), only add self._actions_taken.add(action_key) after post_comment
succeeds, and on exception log a warning referencing PM_MARKER and issue_number
and leave the action key out so the operation will be retried on the next tick;
still call self.github.issues.invalidate() as appropriate (either on success or
failure) to refresh state.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 0555a671-25a6-4a78-8e92-a1736f55334f
📒 Files selected for processing (3)
loony_dev/github/pull_request.pyloony_dev/github/repo.pyloony_dev/project_manager.py
✅ Files skipped from review due to trivial changes (1)
- loony_dev/github/pull_request.py
Methods that legitimately return None were not being cached — the old `if cached is not None` check treated them as a miss and re-invoked the method on every access within the same tick. Switch to `key in self._tick_cache` so any result, including None, is cached after the first call. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
|
Review comments addressed. Fixed a bug in the |
There was a problem hiding this comment.
🧹 Nitpick comments (1)
loony_dev/github/repo.py (1)
343-346: Consider adding a more specific return type hint.The return type
-> listcould be more specific as-> list[Comment]for better IDE support and type safety. This requires importingCommentin aTYPE_CHECKINGblock to avoid circular imports at runtime.♻️ Proposed fix
Add to imports at top of file:
from typing import TYPE_CHECKING if TYPE_CHECKING: from loony_dev.github.comment import CommentThen update the method signature:
- def get_issue_comments(self, number: int) -> list: + def get_issue_comments(self, number: int) -> list[Comment]:🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@loony_dev/github/repo.py` around lines 343 - 346, Add a precise return type for get_issue_comments by changing its annotation to return list[Comment]; to avoid runtime circular imports, import TYPE_CHECKING from typing and put "from loony_dev.github.comment import Comment" inside an "if TYPE_CHECKING:" block at the top of the module, then update the get_issue_comments signature to -> list[Comment] and keep the runtime import inside the method as-is.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@loony_dev/github/repo.py`:
- Around line 343-346: Add a precise return type for get_issue_comments by
changing its annotation to return list[Comment]; to avoid runtime circular
imports, import TYPE_CHECKING from typing and put "from loony_dev.github.comment
import Comment" inside an "if TYPE_CHECKING:" block at the top of the module,
then update the get_issue_comments signature to -> list[Comment] and keep the
runtime import inside the method as-is.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: bb60130a-c836-4be1-b560-3733e1b5c42a
📒 Files selected for processing (1)
loony_dev/github/repo.py
|
Merge conflicts resolved. Merge conflicts on the |
There was a problem hiding this comment.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@loony_dev/cli.py`:
- Line 244: The deploy_workflow value is being forced to a string which converts
None to "None" and breaks ProjectManager.__init__'s expected str | None; change
the call that sets deploy_workflow (the expression deploying
deploy_workflow=str(pm_cfg.get("deploy_workflow",
config.settings.deploy_workflow))) to preserve None instead of coercing it—i.e.,
pass the raw value from pm_cfg.get (or explicitly check for None and only call
str() for non-None values) so ProjectManager receives either a real string or
None as intended.
- Around line 237-248: The ProjectManager constructor call is missing two
supported settings (milestone_cache_ttl and dependency_patterns) so values from
project_manager config are ignored; update the call that creates ProjectManager
to forward these two arguments from pm_cfg using the same pattern as the other
options (e.g. milestone_cache_ttl=int(pm_cfg.get("milestone_cache_ttl",
config.settings.milestone_cache_ttl)) and
dependency_patterns=pm_cfg.get("dependency_patterns",
config.settings.dependency_patterns)) so ProjectManager.__init__ receives both
values.
- Around line 225-228: Repo autodetection via Repo.detect() can raise
subprocess.CalledProcessError and currently bubbles up causing a traceback; wrap
the call that assigns repo when config.settings.repo is None (the Repo.detect()
call near repo = config.settings.repo) in a try/except that catches
subprocess.CalledProcessError (ensure subprocess is imported) and convert it
into a CLI-friendly error (e.g., raise click.ClickException or click.echo an
explanatory message and exit with non‑zero status) instructing the user to pass
--repo instead of letting the exception propagate.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: a7e546d9-9c9b-4e88-89d1-493bc8fa808e
📒 Files selected for processing (3)
loony_dev/cli.pyloony_dev/github/issue.pyloony_dev/github/pull_request.py
🚧 Files skipped from review as they are similar to previous changes (1)
- loony_dev/github/issue.py
- wrap Repo.detect() in try/except to produce a friendly CLI error instead of a traceback when not in a git checkout or gh auth fails - forward milestone_cache_ttl and dependency_patterns to ProjectManager so config values aren't silently ignored - preserve None for deploy_workflow instead of coercing to "None" string Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
|
Review comments addressed. Three bug fixes were applied and pushed to the branch: |
Summary
loony-dev project-managercommand that acts as a higher-level orchestration layer on top of the existing worker pipelinelist_milestones(),list_issues_all(),find_pr_for_issue(),merge_pull_request(),get_deployment_run(),get_pr_merged_at(), and related caching helpers toloony_dev/github/repo.pyloony_dev/prioritiser.pywithPrioritiserclass and full heuristic + AI ranking logicloony_dev/project_manager.pywithProjectManagerpolling looploony_dev/cli.pywith theproject-managersub-command and all configuration optionsfind_pr_for_issueto search--state all(not just open PRs) so merged PRs are found during deployment verificationself.github.repo→self.github.nameCloses #94
Test plan
loony-dev project-manager --helpshows all optionspython -c "from loony_dev.project_manager import ProjectManager; print('OK')"passespython -c "from loony_dev.prioritiser import Prioritiser; print('OK')"passes--n 1 --interval 30 --skip-mergeand verify an issue is labelledready-for-planningorready-for-development🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
New Features
project-managerCLI command for automated GitHub repository project orchestration with configurable polling intervals and concurrency controls.Bug Fixes