diff --git a/.claude/agents/skill-pr-addresser.md b/.claude/agents/skill-pr-addresser.md new file mode 100644 index 0000000..b366c6f --- /dev/null +++ b/.claude/agents/skill-pr-addresser.md @@ -0,0 +1,217 @@ +--- +description: Address PR review feedback for skills, continuing development until approved +tools: [Read, Write, Edit, Bash, Glob, Grep, Task] +--- + +# Skill PR Addresser + +Address PR review feedback for skills, continuing development until the PR is approved. + +## Overview + +This agent picks up where `skill-reviewer` leaves off. When a PR has review feedback requesting changes, this agent: + +1. **Discovers** PRs with pending feedback (blocking reviews, unresolved threads) +2. **Analyzes** the feedback using the `feedback-analyzer` sub-agent +3. **Fixes** issues using the `feedback-fixer` sub-agent with model escalation +4. **Commits** and pushes changes to the PR branch +5. **Comments** on the PR summarizing what was addressed +6. **Requests re-review** when all feedback is addressed +7. **Repeats** until approved or max iterations reached + +## Usage + +### Address feedback on a specific PR + +```bash +just -f .claude/agents/skill-pr-addresser/justfile address 795 +``` + +### Check status of a PR + +```bash +just -f .claude/agents/skill-pr-addresser/justfile status 795 +``` + +### With options + +```bash +# Dry run - show what would be done without making changes +just -f .claude/agents/skill-pr-addresser/justfile address-dry 795 + +# Specify skill explicitly (instead of auto-detecting from changed files) +just -f .claude/agents/skill-pr-addresser/justfile address-skill 795 components/skills/lang-rust-dev + +# Force addressing even if no pending feedback detected +just -f .claude/agents/skill-pr-addresser/justfile address 795 --force +``` + +### Session management + +```bash +# List active sessions +just -f .claude/agents/skill-pr-addresser/justfile sessions + +# List all sessions (including completed) +just -f .claude/agents/skill-pr-addresser/justfile sessions-all +``` + +## Sub-Agents + +| Agent | Model | Purpose | +| ------------------- | ---------- | ------------------------------------ | +| `feedback-analyzer` | Haiku 3.5 | Parse and categorize feedback items | +| `feedback-fixer` | Sonnet 4 | Implement fixes in skill files | + +### Model Escalation + +The fixer uses automatic model escalation: +- **Simple nitpicks** (2 or fewer): Haiku 3.5 +- **Complex changes**: Sonnet 4 + +## Architecture + +``` +skill-pr-addresser/ +├── skill-pr-addresser.md # This file (discovered by Claude Code) +├── main.py # CLI entry point +├── justfile # Task runner recipes +├── pyproject.toml # Python package config +├── src/ +│ ├── app.py # Cement CLI application +│ ├── discovery.py # PR and session discovery +│ ├── github_pr.py # GitHub PR operations +│ ├── feedback.py # Feedback analysis/fixing +│ ├── addresser.py # Main orchestration loop +│ ├── templates.py # Mustache template rendering +│ └── exceptions.py # Custom exceptions +├── subagents/ +│ ├── feedback-analyzer/ # Analyzes feedback into items +│ │ ├── prompt.md +│ │ └── config.yml +│ └── feedback-fixer/ # Implements fixes +│ ├── prompt.md +│ └── config.yml +├── templates/ +│ ├── iteration_comment.hbs # PR comment for each iteration +│ ├── ready_comment.hbs # "Ready for review" comment +│ └── skipped_feedback.hbs # Skipped items summary +├── data/ +│ ├── config.json # Runtime configuration +│ └── sessions/ # Session state (gitignored) +└── tests/ + └── *.py # Unit tests +``` + +### Shared Library + +Uses `skill-agents-common` for: 
+- GitHub operations (`github_ops.py`) +- Worktree management (`worktree.py`) +- Session tracking (`session.py`, `models.py`) + +### Worktree Reuse + +Reuses worktrees created by `skill-reviewer` when available: +- Pattern: `/private/tmp/worktrees//issue-/` +- Uses `get_or_create_worktree()` to find or create + +### Session Continuity + +Links to original `skill-reviewer` session: +- Finds by issue number: `find_session_by_issue()` +- Finds by PR number: `find_session_by_pr()` +- Creates new if needed: `create_session_from_pr()` +- Tracks iterations and results in session data + +## Feedback Loop + +``` +┌──────────────────────────────────────────────────────────────────┐ +│ Feedback Addressing Loop │ +├──────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │ +│ │ Discovery │───▶│ Analyzer │───▶│ Fixer (Haiku/ │ │ +│ │ (gh CLI) │ │ (Haiku) │ │ Sonnet) │ │ +│ └─────────────┘ └─────────────┘ └─────────────────────┘ │ +│ │ │ │ +│ │ ┌──────────────────────┘ │ +│ │ │ │ +│ │ ▼ │ +│ │ ┌─────────────────┐ │ +│ │ │ Commit & Push │ │ +│ │ │ (git CLI) │ │ +│ │ └─────────────────┘ │ +│ │ │ │ +│ │ ▼ │ +│ │ ┌─────────────────┐ │ +│ │ │ Post Comment │ │ +│ │ │ (gh CLI) │ │ +│ │ └─────────────────┘ │ +│ │ │ │ +│ │ ▼ │ +│ │ ┌─────────────────┐ │ +│ └────────▶│ Check Complete? │◀───────────────────────────│ +│ │ success >= 90% │ │ +│ └─────────────────┘ │ +│ │ │ +│ Yes │ No (iterate) │ +│ ┌───────┴───────┐ │ +│ ▼ ▼ │ +│ ┌────────────────┐ (loop) │ +│ │ Request Review │ │ +│ │ (gh CLI) │ │ +│ └────────────────┘ │ +│ │ +└──────────────────────────────────────────────────────────────────┘ +``` + +## Configuration + +### Environment Config + +See `config/skill-pr-addresser.conf`: + +```ini +[skill-pr-addresser] +repo_owner = aRustyDev +repo_name = ai +max_iterations = 3 +rate_limit_delay = 1.0 +worktree_base = /private/tmp/worktrees + +[otel] +enabled = false +endpoint = http://localhost:4317 +service_name = skill-pr-addresser +``` + +### Runtime Config + +Edit `data/config.json` for batch settings: + +```json +{ + "review_labels": ["review", "skills"], + "max_parallel": 3, + "max_cost_per_pr": 2.0 +} +``` + +## Cost Estimation + +Per PR addressing cycle (typical): + +| Model | Calls | Cost/call | Total | +| ---------- | ----- | --------- | ------- | +| Haiku 3.5 | 1-2 | ~$0.02 | ~$0.04 | +| Sonnet 4 | 1-2 | ~$0.35 | ~$0.70 | +| **Total** | | | ~$0.75 | + +Complex PRs with multiple iterations: ~$1-2 + +## Related + +- **skill-reviewer**: Creates the initial PR from issues +- **skill-agents-common**: Shared library for both agents diff --git a/.claude/agents/skill-pr-addresser/README.md b/.claude/agents/skill-pr-addresser/README.md new file mode 100644 index 0000000..30d728c --- /dev/null +++ b/.claude/agents/skill-pr-addresser/README.md @@ -0,0 +1,251 @@ +# Skill PR Addresser + +An automated agent that addresses PR review feedback for skill files, continuing development until the PR is approved. 
+ +## Quick Start + +### Prerequisites + +- Python 3.11+ +- Claude CLI installed and authenticated +- `gh` CLI authenticated +- `uv` package manager + +### Install Dependencies + +```bash +cd .claude/agents/skill-pr-addresser +uv sync +``` + +### Address PR Feedback + +```bash +# Address feedback on PR #795 +just -f .claude/agents/skill-pr-addresser/justfile address 795 + +# Dry run (see what would be done) +just -f .claude/agents/skill-pr-addresser/justfile address-dry 795 + +# Check current status +just -f .claude/agents/skill-pr-addresser/justfile status 795 +``` + +## Commands + +| Command | Description | +|---------|-------------| +| `just address ` | Address feedback on a PR | +| `just address-dry ` | Dry run - show what would be done | +| `just address-skill ` | Address with explicit skill path | +| `just status ` | Check PR addressing status | +| `just sessions` | List active sessions | +| `just sessions-all` | List all sessions (including completed) | +| `just batch [--label X]` | Address all PRs with pending feedback | + +## How It Works + +### 1. Discovery + +Finds PRs with pending feedback: +- Reviews requesting changes +- Unresolved review threads +- Unresolved comments + +### 2. Analysis + +The `feedback-analyzer` sub-agent (Haiku 3.5) extracts structured feedback items: + +```json +{ + "feedback_items": [ + { + "id": "thread-123", + "type": "change_request", + "file": "SKILL.md", + "line": 42, + "description": "Add example for Result", + "priority": "high" + } + ] +} +``` + +### 3. Fixing + +The `feedback-fixer` sub-agent implements changes with model escalation: +- **Simple nitpicks**: Haiku 3.5 (fast, cheap) +- **Complex changes**: Sonnet 4 (thorough) + +### 4. Commit & Push + +Changes are committed with a conventional commit message: + +``` +fix(skills): address PR #795 feedback (iteration 1) + +### Changed +- Addressed 3 feedback items +- Modified files: SKILL.md, examples/error.md +``` + +### 5. Comment & Re-review + +Posts an iteration summary comment and requests re-review when complete. 
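+
+The escalation rule from step 3 can be sketched in a few lines. This is an
+illustration only: the real selection happens inside `fix_with_escalation` in
+`src/feedback.py`, and the nitpick test and model names below are assumptions,
+not the agent's actual configuration.
+
+```python
+def pick_fixer_model(feedback_items: list[dict]) -> str:
+    """Illustrative sketch: Haiku for a couple of nitpicks, Sonnet otherwise."""
+    # Assumption: low-priority items count as "simple nitpicks".
+    nitpicks = [item for item in feedback_items if item.get("priority") == "low"]
+    if feedback_items and len(feedback_items) <= 2 and len(nitpicks) == len(feedback_items):
+        return "haiku-3.5"  # fast, cheap
+    return "sonnet-4"  # thorough
+```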
+ +## Configuration + +### Cement Config (`config/skill-pr-addresser.conf`) + +```ini +[skill-pr-addresser] +repo_owner = aRustyDev +repo_name = ai +max_iterations = 3 +rate_limit_delay = 1.0 +worktree_base = /private/tmp/worktrees + +[otel] +enabled = false +endpoint = http://localhost:4317 +``` + +### Runtime Config (`data/config.json`) + +```json +{ + "repo_owner": "aRustyDev", + "repo_name": "ai", + "review_labels": ["review", "skills"], + "max_parallel": 3, + "max_cost_per_pr": 2.0 +} +``` + +## Architecture + +``` +skill-pr-addresser/ +├── main.py # CLI entry point +├── justfile # Task runner +├── pyproject.toml # Package config +├── src/ +│ ├── app.py # Cement application +│ ├── discovery.py # PR/session discovery +│ ├── github_pr.py # GitHub operations +│ ├── feedback.py # Analysis/fixing logic +│ ├── addresser.py # Main orchestration +│ ├── templates.py # Mustache rendering +│ └── exceptions.py # Error types +├── subagents/ +│ ├── feedback-analyzer/ # Parses feedback (Haiku) +│ └── feedback-fixer/ # Implements fixes (Sonnet) +├── templates/ +│ ├── iteration_comment.hbs +│ ├── ready_comment.hbs +│ └── skipped_feedback.hbs +├── data/ +│ ├── config.json # Runtime config +│ └── sessions/ # Session data +└── tests/ + ├── test_addresser.py + ├── test_discovery.py + ├── test_feedback.py + ├── test_github_pr.py + └── test_templates.py +``` + +## Shared Library + +Uses `skill-agents-common` shared by both agents: + +| Module | Purpose | +|--------|---------| +| `github_ops.py` | GitHub API wrappers | +| `worktree.py` | Git worktree management | +| `session.py` | Session persistence | +| `models.py` | Shared data classes | + +## Session Management + +Sessions track addressing progress across iterations: + +```bash +# List sessions +just sessions + +# View session details +cat data/sessions//session.json | jq . +``` + +Session states: +- `init` - Just created +- `analysis` - Analyzing feedback +- `fixing` - Implementing fixes +- `complete` - Successfully addressed +- `failed` - Could not address + +## Cost Estimation + +Per PR (typical): + +| Model | Usage | Cost | +|-------|-------|------| +| Haiku 3.5 | 1-2 analysis calls | ~$0.04 | +| Sonnet 4 | 1-2 fix calls | ~$0.70 | +| **Total** | | **~$0.75** | + +Complex PRs with multiple iterations: ~$1-2 + +## Development + +### Run Tests + +```bash +just -f .claude/agents/skill-pr-addresser/justfile test +just -f .claude/agents/skill-pr-addresser/justfile test-cov +``` + +### Lint/Format + +```bash +just -f .claude/agents/skill-pr-addresser/justfile lint +just -f .claude/agents/skill-pr-addresser/justfile fmt +``` + +### Verify Installation + +```bash +just -f .claude/agents/skill-pr-addresser/justfile verify +``` + +## Troubleshooting + +### "PR has no pending feedback" + +The PR doesn't have any blocking reviews or unresolved threads. Use `--force` to run anyway: + +```bash +just address 795 --force +``` + +### "PR is already merged/closed" + +The agent only works on open PRs. Check the PR state: + +```bash +gh pr view 795 --json state +``` + +### "Could not infer skill path" + +The changed files don't match `components/skills/*`. 
Specify explicitly: + +```bash +just address-skill 795 components/skills/lang-rust-dev +``` + +## Related + +- [skill-reviewer](../skill-reviewer/) - Creates initial PRs from issues +- [skill-agents-common](../skill-agents-common/) - Shared library diff --git a/.claude/agents/skill-pr-addresser/__init__.py b/.claude/agents/skill-pr-addresser/__init__.py new file mode 100644 index 0000000..d85d9b1 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/__init__.py @@ -0,0 +1,3 @@ +"""Skill PR Addresser - Address PR review feedback for skills.""" + +__version__ = "0.1.0" diff --git a/.claude/agents/skill-pr-addresser/__main__.py b/.claude/agents/skill-pr-addresser/__main__.py new file mode 100644 index 0000000..0fa9b51 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/__main__.py @@ -0,0 +1,6 @@ +"""Allow running as python -m skill_pr_addresser.""" + +from .main import main + +if __name__ == "__main__": + main() diff --git a/.claude/agents/skill-pr-addresser/config.toml b/.claude/agents/skill-pr-addresser/config.toml new file mode 100644 index 0000000..d379582 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/config.toml @@ -0,0 +1,15 @@ +# skill-pr-addresser configuration + +[skill-pr-addresser] +repo_owner = "aRustyDev" +repo_name = "ai" +max_iterations = 3 +rate_limit_delay = 1.0 +worktree_base = "/private/tmp/worktrees" + +[otel] +enabled = true +# gRPC endpoint (no http:// prefix for gRPC) +endpoint = "localhost:4317" +service_name = "skill-pr-addresser" +version = "0.1.0" diff --git a/.claude/agents/skill-pr-addresser/config/skill-pr-addresser.conf b/.claude/agents/skill-pr-addresser/config/skill-pr-addresser.conf new file mode 100644 index 0000000..77128e3 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/config/skill-pr-addresser.conf @@ -0,0 +1,26 @@ +# Skill PR Addresser Configuration +# This file provides default configuration for the skill-pr-addresser CLI. 
+ +[skill-pr-addresser] +# GitHub repository settings +repo_owner = aRustyDev +repo_name = ai + +# Iteration limits +max_iterations = 3 + +# Rate limiting (seconds between API calls) +rate_limit_delay = 1.0 + +# Worktree base directory +worktree_base = /private/tmp/worktrees + +# Base branch for worktrees +base_branch = main + +[otel] +# OpenTelemetry settings for observability +enabled = false +endpoint = http://localhost:4317 +service_name = skill-pr-addresser +version = 0.1.0 diff --git a/.claude/agents/skill-pr-addresser/justfile b/.claude/agents/skill-pr-addresser/justfile new file mode 100644 index 0000000..0bf5cce --- /dev/null +++ b/.claude/agents/skill-pr-addresser/justfile @@ -0,0 +1,121 @@ +# Skill PR Addresser - Justfile +# Run: just -f .claude/agents/skill-pr-addresser/justfile + +set shell := ["bash", "-cu"] + +# Default recipe: show help +default: + @just -f {{justfile()}} --list + +# Agent directory +agent_dir := justfile_directory() + +# Use uv for dependency management +python := "uv run python" + +# ───────────────────────────────────────────────────────────────────────────── +# Main Commands +# ───────────────────────────────────────────────────────────────────────────── + +# Address review feedback on a PR +address pr_number *FLAGS: + cd "{{agent_dir}}" && {{python}} main.py address {{pr_number}} {{FLAGS}} + +# Address with specific skill path +address-skill pr_number skill *FLAGS: + cd "{{agent_dir}}" && {{python}} main.py address {{pr_number}} --skill {{skill}} {{FLAGS}} + +# Address in dry-run mode +address-dry pr_number *FLAGS: + cd "{{agent_dir}}" && {{python}} main.py address {{pr_number}} --dry-run {{FLAGS}} + +# Check addressing status for a PR +status pr_number: + cd "{{agent_dir}}" && {{python}} main.py status {{pr_number}} + +# ───────────────────────────────────────────────────────────────────────────── +# Batch Processing +# ───────────────────────────────────────────────────────────────────────────── + +# Find PRs with pending feedback +find *FLAGS: + cd "{{agent_dir}}" && {{python}} main.py find {{FLAGS}} + +# Address all PRs with pending feedback +batch *FLAGS: + cd "{{agent_dir}}" && {{python}} main.py batch {{FLAGS}} + +# Address all PRs with pending feedback (dry run) +batch-dry *FLAGS: + cd "{{agent_dir}}" && {{python}} main.py batch --dry-run {{FLAGS}} + +# Address PRs with specific label +batch-label label *FLAGS: + cd "{{agent_dir}}" && {{python}} main.py batch --label {{label}} {{FLAGS}} + +# ───────────────────────────────────────────────────────────────────────────── +# Session Management +# ───────────────────────────────────────────────────────────────────────────── + +# List all sessions +sessions: + cd "{{agent_dir}}" && {{python}} main.py sessions + +# List all sessions including completed +sessions-all: + cd "{{agent_dir}}" && {{python}} main.py sessions --all + +# ───────────────────────────────────────────────────────────────────────────── +# Development +# ───────────────────────────────────────────────────────────────────────────── + +# Install dependencies +install: + pip install -e "{{agent_dir}}[dev]" + +# Run tests +test *FLAGS: + pytest "{{agent_dir}}/tests" {{FLAGS}} + +# Run tests with coverage +test-cov: + pytest "{{agent_dir}}/tests" --cov="{{agent_dir}}/src" --cov-report=term-missing + +# Lint code +lint: + ruff check "{{agent_dir}}/src" "{{agent_dir}}/tests" + +# Format code +fmt: + ruff format "{{agent_dir}}/src" "{{agent_dir}}/tests" + +# Show help +help: + cd "{{agent_dir}}" && {{python}} main.py --help + +# 
───────────────────────────────────────────────────────────────────────────── +# Debugging +# ───────────────────────────────────────────────────────────────────────────── + +# Address PR with sub-agent output streaming (recommended for debugging) +address-stream pr_number *FLAGS: + cd "{{agent_dir}}" && {{python}} main.py address {{pr_number}} --stream {{FLAGS}} + +# Address PR in TUI mode (watch-only, workflow won't complete) +address-interactive pr_number *FLAGS: + cd "{{agent_dir}}" && {{python}} main.py address {{pr_number}} --interactive {{FLAGS}} + +# Show config +config: + @cat "{{agent_dir}}/config/skill-pr-addresser.conf" + +# Verify CLI works +verify: + @echo "Checking CLI..." + @cd "{{agent_dir}}" && {{python}} main.py --help > /dev/null && echo "✓ CLI works" + @cd "{{agent_dir}}" && {{python}} main.py address --help > /dev/null && echo "✓ address command works" + @cd "{{agent_dir}}" && {{python}} main.py status --help > /dev/null && echo "✓ status command works" + @cd "{{agent_dir}}" && {{python}} main.py find --help > /dev/null && echo "✓ find command works" + @cd "{{agent_dir}}" && {{python}} main.py batch --help > /dev/null && echo "✓ batch command works" + @cd "{{agent_dir}}" && {{python}} main.py sessions --help > /dev/null && echo "✓ sessions command works" + @echo "All checks passed!" diff --git a/.claude/agents/skill-pr-addresser/main.py b/.claude/agents/skill-pr-addresser/main.py new file mode 100644 index 0000000..46e6fb1 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/main.py @@ -0,0 +1,22 @@ +#!/usr/bin/env python3 +"""CLI entry point for skill-pr-addresser.""" + +import sys +from pathlib import Path + +# Add the agent directory to path for imports +agent_dir = Path(__file__).parent +if str(agent_dir) not in sys.path: + sys.path.insert(0, str(agent_dir)) + +from src.app import SkillPRAddresser + + +def main(): + """Run the skill-pr-addresser CLI.""" + with SkillPRAddresser() as app: + app.run() + + +if __name__ == "__main__": + main() diff --git a/.claude/agents/skill-pr-addresser/pyproject.toml b/.claude/agents/skill-pr-addresser/pyproject.toml new file mode 100644 index 0000000..b654e10 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/pyproject.toml @@ -0,0 +1,51 @@ +[project] +name = "skill-pr-addresser" +version = "0.1.0" +description = "Address PR review feedback for skills, continuing development until approved" +requires-python = ">=3.11" + +dependencies = [ + "cement>=3.0.10", + "colorlog>=6.8.0", + "textual>=0.47.0", + "opentelemetry-api>=1.22.0", + "opentelemetry-sdk>=1.22.0", + "opentelemetry-exporter-otlp-proto-grpc>=1.22.0", + "chevron>=0.14.0", + "pyyaml>=6.0", + "toml>=0.10.2", + "skill-agents-common", +] + +[tool.uv.sources] +skill-agents-common = { path = "../skill-agents-common", editable = true } + +[project.optional-dependencies] +dev = [ + "pytest>=8.0.0", + "pytest-asyncio>=0.23.0", + "pytest-cov>=4.1.0", + "ruff>=0.2.0", +] + +[project.scripts] +skill-pr-addresser = "skill_pr_addresser.main:main" + +[build-system] +requires = ["setuptools>=61.0"] +build-backend = "setuptools.build_meta" + +[tool.setuptools.packages.find] +where = ["."] +include = ["skill_pr_addresser*", "src*"] + +[tool.ruff] +line-length = 100 +target-version = "py311" + +[tool.ruff.lint] +select = ["E", "F", "I", "W"] + +[tool.pytest.ini_options] +testpaths = ["tests"] +asyncio_mode = "auto" diff --git a/.claude/agents/skill-pr-addresser/src/__init__.py b/.claude/agents/skill-pr-addresser/src/__init__.py new file mode 100644 index 0000000..59c5ac6 --- /dev/null 
+++ b/.claude/agents/skill-pr-addresser/src/__init__.py @@ -0,0 +1 @@ +"""Skill PR Addresser - Address PR review feedback for skills.""" diff --git a/.claude/agents/skill-pr-addresser/src/addresser.py b/.claude/agents/skill-pr-addresser/src/addresser.py new file mode 100644 index 0000000..6593e51 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/addresser.py @@ -0,0 +1,581 @@ +"""Main addresser orchestration for skill-pr-addresser. + +Orchestrates the full feedback addressing loop: +1. Analyze feedback +2. Fix issues +3. Commit and push changes +4. Post iteration comment +5. Request re-review when done +""" + +import logging +import subprocess +import sys +from dataclasses import dataclass, field +from datetime import datetime +from pathlib import Path + +# Add parent directory to path for shared library import +_agents_dir = Path(__file__).parent.parent.parent +if str(_agents_dir) not in sys.path: + sys.path.insert(0, str(_agents_dir)) + +from skill_agents_common.models import Stage + +from .costs import CallCost, SessionCosts, format_cost +from .discovery import DiscoveryContext +from .exceptions import ConflictError, IterationLimitError +from .feedback import ( + AnalysisResult, + FixResult, + SubstantiveCheckResult, + analyze_feedback, + check_substantive_feedback, + fix_with_escalation, +) +from .github_pr import add_pr_comment, request_rereview +from .templates import render_template +from .tracing import span, record_iteration, traced + + +log = logging.getLogger(__name__) + + +@dataclass +class IterationResult: + """Result from a single addressing iteration.""" + + iteration: int + analysis: AnalysisResult + fix_result: FixResult + commit_sha: str | None = None + pushed: bool = False + comment_url: str | None = None + costs: list[CallCost] = field(default_factory=list) + + @property + def iteration_cost(self) -> float: + """Total cost for this iteration.""" + return sum(c.total_cost for c in self.costs) + + +@dataclass +class AddressingResult: + """Final result from the addressing process.""" + + success: bool + iterations_run: int + total_addressed: int + total_skipped: int + files_modified: list[str] = field(default_factory=list) + final_commit_sha: str | None = None + ready_for_review: bool = False + error: str | None = None + iteration_results: list[IterationResult] = field(default_factory=list) + total_cost: float = 0.0 + + @property + def cost_formatted(self) -> str: + """Format total cost for display.""" + return format_cost(self.total_cost) + + +class Addresser: + """Orchestrates the feedback addressing loop.""" + + def __init__( + self, + agent_dir: Path, + sessions_dir: Path, + owner: str, + repo: str, + rate_limit_delay: float = 1.0, + ): + """Initialize the addresser. + + Args: + agent_dir: Path to the agent directory + sessions_dir: Path to sessions directory + owner: Repository owner + repo: Repository name + rate_limit_delay: Delay between API calls in seconds + """ + self.agent_dir = agent_dir + self.sessions_dir = sessions_dir + self.owner = owner + self.repo = repo + self.rate_limit_delay = rate_limit_delay + + @traced("addresser.address") + def address( + self, + ctx: DiscoveryContext, + max_iterations: int = 3, + ) -> AddressingResult: + """Run the feedback addressing loop. 
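+
+        Each iteration analyzes the outstanding feedback, applies fixes with
+        model escalation (via fix_with_escalation), commits and pushes any
+        resulting changes, and posts an iteration comment on the PR. The loop
+        stops early once the fix success rate reaches 90%, when an iteration
+        addresses nothing, or when a git conflict is hit.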
+ + Args: + ctx: Discovery context with PR info + max_iterations: Maximum number of addressing iterations + + Returns: + AddressingResult with summary of what was done + """ + total_addressed = 0 + total_skipped = 0 + all_files_modified: set[str] = set() + iteration_results: list[IterationResult] = [] + final_sha: str | None = None + + # Initialize cost tracking + session_costs = SessionCosts( + session_id=ctx.session.session_id, + pr_number=ctx.pr_number, + ) + + # Run substantive checks on pending feedback (if any) + if ctx.needs_substantive_check: + log.info( + f"Running substantive check on {len(ctx.pending_reviews)} reviews, " + f"{len(ctx.pending_comments)} comments..." + ) + with span("substantive_check"): + subst_result, subst_cost = check_substantive_feedback( + self.agent_dir, + ctx.pending_reviews, + ctx.pending_comments, + ) + if subst_cost: + session_costs.add_call(subst_cost) + log.debug(f"Substantive check cost: {format_cost(subst_cost.total_cost)}") + + # Move substantive items into context + ctx.substantive_reviews = subst_result.substantive_reviews + ctx.substantive_comments = subst_result.substantive_comments + + log.info( + f"Substantive check: {len(subst_result.substantive_reviews)} reviews, " + f"{len(subst_result.substantive_comments)} comments are actionable" + ) + + if subst_result.not_substantive_ids: + log.debug(f"Not substantive: {subst_result.not_substantive_ids}") + + # Check if we have any feedback to address + log.debug( + f"Feedback counts - blocking: {len(ctx.blocking_reviews)}, " + f"actionable: {len(ctx.actionable_reviews)}, " + f"substantive: {len(ctx.substantive_reviews)}, " + f"threads: {len(ctx.unresolved_threads)}, " + f"total: {ctx.feedback_count}" + ) + + if not ctx.needs_changes: + log.info("No actionable feedback to address") + return AddressingResult( + success=True, + iterations_run=0, + total_addressed=0, + total_skipped=0, + ready_for_review=True, + total_cost=session_costs.total_cost, + ) + + for iteration in range(1, max_iterations + 1): + log.info(f"=== Iteration {iteration}/{max_iterations} ===") + iteration_costs: list[CallCost] = [] + + with span(f"iteration.{iteration}", {"pr": ctx.pr_number}): + # Update session stage + ctx.session.update_stage(Stage.ANALYSIS) + ctx.session.save(self.sessions_dir) + + # Step 1: Analyze feedback + log.info("Analyzing feedback...") + log.debug( + f"Passing to analyzer - reviews: {len(ctx.all_reviews)}, " + f"comments: {len(ctx.all_comments)}, threads: {len(ctx.unresolved_threads)}" + ) + with span("analyze_feedback"): + analysis, analysis_cost = analyze_feedback(self.agent_dir, ctx) + if analysis_cost: + iteration_costs.append(analysis_cost) + session_costs.add_call(analysis_cost) + + # Check for feedback in both new (action_groups) and legacy (feedback_items) formats + has_feedback = bool(analysis.action_groups) or bool(analysis.feedback_items) + if not has_feedback: + log.info("No feedback items found from analyzer") + log.debug(f"Analyzer summary: {analysis.summary}") + break + + # Log what was found + if analysis.action_groups: + log.info( + f"Found {len(analysis.action_groups)} action groups " + f"({len(analysis.guidance)} guidance items)" + ) + else: + log.info(f"Found {len(analysis.feedback_items)} feedback items (legacy format)") + + if not analysis.actionable_count: + log.info("No actionable items to fix") + break + + # Step 2: Fix feedback + ctx.session.update_stage(Stage.FIXING) + ctx.session.save(self.sessions_dir) + + log.info("Fixing feedback...") + with span("fix_feedback"): + 
fix_result, fix_costs = fix_with_escalation(self.agent_dir, ctx, analysis) + for cost in fix_costs: + iteration_costs.append(cost) + session_costs.add_call(cost) + + total_addressed += len(fix_result.addressed) + total_skipped += len(fix_result.skipped) + all_files_modified.update(fix_result.files_modified) + + # Step 3: Commit and push if changes were made + commit_sha = None + pushed = False + + if fix_result.addressed: + try: + commit_sha = self._commit_changes( + ctx, iteration, fix_result + ) + if commit_sha: + self._push_changes(ctx) + pushed = True + final_sha = commit_sha + log.info(f"Pushed changes: {commit_sha[:8]}") + except ConflictError as e: + log.error(f"Git conflict: {e}") + ctx.session.add_error(str(e)) + break + + # Step 4: Post iteration comment + comment_url = self._add_iteration_comment( + ctx, iteration, analysis, fix_result, commit_sha + ) + + iteration_result = IterationResult( + iteration=iteration, + analysis=analysis, + fix_result=fix_result, + commit_sha=commit_sha, + pushed=pushed, + comment_url=comment_url, + costs=iteration_costs, + ) + iteration_results.append(iteration_result) + + # Record iteration for tracing + record_iteration( + iteration=iteration, + feedback_count=analysis.actionable_count, + addressed_count=len(fix_result.addressed), + skipped_count=len(fix_result.skipped), + success_rate=fix_result.success_rate, + ) + + # Store in session + ctx.session.results[f"iteration_{iteration}"] = { + "analysis_summary": analysis.summary, + "feedback_count": analysis.actionable_count, + "action_groups": len(analysis.action_groups), + "guidance": analysis.guidance, + "addressed": len(fix_result.addressed), + "skipped": len(fix_result.skipped), + "files_modified": fix_result.files_modified, + "commit_sha": commit_sha, + "cost": iteration_result.iteration_cost, + } + ctx.session.save(self.sessions_dir) + + # Log iteration cost + if iteration_costs: + log.info(f"Iteration {iteration} cost: {format_cost(iteration_result.iteration_cost)}") + + # Check if we're done + if fix_result.success_rate >= 0.9: + log.info("High success rate - addressing complete") + break + + if not fix_result.addressed: + log.warning("No items addressed in this iteration") + break + + # Final status update + ready_for_review = total_addressed > 0 and total_skipped == 0 + success = total_addressed > 0 + + # Save session costs + session_costs.save(self.sessions_dir) + total_cost = session_costs.total_cost + + if success: + ctx.session.update_stage(Stage.COMPLETE) + log.info("Feedback addressing complete!") + log.info(f"Total cost: {format_cost(total_cost)}") + + if ready_for_review: + # Request re-review from blocking reviewers + self._request_rereview(ctx) + else: + ctx.session.update_stage(Stage.FAILED) + log.warning("Could not address feedback") + + # Store final cost in session + ctx.session.results["total_cost"] = total_cost + ctx.session.save(self.sessions_dir) + + return AddressingResult( + success=success, + iterations_run=len(iteration_results), + total_addressed=total_addressed, + total_skipped=total_skipped, + files_modified=list(all_files_modified), + final_commit_sha=final_sha, + ready_for_review=ready_for_review, + iteration_results=iteration_results, + total_cost=total_cost, + ) + + def _commit_changes( + self, + ctx: DiscoveryContext, + iteration: int, + fix_result: FixResult, + ) -> str | None: + """Commit changes made in the worktree. 
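+
+        Stages everything in the worktree and writes a conventional commit of
+        the form "fix(skills): address PR #<n> feedback (iteration <i>)",
+        summarizing the addressed items and modified files in the body.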
+ + Args: + ctx: Discovery context + iteration: Current iteration number + fix_result: Result from fixing + + Returns: + Commit SHA if successful, None otherwise + + Raises: + ConflictError: If there are merge conflicts + """ + worktree_path = Path(ctx.worktree.path) + + # Check for unstaged changes + status_result = subprocess.run( + ["git", "status", "--porcelain"], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if not status_result.stdout.strip(): + log.info("No changes to commit") + return None + + # Stage all changes + subprocess.run( + ["git", "add", "-A"], + cwd=worktree_path, + check=True, + ) + + # Build commit message + addressed_summary = ", ".join( + item.get("id", "unknown")[:20] for item in fix_result.addressed[:5] + ) + if len(fix_result.addressed) > 5: + addressed_summary += f", ... (+{len(fix_result.addressed) - 5} more)" + + commit_message = f"""fix(skills): address PR #{ctx.pr_number} feedback (iteration {iteration}) + +### Changed +- Addressed {len(fix_result.addressed)} feedback items: {addressed_summary} +- Modified files: {', '.join(fix_result.files_modified)} + +### Stats +- Lines added: +{fix_result.lines_added} +- Lines removed: -{fix_result.lines_removed} + +🤖 Generated with [Claude Code](https://claude.com/claude-code) + +Co-Authored-By: Claude Sonnet 4 +""" + + # Create commit + result = subprocess.run( + ["git", "commit", "-m", commit_message], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if result.returncode != 0: + if "conflict" in result.stderr.lower(): + raise ConflictError(f"Merge conflict during commit: {result.stderr}") + log.error(f"Commit failed: {result.stderr}") + return None + + # Get commit SHA + sha_result = subprocess.run( + ["git", "rev-parse", "HEAD"], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + return sha_result.stdout.strip() if sha_result.returncode == 0 else None + + def _push_changes(self, ctx: DiscoveryContext) -> bool: + """Push changes to remote. + + Attempts to push, and if rejected due to remote changes, + pulls with rebase and retries. 
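+
+        Roughly the following sequence, with the branch taken from
+        ctx.pr.branch:
+
+            git push origin <branch>
+            git pull --rebase origin <branch>   # only if the push was rejected
+            git push origin <branch>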
+ + Args: + ctx: Discovery context + + Returns: + True if push succeeded + """ + worktree_path = Path(ctx.worktree.path) + + # First attempt to push + result = subprocess.run( + ["git", "push", "origin", ctx.pr.branch], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if result.returncode == 0: + return True + + # Check if rejected due to remote changes + if "fetch first" in result.stderr or "non-fast-forward" in result.stderr: + log.info("Remote has changes, pulling with rebase...") + + # Pull with rebase + pull_result = subprocess.run( + ["git", "pull", "--rebase", "origin", ctx.pr.branch], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if pull_result.returncode != 0: + if "conflict" in pull_result.stderr.lower() or "conflict" in pull_result.stdout.lower(): + log.error(f"Rebase conflict: {pull_result.stderr}") + # Abort the rebase + subprocess.run( + ["git", "rebase", "--abort"], + cwd=worktree_path, + capture_output=True, + ) + raise ConflictError(f"Rebase conflict during pull: {pull_result.stderr}") + log.error(f"Pull failed: {pull_result.stderr}") + return False + + # Retry push + retry_result = subprocess.run( + ["git", "push", "origin", ctx.pr.branch], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if retry_result.returncode != 0: + log.error(f"Push failed after rebase: {retry_result.stderr}") + return False + + log.info("Push succeeded after rebase") + return True + + log.error(f"Push failed: {result.stderr}") + return False + + def _add_iteration_comment( + self, + ctx: DiscoveryContext, + iteration: int, + analysis: AnalysisResult, + fix_result: FixResult, + commit_sha: str | None, + ) -> str | None: + """Add a comment summarizing the iteration. + + Args: + ctx: Discovery context + iteration: Iteration number + analysis: Analysis result + fix_result: Fix result + commit_sha: Commit SHA if changes were made + + Returns: + Comment URL if successful + """ + template_data = { + "iteration": iteration, + "pr_number": ctx.pr_number, + "skill_path": ctx.skill_path, + "feedback_count": analysis.actionable_count, + "action_groups_count": len(analysis.action_groups), + "guidance": analysis.guidance, + "addressed_count": len(fix_result.addressed), + "skipped_count": len(fix_result.skipped), + "addressed": fix_result.addressed, + "skipped": fix_result.skipped, + "files_modified": fix_result.files_modified, + "lines_added": fix_result.lines_added, + "lines_removed": fix_result.lines_removed, + "commit_sha": commit_sha, + "commit_short": commit_sha[:8] if commit_sha else None, + "success_rate": f"{fix_result.success_rate * 100:.0f}%", + "timestamp": datetime.utcnow().isoformat(), + } + + body = render_template(self.agent_dir / "templates", "iteration_comment", template_data) + + return add_pr_comment( + self.owner, self.repo, ctx.pr_number, body + ) + + def _request_rereview(self, ctx: DiscoveryContext) -> bool: + """Request re-review from blocking reviewers. 
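+
+        Renders the ready_comment.hbs template, posts it as a PR comment, then
+        asks the blocking reviewers for another review via request_rereview().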
+ + Args: + ctx: Discovery context + + Returns: + True if re-review was requested + """ + if not ctx.blocking_reviewers: + log.info("No blocking reviewers to request re-review from") + return False + + # Post ready comment + template_data = { + "pr_number": ctx.pr_number, + "skill_path": ctx.skill_path, + "reviewers": ctx.blocking_reviewers, + "timestamp": datetime.utcnow().isoformat(), + } + + body = render_template(self.agent_dir / "templates", "ready_comment", template_data) + add_pr_comment(self.owner, self.repo, ctx.pr_number, body) + + # Request re-review + success = request_rereview( + self.owner, self.repo, ctx.pr_number, ctx.blocking_reviewers + ) + + if success: + log.info(f"Requested re-review from: {', '.join(ctx.blocking_reviewers)}") + else: + log.warning("Failed to request re-review") + + return success diff --git a/.claude/agents/skill-pr-addresser/src/app.py b/.claude/agents/skill-pr-addresser/src/app.py new file mode 100644 index 0000000..3b9fff2 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/app.py @@ -0,0 +1,582 @@ +"""Cement application class for skill-pr-addresser.""" + +import sys +from pathlib import Path + +from cement import App, Controller, ex +from cement.ext.ext_colorlog import ColorLogHandler + +# Add parent directory to path for shared library import +_agents_dir = Path(__file__).parent.parent.parent +if str(_agents_dir) not in sys.path: + sys.path.insert(0, str(_agents_dir)) + +from skill_agents_common.models import Stage + +from .addresser import Addresser +from .costs import format_cost, estimate_pr_cost +from .discovery import discover, DiscoveryContext +from .exceptions import ( + PRNotFoundError, + PRClosedError, + NoFeedbackError, + AddresserError, +) +from .ext_toml import TomlConfigHandler +from .github_pr import find_prs_with_feedback +from .progress import ProgressTracker +from .tracing import TracingConfig, init_tracing + + +class Base(Controller): + """Base controller for skill-pr-addresser.""" + + class Meta: + label = "base" + description = "Address PR review feedback for skills" + arguments = [ + ( + ["-v", "--version"], + {"action": "version", "version": "skill-pr-addresser 0.1.0"}, + ), + ] + + def _default(self): + """Default action when no subcommand given.""" + self.app.args.print_help() + + @ex( + help="Address review feedback on a PR", + arguments=[ + (["pr_number"], {"help": "Pull request number", "type": int}), + ( + ["--skill"], + {"help": "Skill path (auto-detected from PR if not specified)"}, + ), + ( + ["--max-iterations"], + { + "help": "Maximum addressing iterations (default: 3)", + "type": int, + "default": 3, + }, + ), + ( + ["--dry-run"], + { + "help": "Show what would be done without making changes", + "action": "store_true", + }, + ), + ( + ["--force"], + { + "help": "Force addressing even if PR has no pending feedback", + "action": "store_true", + }, + ), + ( + ["--verbose"], + { + "help": "Enable verbose/debug output", + "action": "store_true", + }, + ), + ( + ["--interactive"], + { + "help": "Run sub-agents in TUI mode (for watching only, workflow won't complete)", + "action": "store_true", + }, + ), + ( + ["--stream"], + { + "help": "Stream sub-agent output in real-time (recommended for debugging)", + "action": "store_true", + }, + ), + ], + ) + def address(self): + """Address review feedback on a PR.""" + import logging + from .feedback import set_debug_mode + + pr_number = self.app.pargs.pr_number + skill = self.app.pargs.skill + dry_run = self.app.pargs.dry_run + force = self.app.pargs.force + 
max_iterations = self.app.pargs.max_iterations + verbose = self.app.pargs.verbose + interactive = self.app.pargs.interactive + stream = self.app.pargs.stream + + # Set sub-agent debug mode + if interactive or stream: + set_debug_mode(interactive=interactive, verbose=stream) + if interactive: + self.app.log.info("[DEBUG] Interactive mode: sub-agents will run in TUI") + self.app.log.warning("[DEBUG] Workflow will NOT complete in interactive mode") + self.app.log.warning("[DEBUG] Use --stream instead for debugging with workflow completion") + if stream: + self.app.log.info("[DEBUG] Stream mode: sub-agent output will be shown") + + # Enable debug logging if verbose + if verbose: + for handler in self.app.log.backend.handlers: + handler.setLevel(logging.DEBUG) + self.app.log.backend.setLevel(logging.DEBUG) + # Also set module-level loggers + logging.getLogger("src").setLevel(logging.DEBUG) + + owner = self.app.config.get("skill-pr-addresser", "repo_owner") + repo = self.app.config.get("skill-pr-addresser", "repo_name") + + self.app.log.info(f"Discovering context for PR #{pr_number}...") + + try: + ctx = discover( + owner=owner, + repo=repo, + pr_number=pr_number, + sessions_dir=self.app.sessions_dir, + worktree_base=self.app.worktree_base, + repo_path=self.app.repo_path, + skill_path=skill, + force=force, + ) + + # Print discovery summary + self.app.log.info("Discovery complete:") + for line in ctx.summary().split("\n"): + self.app.log.info(line) + + if dry_run: + self.app.log.info("[DRY RUN] Would address feedback:") + for review in ctx.blocking_reviews: + self.app.log.info(f" - Review from {review.author}: {review.state}") + for thread in ctx.unresolved_threads: + self.app.log.info( + f" - Thread on {thread.path}:{thread.line} by {thread.author}" + ) + self.app.log.info("[DRY RUN] No changes made") + return + + # Run the addresser + agent_dir = Path(__file__).parent.parent + rate_limit = self.app.config.get("skill-pr-addresser", "rate_limit_delay") + + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=self.app.sessions_dir, + owner=owner, + repo=repo, + rate_limit_delay=float(rate_limit) if rate_limit else 1.0, + ) + + result = addresser.address(ctx, max_iterations) + + # Report final summary + if result.success: + self.app.log.info( + f"Addressed {result.total_addressed} items across " + f"{result.iterations_run} iteration(s)" + ) + if result.total_skipped: + self.app.log.warning(f"Skipped {result.total_skipped} items") + if result.ready_for_review: + self.app.log.info("PR is ready for re-review") + self.app.log.info(f"Total cost: {result.cost_formatted}") + else: + self.app.log.warning("Could not address feedback") + if result.error: + self.app.log.error(result.error) + if result.total_cost > 0: + self.app.log.info(f"Cost incurred: {result.cost_formatted}") + + except PRNotFoundError as e: + self.app.log.error(str(e)) + self.app.exit_code = 1 + + except PRClosedError as e: + self.app.log.info(str(e)) + # Exit code 0 - not an error, just nothing to do + + except NoFeedbackError as e: + self.app.log.info(str(e)) + # Exit code 0 - not an error, just nothing to do + + except AddresserError as e: + self.app.log.error(str(e)) + self.app.exit_code = e.exit_code + + @ex( + help="Check addressing status for a PR", + arguments=[ + (["pr_number"], {"help": "Pull request number", "type": int}), + ], + ) + def status(self): + """Check addressing status for a PR.""" + pr_number = self.app.pargs.pr_number + owner = self.app.config.get("skill-pr-addresser", "repo_owner") + repo = 
self.app.config.get("skill-pr-addresser", "repo_name") + + self.app.log.info(f"Checking status for PR #{pr_number}...") + + try: + # Use discovery to get current state + ctx = discover( + owner=owner, + repo=repo, + pr_number=pr_number, + sessions_dir=self.app.sessions_dir, + worktree_base=self.app.worktree_base, + repo_path=self.app.repo_path, + force=True, # Don't fail if no feedback + ) + + print(ctx.summary()) + + if ctx.pr.review_decision == "APPROVED": + self.app.log.info("PR is approved!") + elif ctx.needs_changes: + self.app.log.info(f"PR needs changes ({ctx.feedback_count} items)") + else: + self.app.log.info("PR has no pending feedback") + + except PRNotFoundError as e: + self.app.log.error(str(e)) + self.app.exit_code = 1 + + except PRClosedError as e: + self.app.log.info(str(e)) + + @ex( + help="List sessions", + arguments=[ + ( + ["--all"], + { + "help": "Include completed sessions", + "action": "store_true", + "dest": "show_all", + }, + ), + ], + ) + def sessions(self): + """List all sessions.""" + from skill_agents_common.session import list_sessions as list_sessions_common + + sessions = list_sessions_common(self.app.sessions_dir) + + if not sessions: + self.app.log.info("No sessions found") + return + + show_all = self.app.pargs.show_all + + # Filter if not showing all + if not show_all: + sessions = [s for s in sessions if s.get("stage") not in ("complete", "failed")] + + if not sessions: + self.app.log.info("No active sessions found (use --all to see completed)") + return + + # Print header + print(f"{'ID':<10} {'PR':<8} {'Stage':<20} {'Skill':<40}") + print("-" * 80) + + for s in sessions: + pr = s.get("pr_number", "-") + print( + f"{s['session_id']:<10} " + f"{'#' + str(pr) if pr else '-':<8} " + f"{s['stage']:<20} " + f"{s.get('skill_path', '-'):<40}" + ) + + @ex( + help="Find PRs with pending feedback", + arguments=[ + ( + ["--label"], + { + "help": "Filter by label (can be used multiple times)", + "action": "append", + "dest": "labels", + }, + ), + ( + ["--limit"], + { + "help": "Maximum number of PRs to return (default: 50)", + "type": int, + "default": 50, + }, + ), + ], + ) + def find(self): + """Find PRs with pending feedback.""" + owner = self.app.config.get("skill-pr-addresser", "repo_owner") + repo = self.app.config.get("skill-pr-addresser", "repo_name") + labels = self.app.pargs.labels + limit = self.app.pargs.limit + + self.app.log.info(f"Finding PRs with pending feedback in {owner}/{repo}...") + + prs = find_prs_with_feedback( + owner=owner, + repo=repo, + labels=labels, + limit=limit, + ) + + if not prs: + self.app.log.info("No PRs with pending feedback found") + return + + # Print header + print(f"{'PR':<8} {'Reviewers':<30} {'Title':<50}") + print("-" * 90) + + for pr in prs: + reviewers = ", ".join(pr.get("blocking_reviewers", []))[:28] + title = pr.get("title", "")[:48] + print(f"#{pr['pr_number']:<7} {reviewers:<30} {title:<50}") + + print(f"\nFound {len(prs)} PR(s) with pending feedback") + + @ex( + help="Address feedback on all PRs with pending reviews", + arguments=[ + ( + ["--label"], + { + "help": "Filter by label (can be used multiple times)", + "action": "append", + "dest": "labels", + }, + ), + ( + ["--max-iterations"], + { + "help": "Maximum addressing iterations per PR (default: 3)", + "type": int, + "default": 3, + }, + ), + ( + ["--dry-run"], + { + "help": "Show what would be done without making changes", + "action": "store_true", + }, + ), + ( + ["--limit"], + { + "help": "Maximum number of PRs to process (default: 10)", + "type": int, + 
"default": 10, + }, + ), + ], + ) + def batch(self): + """Address feedback on all PRs with pending reviews.""" + owner = self.app.config.get("skill-pr-addresser", "repo_owner") + repo = self.app.config.get("skill-pr-addresser", "repo_name") + labels = self.app.pargs.labels + max_iterations = self.app.pargs.max_iterations + dry_run = self.app.pargs.dry_run + limit = self.app.pargs.limit + + self.app.log.info(f"Finding PRs with pending feedback in {owner}/{repo}...") + + prs = find_prs_with_feedback( + owner=owner, + repo=repo, + labels=labels, + limit=limit, + ) + + if not prs: + self.app.log.info("No PRs with pending feedback found") + return + + self.app.log.info(f"Found {len(prs)} PR(s) with pending feedback") + + if dry_run: + self.app.log.info("[DRY RUN] Would address:") + for pr in prs: + reviewers = ", ".join(pr.get("blocking_reviewers", [])) + self.app.log.info(f" PR #{pr['pr_number']}: {pr['title']}") + if reviewers: + self.app.log.info(f" Blocking reviewers: {reviewers}") + self.app.log.info("[DRY RUN] No changes made") + return + + # Initialize progress tracker + tracker = ProgressTracker(self.app.data_dir) + pr_numbers = [pr["pr_number"] for pr in prs] + tracker.start_batch(pr_numbers) + + # Track total cost across batch + total_batch_cost = 0.0 + + for i, pr_info in enumerate(prs, 1): + pr_number = pr_info["pr_number"] + self.app.log.info(f"\n[{i}/{len(prs)}] Addressing PR #{pr_number}...") + + try: + ctx = discover( + owner=owner, + repo=repo, + pr_number=pr_number, + sessions_dir=self.app.sessions_dir, + worktree_base=self.app.worktree_base, + repo_path=self.app.repo_path, + force=False, + ) + + tracker.start_pr( + pr_number, + title=pr_info.get("title", ""), + skill_path=ctx.skill_path, + ) + + # Run the addresser + agent_dir = Path(__file__).parent.parent + rate_limit = self.app.config.get("skill-pr-addresser", "rate_limit_delay") + + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=self.app.sessions_dir, + owner=owner, + repo=repo, + rate_limit_delay=float(rate_limit) if rate_limit else 1.0, + ) + + result = addresser.address(ctx, max_iterations) + + if result.success: + self.app.log.info( + f" ✓ Addressed {result.total_addressed} items ({result.cost_formatted})" + ) + tracker.update_iteration( + pr_number, + iteration=result.iterations_run, + feedback_count=result.total_addressed + result.total_skipped, + addressed_count=result.total_addressed, + skipped_count=result.total_skipped, + cost=result.total_cost, + ) + tracker.complete_pr(pr_number, success=True) + total_batch_cost += result.total_cost + else: + self.app.log.warning(f" ✗ Failed: {result.error}") + tracker.complete_pr(pr_number, success=False, error=result.error) + total_batch_cost += result.total_cost + + except (PRClosedError, NoFeedbackError) as e: + self.app.log.info(f" - Skipped: {e}") + tracker.skip_pr(pr_number, reason=str(e)) + + except (PRNotFoundError, AddresserError) as e: + self.app.log.error(f" ✗ Error: {e}") + tracker.complete_pr(pr_number, success=False, error=str(e)) + + # Complete batch and print summary + tracker.complete_batch() + + self.app.log.info("\n" + "=" * 50) + self.app.log.info(tracker.get_summary()) + self.app.log.info(f"Total batch cost: {format_cost(total_batch_cost)}") + + +class SkillPRAddresser(App): + """Cement application for addressing PR review feedback.""" + + class Meta: + label = "skill-pr-addresser" + handlers = [Base, ColorLogHandler, TomlConfigHandler] + config_handler = "toml" + extensions = ["colorlog"] + config_file_suffix = ".toml" + config_files = [ + 
# Local config (relative to agent directory) + str(Path(__file__).parent.parent / "config.toml"), + # User config + "~/.config/skill-pr-addresser/config.toml", + ] + config_defaults = { + "skill-pr-addresser": { + "repo_owner": "aRustyDev", + "repo_name": "ai", + "max_iterations": 3, + "rate_limit_delay": 1.0, + "worktree_base": "/private/tmp/worktrees", + }, + "otel": { + "enabled": False, + "endpoint": "localhost:4317", # gRPC endpoint (no http:// prefix) + "service_name": "skill-pr-addresser", + "version": "0.1.0", + }, + } + exit_on_close = True + + def setup(self): + """Set up the application.""" + super().setup() + self._init_tracing() + + def _init_tracing(self): + """Initialize OpenTelemetry tracing from config.""" + otel_enabled = self.config.get("otel", "enabled") + if otel_enabled and str(otel_enabled).lower() in ("true", "1", "yes"): + tracing_config = TracingConfig( + enabled=True, + endpoint=self.config.get("otel", "endpoint"), + service_name=self.config.get("otel", "service_name"), + version=self.config.get("otel", "version"), + ) + if init_tracing(tracing_config): + self.log.debug("OpenTelemetry tracing initialized") + else: + self.log.debug("OpenTelemetry tracing not available") + + @property + def data_dir(self) -> Path: + """Directory for agent data storage.""" + return Path(__file__).parent.parent / "data" + + @property + def sessions_dir(self) -> Path: + """Directory for session storage.""" + sessions = self.data_dir / "sessions" + sessions.mkdir(parents=True, exist_ok=True) + return sessions + + @property + def worktree_base(self) -> Path: + """Base directory for worktrees.""" + base = self.config.get("skill-pr-addresser", "worktree_base") + return Path(base) + + @property + def repo_path(self) -> Path: + """Path to the main repository. + + Walks up from the agent directory to find the repo root. + """ + # Agent is at .claude/agents/skill-pr-addresser/ + # Repo root is 4 levels up + agent_dir = Path(__file__).parent.parent + repo_root = agent_dir.parent.parent.parent + return repo_root diff --git a/.claude/agents/skill-pr-addresser/src/commit.py b/.claude/agents/skill-pr-addresser/src/commit.py new file mode 100644 index 0000000..0587802 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/commit.py @@ -0,0 +1,349 @@ +# src/commit.py +"""Git commit and PR comment operations. + +Stage 7.5 interface module that provides commit and comment +functionality for the pipeline. +""" + +import logging +import subprocess +from pathlib import Path +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from .feedback import FixResult + + +log = logging.getLogger(__name__) + + +class ConflictError(Exception): + """Raised when a git conflict is detected.""" + pass + + +def commit_and_push( + worktree_path: Path, + fix_results: list["FixResult"], + iteration: int, + branch: str | None = None, +) -> str: + """Commit changes and push to remote. 
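+
+    If the push is rejected because the remote branch has moved, the helper
+    pulls with --rebase and retries the push once (see _pull_and_retry_push).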
+ + Args: + worktree_path: Path to git worktree + fix_results: List of fix results with changes + iteration: Current iteration number + branch: Optional branch name (defaults to current branch) + + Returns: + Commit SHA + + Raises: + ConflictError: If merge conflict detected + RuntimeError: If commit or push fails + + Commit message format: + fix(pr-feedback): address review feedback (iteration N) + + ### Changed + - {list of changes from fix_results} + + 🤖 Generated with skill-pr-addresser + """ + # Check for unstaged changes + status_result = subprocess.run( + ["git", "status", "--porcelain"], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if not status_result.stdout.strip(): + log.info("No changes to commit") + raise RuntimeError("No changes to commit") + + # Stage all changes + subprocess.run( + ["git", "add", "-A"], + cwd=worktree_path, + check=True, + ) + + # Collect all addressed items and files + all_addressed: list[dict] = [] + all_files: set[str] = set() + total_lines_added = 0 + total_lines_removed = 0 + + for result in fix_results: + all_addressed.extend(result.addressed) + all_files.update(result.files_modified) + total_lines_added += result.lines_added + total_lines_removed += result.lines_removed + + # Build addressed summary + addressed_summary = ", ".join( + item.get("id", "unknown")[:20] for item in all_addressed[:5] + ) + if len(all_addressed) > 5: + addressed_summary += f", ... (+{len(all_addressed) - 5} more)" + + files_list = ", ".join(sorted(all_files)[:5]) + if len(all_files) > 5: + files_list += f", ... (+{len(all_files) - 5} more)" + + commit_message = f"""fix(pr-feedback): address review feedback (iteration {iteration}) + +### Changed +- Addressed {len(all_addressed)} feedback items: {addressed_summary} +- Modified files: {files_list} + +### Stats +- Lines added: +{total_lines_added} +- Lines removed: -{total_lines_removed} + +🤖 Generated with [Claude Code](https://claude.com/claude-code) + +Co-Authored-By: Claude Sonnet 4 +""" + + # Create commit + result = subprocess.run( + ["git", "commit", "-m", commit_message], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if result.returncode != 0: + if "conflict" in result.stderr.lower(): + raise ConflictError(f"Merge conflict during commit: {result.stderr}") + raise RuntimeError(f"Commit failed: {result.stderr}") + + # Get commit SHA + sha_result = subprocess.run( + ["git", "rev-parse", "HEAD"], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if sha_result.returncode != 0: + raise RuntimeError("Failed to get commit SHA") + + commit_sha = sha_result.stdout.strip() + + # Push changes + if branch is None: + # Get current branch + branch_result = subprocess.run( + ["git", "rev-parse", "--abbrev-ref", "HEAD"], + cwd=worktree_path, + capture_output=True, + text=True, + ) + branch = branch_result.stdout.strip() + + push_result = subprocess.run( + ["git", "push", "origin", branch], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if push_result.returncode != 0: + # Check if rejected due to remote changes + if "fetch first" in push_result.stderr or "non-fast-forward" in push_result.stderr: + log.info("Remote has changes, pulling with rebase...") + _pull_and_retry_push(worktree_path, branch) + else: + raise RuntimeError(f"Push failed: {push_result.stderr}") + + log.info(f"Committed and pushed: {commit_sha[:8]}") + return commit_sha + + +def _pull_and_retry_push(worktree_path: Path, branch: str) -> None: + """Pull with rebase and retry push. 
+ + Args: + worktree_path: Path to git worktree + branch: Branch name + + Raises: + ConflictError: If rebase conflict detected + RuntimeError: If push still fails + """ + # Pull with rebase + pull_result = subprocess.run( + ["git", "pull", "--rebase", "origin", branch], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if pull_result.returncode != 0: + if "conflict" in pull_result.stderr.lower() or "conflict" in pull_result.stdout.lower(): + # Abort the rebase + subprocess.run( + ["git", "rebase", "--abort"], + cwd=worktree_path, + capture_output=True, + ) + raise ConflictError(f"Rebase conflict: {pull_result.stderr}") + raise RuntimeError(f"Pull failed: {pull_result.stderr}") + + # Retry push + retry_result = subprocess.run( + ["git", "push", "origin", branch], + cwd=worktree_path, + capture_output=True, + text=True, + ) + + if retry_result.returncode != 0: + raise RuntimeError(f"Push failed after rebase: {retry_result.stderr}") + + +def post_pr_comment( + owner: str, + repo: str, + pr_number: int, + fix_results: list["FixResult"], + commit_sha: str, +) -> None: + """Post summary comment on PR. + + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + fix_results: List of fix results + commit_sha: Commit SHA + + Comment format: + ## ✅ Feedback Addressed + + **Iteration {n}** | Commit: {sha} + + ### Changes Made + | Action Group | Status | Details | + |--------------|--------|---------| + | ... | ✅ | ... | + + --- + *🤖 Automated by skill-pr-addresser* + """ + # Collect all results + all_addressed: list[dict] = [] + all_skipped: list[dict] = [] + all_files: set[str] = set() + + for result in fix_results: + all_addressed.extend(result.addressed) + all_skipped.extend(result.skipped) + all_files.update(result.files_modified) + + # Build table rows + rows = [] + for item in all_addressed: + item_id = item.get("id", "unknown")[:30] + details = item.get("description", "")[:50] + rows.append(f"| {item_id} | ✅ | {details} |") + + for item in all_skipped: + item_id = item.get("id", "unknown")[:30] + reason = item.get("reason", "")[:50] + rows.append(f"| {item_id} | ⏭️ Skipped | {reason} |") + + table = "\n".join(rows) if rows else "| (No changes) | - | - |" + + body = f"""## ✅ Feedback Addressed + +**Commit:** {commit_sha[:8]} + +### Changes Made +| Action Group | Status | Details | +|--------------|--------|---------| +{table} + +### Files Modified +{", ".join(sorted(all_files)) if all_files else "(none)"} + +--- +*🤖 Automated by skill-pr-addresser* +""" + + # Post comment using gh CLI + result = subprocess.run( + [ + "gh", + "pr", + "comment", + str(pr_number), + "--repo", + f"{owner}/{repo}", + "--body", + body, + ], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + log.warning(f"Failed to post PR comment: {result.stderr}") + + +def post_iteration_limit_comment( + owner: str, + repo: str, + pr_number: int, + iterations: int, + resolved_count: int, +) -> None: + """Post comment when max iterations reached. + + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + iterations: Number of iterations run + resolved_count: Number of resolved threads + + Comment format: + ## ⚠️ Iteration Limit Reached + + Reached maximum iterations ({n}). Some feedback may require manual attention. + + **Resolved:** {count} threads + **Remaining:** See unresolved threads above. + """ + body = f"""## ⚠️ Iteration Limit Reached + +Reached maximum iterations ({iterations}). 
Some feedback may require manual attention. + +**Resolved:** {resolved_count} threads +**Remaining:** See unresolved review threads above. + +--- +*🤖 Automated by skill-pr-addresser* +""" + + result = subprocess.run( + [ + "gh", + "pr", + "comment", + str(pr_number), + "--repo", + f"{owner}/{repo}", + "--body", + body, + ], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + log.warning(f"Failed to post iteration limit comment: {result.stderr}") diff --git a/.claude/agents/skill-pr-addresser/src/consolidate.py b/.claude/agents/skill-pr-addresser/src/consolidate.py new file mode 100644 index 0000000..e5c5576 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/consolidate.py @@ -0,0 +1,220 @@ +# src/consolidate.py +"""LLM-powered feedback consolidation. + +Stage 7.5 interface module that wraps feedback.py's analyze_feedback +with the interface expected by stages 8-13. +""" + +from dataclasses import dataclass, field +from pathlib import Path +from typing import TYPE_CHECKING + +from .feedback import ( + ActionGroup, + AnalysisResult, + analyze_feedback as _analyze_feedback, +) +from .costs import CallCost + +if TYPE_CHECKING: + from .filter import FilteredFeedback + from .models import TokenUsage + from .pipeline import PipelineContext + + +@dataclass +class ConsolidationResult: + """Result of LLM consolidation.""" + + action_groups: list[ActionGroup] = field(default_factory=list) + guidance: list[str] = field(default_factory=list) + token_usage: "TokenUsage | None" = None + + # Cross-reference info passed through + thread_links: dict[str, list[str]] = field(default_factory=dict) + + # Additional metadata from analysis + blocking_reviews: list[str] = field(default_factory=list) + approved_by: list[str] = field(default_factory=list) + summary: str = "" + + @classmethod + def from_analysis_result( + cls, + result: AnalysisResult, + cost: CallCost | None = None, + ) -> "ConsolidationResult": + """Convert an AnalysisResult to ConsolidationResult. + + Args: + result: AnalysisResult from feedback.py + cost: Optional call cost for token tracking + + Returns: + ConsolidationResult with consolidated data + """ + # Build thread_links from action groups + thread_links: dict[str, list[str]] = {} + for group in result.action_groups: + if group.thread_ids: + thread_links[group.id] = group.thread_ids + + # Convert cost to TokenUsage if available + token_usage = None + if cost: + # Import here to avoid circular dependency + try: + from .models import TokenUsage + token_usage = TokenUsage( + input_tokens=0, # Cost estimate doesn't have token breakdown + output_tokens=0, + total_cost=cost.total_cost, + ) + except ImportError: + pass + + return cls( + action_groups=result.action_groups, + guidance=result.guidance, + token_usage=token_usage, + thread_links=thread_links, + blocking_reviews=result.blocking_reviews, + approved_by=result.approved_by, + summary=result.summary, + ) + + +def consolidate_feedback( + agent_dir: Path, + filtered: "FilteredFeedback", + ctx: "PipelineContext", + thread_links: dict[str, list[str]] | None = None, +) -> ConsolidationResult: + """Consolidate filtered feedback into action groups using LLM. + + The consolidator: + 1. Groups related feedback by theme/location + 2. Deduplicates overlapping requests + 3. Prioritizes by severity + 4. 
Creates actionable descriptions + + Args: + agent_dir: Path to agent directory (for prompts) + filtered: Filtered feedback from filter stage + ctx: Pipeline context with PR info + thread_links: Optional mapping of review IDs to linked thread IDs + + Returns: + ConsolidationResult with action groups + + Implementation: + Uses subagent/consolidator with structured output + """ + # Build a DiscoveryContext-compatible object for analyze_feedback + from .discovery import DiscoveryContext + + # Create a minimal discovery context from pipeline context + # This adapts the Stage 8+ interface to the existing implementation + discovery_ctx = DiscoveryContext( + pr=ctx.pr_info, + pr_number=ctx.pr_info.pr_number, + skill_path=ctx.skill_path, + blocking_reviews=filtered.reviews if hasattr(filtered, 'reviews') else [], + actionable_reviews=[], + actionable_comments=filtered.comments if hasattr(filtered, 'comments') else [], + unresolved_threads=filtered.threads if hasattr(filtered, 'threads') else [], + ) + + # Call the existing analyze_feedback + result, cost = _analyze_feedback(agent_dir, discovery_ctx) + + # Convert to ConsolidationResult + consolidation = ConsolidationResult.from_analysis_result(result, cost) + + # Merge in provided thread_links if any + if thread_links: + consolidation.thread_links.update(thread_links) + + return consolidation + + +def _build_consolidation_prompt( + filtered: "FilteredFeedback", + ctx: "PipelineContext", + thread_links: dict[str, list[str]] | None, +) -> str: + """Build the prompt for consolidation LLM call. + + This is used internally by the consolidator sub-agent. + """ + import json + + lines = [ + f"## PR Information", + f"- PR Number: {ctx.pr_info.pr_number}", + f"- Skill: {ctx.skill_path}", + "", + "## Feedback to Consolidate", + ] + + # Add reviews + if hasattr(filtered, 'reviews') and filtered.reviews: + lines.append(f"\n### Reviews ({len(filtered.reviews)})") + for review in filtered.reviews: + lines.append(f"- {review.author}: {review.body[:200] if review.body else '(no body)'}...") + + # Add comments + if hasattr(filtered, 'comments') and filtered.comments: + lines.append(f"\n### Comments ({len(filtered.comments)})") + for comment in filtered.comments: + lines.append(f"- {comment.author}: {comment.body[:200]}...") + + # Add threads + if hasattr(filtered, 'threads') and filtered.threads: + lines.append(f"\n### Review Threads ({len(filtered.threads)})") + for thread in filtered.threads: + lines.append(f"- {thread.path}:{thread.line}") + + # Add thread links if available + if thread_links: + lines.append("\n### Cross-References") + lines.append(json.dumps(thread_links, indent=2)) + + lines.append("\n## Instructions") + lines.append("Consolidate similar feedback into action groups.") + + return "\n".join(lines) + + +def _parse_consolidation_response(response: dict) -> ConsolidationResult: + """Parse structured output from consolidation LLM.""" + from .feedback import ActionGroup, Location + + action_groups = [] + for group_data in response.get("action_groups", []): + locations = [ + Location( + file=loc.get("file", ""), + line=loc.get("line"), + thread_id=loc.get("thread_id"), + ) + for loc in group_data.get("locations", []) + ] + action_groups.append( + ActionGroup( + id=group_data.get("id", "unknown"), + action=group_data.get("action", "other"), + description=group_data.get("description", ""), + locations=locations, + priority=group_data.get("priority", "medium"), + type=group_data.get("type", "suggestion"), + ) + ) + + return ConsolidationResult( + 
action_groups=action_groups, + guidance=response.get("guidance", []), + blocking_reviews=response.get("blocking_reviews", []), + approved_by=response.get("approved_by", []), + summary=response.get("summary", ""), + ) diff --git a/.claude/agents/skill-pr-addresser/src/controllers/__init__.py b/.claude/agents/skill-pr-addresser/src/controllers/__init__.py new file mode 100644 index 0000000..65fab6a --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/controllers/__init__.py @@ -0,0 +1 @@ +"""Controllers for skill-pr-addresser.""" diff --git a/.claude/agents/skill-pr-addresser/src/costs.py b/.claude/agents/skill-pr-addresser/src/costs.py new file mode 100644 index 0000000..c6b6537 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/costs.py @@ -0,0 +1,277 @@ +"""Cost tracking for LLM API calls. + +Estimates costs based on model pricing and tracks cumulative spending. +""" + +import json +import logging +from dataclasses import dataclass, field +from datetime import datetime +from pathlib import Path + +log = logging.getLogger(__name__) + +# Approximate pricing per 1K tokens (as of 2024) +# These are estimates and should be updated when pricing changes +MODEL_PRICING = { + # Claude 3.5 Haiku + "claude-3-5-haiku-20241022": {"input": 0.001, "output": 0.005}, + # Claude Sonnet 4 + "claude-sonnet-4-20250514": {"input": 0.003, "output": 0.015}, + # Legacy names for compatibility + "haiku-35": {"input": 0.001, "output": 0.005}, + "sonnet-4": {"input": 0.003, "output": 0.015}, +} + +# Average tokens per call (rough estimates) +# Based on typical skill-pr-addresser usage patterns +ESTIMATED_TOKENS = { + "feedback-analyzer": {"input": 2000, "output": 500}, + "feedback-fixer": {"input": 3000, "output": 2000}, +} + + +@dataclass +class CallCost: + """Cost for a single API call.""" + + subagent: str + model: str + input_tokens: int + output_tokens: int + input_cost: float + output_cost: float + total_cost: float + timestamp: str = field(default_factory=lambda: datetime.now().isoformat()) + + +@dataclass +class SessionCosts: + """Cumulative costs for a session.""" + + session_id: str + pr_number: int + calls: list[CallCost] = field(default_factory=list) + total_input_tokens: int = 0 + total_output_tokens: int = 0 + total_cost: float = 0.0 + + def add_call(self, call: CallCost) -> None: + """Add a call to the session.""" + self.calls.append(call) + self.total_input_tokens += call.input_tokens + self.total_output_tokens += call.output_tokens + self.total_cost += call.total_cost + + def to_dict(self) -> dict: + """Convert to dictionary for serialization.""" + return { + "session_id": self.session_id, + "pr_number": self.pr_number, + "calls": [ + { + "subagent": c.subagent, + "model": c.model, + "input_tokens": c.input_tokens, + "output_tokens": c.output_tokens, + "input_cost": c.input_cost, + "output_cost": c.output_cost, + "total_cost": c.total_cost, + "timestamp": c.timestamp, + } + for c in self.calls + ], + "total_input_tokens": self.total_input_tokens, + "total_output_tokens": self.total_output_tokens, + "total_cost": self.total_cost, + } + + def save(self, sessions_dir: Path) -> None: + """Save costs to session directory.""" + session_path = sessions_dir / self.session_id + session_path.mkdir(parents=True, exist_ok=True) + costs_file = session_path / "costs.json" + costs_file.write_text(json.dumps(self.to_dict(), indent=2)) + + @classmethod + def load(cls, sessions_dir: Path, session_id: str) -> "SessionCosts | None": + """Load costs from session directory.""" + costs_file = sessions_dir / 
session_id / "costs.json" + if not costs_file.exists(): + return None + + try: + data = json.loads(costs_file.read_text()) + costs = cls( + session_id=data["session_id"], + pr_number=data["pr_number"], + total_input_tokens=data.get("total_input_tokens", 0), + total_output_tokens=data.get("total_output_tokens", 0), + total_cost=data.get("total_cost", 0.0), + ) + for call_data in data.get("calls", []): + costs.calls.append( + CallCost( + subagent=call_data["subagent"], + model=call_data["model"], + input_tokens=call_data["input_tokens"], + output_tokens=call_data["output_tokens"], + input_cost=call_data["input_cost"], + output_cost=call_data["output_cost"], + total_cost=call_data["total_cost"], + timestamp=call_data.get("timestamp", ""), + ) + ) + return costs + except Exception as e: + log.warning(f"Failed to load costs: {e}") + return None + + +def get_model_pricing(model: str) -> dict[str, float]: + """Get pricing for a model. + + Args: + model: Model name or ID + + Returns: + Dict with 'input' and 'output' prices per 1K tokens + """ + # Try exact match first + if model in MODEL_PRICING: + return MODEL_PRICING[model] + + # Try partial match + model_lower = model.lower() + for key, pricing in MODEL_PRICING.items(): + if key in model_lower or model_lower in key: + return pricing + + # Default to Sonnet pricing (most common) + return MODEL_PRICING["claude-sonnet-4-20250514"] + + +def estimate_call_cost( + subagent: str, + model: str, + input_tokens: int | None = None, + output_tokens: int | None = None, +) -> CallCost: + """Estimate the cost of a sub-agent call. + + Args: + subagent: Sub-agent name + model: Model used + input_tokens: Actual input tokens (or None for estimate) + output_tokens: Actual output tokens (or None for estimate) + + Returns: + CallCost with estimated values + """ + pricing = get_model_pricing(model) + + # Use estimates if actual tokens not provided + if input_tokens is None: + estimates = ESTIMATED_TOKENS.get(subagent, {"input": 2000, "output": 1000}) + input_tokens = estimates["input"] + if output_tokens is None: + estimates = ESTIMATED_TOKENS.get(subagent, {"input": 2000, "output": 1000}) + output_tokens = estimates["output"] + + input_cost = (input_tokens / 1000) * pricing["input"] + output_cost = (output_tokens / 1000) * pricing["output"] + + return CallCost( + subagent=subagent, + model=model, + input_tokens=input_tokens, + output_tokens=output_tokens, + input_cost=round(input_cost, 6), + output_cost=round(output_cost, 6), + total_cost=round(input_cost + output_cost, 6), + ) + + +def estimate_pr_cost( + num_iterations: int = 1, + use_haiku_for_analysis: bool = True, + use_sonnet_for_fixes: bool = True, +) -> float: + """Estimate total cost for addressing a PR. + + Args: + num_iterations: Expected number of iterations + use_haiku_for_analysis: Whether Haiku is used for analysis + use_sonnet_for_fixes: Whether Sonnet is used for fixes + + Returns: + Estimated total cost in USD + """ + total = 0.0 + + for _ in range(num_iterations): + # Analysis call + analysis_model = ( + "claude-3-5-haiku-20241022" + if use_haiku_for_analysis + else "claude-sonnet-4-20250514" + ) + total += estimate_call_cost("feedback-analyzer", analysis_model).total_cost + + # Fix call + fix_model = ( + "claude-sonnet-4-20250514" + if use_sonnet_for_fixes + else "claude-3-5-haiku-20241022" + ) + total += estimate_call_cost("feedback-fixer", fix_model).total_cost + + return round(total, 4) + + +def format_cost(cost: float) -> str: + """Format cost as currency string. 
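+
+    A rough worked example using the pricing and token estimates defined
+    above (approximations, not guaranteed rates):
+
+        >>> c = estimate_call_cost("feedback-analyzer", "claude-3-5-haiku-20241022")
+        >>> # 2000/1000 * 0.001 + 500/1000 * 0.005 = 0.002 + 0.0025
+        >>> format_cost(c.total_cost)
+        '$0.0045'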
+ + Args: + cost: Cost in USD + + Returns: + Formatted string like "$0.75" + """ + if cost < 0.01: + return f"${cost:.4f}" + return f"${cost:.2f}" + + +def get_cost_summary(costs: SessionCosts) -> str: + """Generate a human-readable cost summary. + + Args: + costs: Session costs + + Returns: + Multi-line summary string + """ + lines = [ + f"Cost Summary for PR #{costs.pr_number}", + "-" * 40, + f"Total calls: {len(costs.calls)}", + f"Input tokens: {costs.total_input_tokens:,}", + f"Output tokens: {costs.total_output_tokens:,}", + f"Total cost: {format_cost(costs.total_cost)}", + "", + "By sub-agent:", + ] + + # Group by subagent + by_subagent: dict[str, list[CallCost]] = {} + for call in costs.calls: + if call.subagent not in by_subagent: + by_subagent[call.subagent] = [] + by_subagent[call.subagent].append(call) + + for subagent, calls in by_subagent.items(): + subagent_cost = sum(c.total_cost for c in calls) + lines.append(f" {subagent}: {len(calls)} calls, {format_cost(subagent_cost)}") + + return "\n".join(lines) diff --git a/.claude/agents/skill-pr-addresser/src/cross_reference.py b/.claude/agents/skill-pr-addresser/src/cross_reference.py new file mode 100644 index 0000000..cc25dd3 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/cross_reference.py @@ -0,0 +1,231 @@ +# src/cross_reference.py +"""Detect cross-references between reviews and threads. + +Stage 9 implementation: Link reviews that mention specific lines +to their corresponding thread comments. +""" + +import re +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from .models import ReviewFeedback, ThreadFeedback + from .filter import FilteredFeedback + + +def extract_line_references(text: str) -> list[int]: + """Extract line number references from text. + + Detects patterns like: + - "line 42" + - "L42" + - "lines 10-20" + - "#L42" (GitHub link format) + - "at line 42" + + Args: + text: Text to search for line references + + Returns: + Sorted list of unique line numbers found + """ + if not text: + return [] + + patterns = [ + r"\bline\s+(\d+)\b", # "line 42" + r"\bL(\d+)\b", # "L42" + r"\blines?\s+(\d+)[-–](\d+)\b", # "lines 10-20" + r"#L(\d+)", # "#L42" (GitHub link) + r"\bat\s+line\s+(\d+)\b", # "at line 42" + ] + + lines: set[int] = set() + for pattern in patterns: + for match in re.finditer(pattern, text, re.IGNORECASE): + lines.add(int(match.group(1))) + if match.lastindex and match.lastindex >= 2: + try: + lines.add(int(match.group(2))) + except (IndexError, ValueError): + pass + + return sorted(lines) + + +def extract_file_references(text: str) -> list[str]: + """Extract file path references from text. + + Detects patterns like: + - "in SKILL.md" + - "see examples/foo.py" + - backtick-quoted paths: `path/to/file.py` + + Args: + text: Text to search for file references + + Returns: + List of file paths found + """ + if not text: + return [] + + patterns = [ + r"\bin\s+([A-Za-z_][A-Za-z0-9_./\-]+\.[a-z]+)\b", + r"\bsee\s+([A-Za-z_][A-Za-z0-9_./\-]+\.[a-z]+)\b", + r"`([A-Za-z_][A-Za-z0-9_./\-]+\.[a-z]+)`", + ] + + files: list[str] = [] + seen: set[str] = set() + + for pattern in patterns: + for match in re.finditer(pattern, text, re.IGNORECASE): + file_path = match.group(1) + if file_path not in seen: + files.append(file_path) + seen.add(file_path) + + return files + + +def link_reviews_to_threads( + reviews: list["ReviewFeedback"], + threads: list["ThreadFeedback"], +) -> dict[str, list[str]]: + """Link review IDs to related thread IDs based on line references. 
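+
+    The reference detection that feeds this linker is illustrated by a
+    doctest-style sketch of the helpers above:
+
+        >>> extract_line_references("See line 42 and L7 in `SKILL.md`")
+        [7, 42]
+        >>> extract_file_references("See line 42 and L7 in `SKILL.md`")
+        ['SKILL.md']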
+ + When a review body mentions "see line 42", find threads at that line + and link them to avoid double-processing the same feedback. + + Args: + reviews: List of review feedback items + threads: List of thread feedback items + + Returns: + Dict mapping review ID to list of related thread IDs + """ + links: dict[str, list[str]] = {} + + for review in reviews: + if not review.body: + continue + + referenced_lines = extract_line_references(review.body) + referenced_files = extract_file_references(review.body) + + if not referenced_lines and not referenced_files: + continue + + related_threads: list[str] = [] + for thread in threads: + # Match by line only + if thread.line and thread.line in referenced_lines: + related_threads.append(thread.id) + # Match by file + line + elif referenced_files and thread.path: + # Check if thread path matches any referenced file + path_match = any( + thread.path.endswith(f) or f in thread.path + for f in referenced_files + ) + if path_match and thread.line and thread.line in referenced_lines: + if thread.id not in related_threads: + related_threads.append(thread.id) + + if related_threads: + links[review.id] = related_threads + # Update review with detected references + review.references_lines = referenced_lines + review.references_files = referenced_files + + return links + + +def mark_linked_threads( + filtered: "FilteredFeedback", + links: dict[str, list[str]], +) -> None: + """Mark threads that are linked to reviews. + + Linked threads should be consolidated with their parent review + rather than processed separately. + + Args: + filtered: Filtered feedback to update + links: Dict mapping review ID to thread IDs + """ + linked_thread_ids: set[str] = set() + for thread_ids in links.values(): + linked_thread_ids.update(thread_ids) + + for filtered_thread in filtered.threads: + if filtered_thread.thread.id in linked_thread_ids: + # Find which review it's linked to + for review_id, thread_ids in links.items(): + if filtered_thread.thread.id in thread_ids: + filtered_thread.thread.linked_to_review = review_id + break + + +def find_duplicate_feedback( + filtered: "FilteredFeedback", +) -> dict[str, list[str]]: + """Find duplicate feedback items based on content similarity. + + Looks for: + - Reviews with very similar body text + - Comments that repeat the same request + - Threads at adjacent lines with similar content + + Args: + filtered: Filtered feedback + + Returns: + Dict mapping primary ID to list of duplicate IDs + """ + duplicates: dict[str, list[str]] = {} + + # Compare review bodies for similarity + for i, review1 in enumerate(filtered.reviews): + for review2 in filtered.reviews[i + 1 :]: + if _are_similar(review1.body, review2.body): + if review1.id not in duplicates: + duplicates[review1.id] = [] + duplicates[review1.id].append(review2.id) + + # Compare comment bodies + for i, comment1 in enumerate(filtered.comments): + for comment2 in filtered.comments[i + 1 :]: + if _are_similar(comment1.body, comment2.body): + if comment1.id not in duplicates: + duplicates[comment1.id] = [] + duplicates[comment1.id].append(comment2.id) + + return duplicates + + +def _are_similar(text1: str | None, text2: str | None, threshold: float = 0.8) -> bool: + """Check if two texts are similar using simple word overlap. 
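+
+    For example (the overlap ratio is measured against the shorter text):
+
+        >>> _are_similar("fix the typo in line 2", "fix the typo in line 2 please")
+        True
+        >>> _are_similar("add more tests", "update the documentation")
+        False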
+ + Args: + text1: First text + text2: Second text + threshold: Minimum overlap ratio (0-1) + + Returns: + True if texts are similar + """ + if not text1 or not text2: + return False + + words1 = set(text1.lower().split()) + words2 = set(text2.lower().split()) + + if not words1 or not words2: + return False + + overlap = len(words1 & words2) + total = min(len(words1), len(words2)) + + return (overlap / total) >= threshold if total > 0 else False diff --git a/.claude/agents/skill-pr-addresser/src/discovery.py b/.claude/agents/skill-pr-addresser/src/discovery.py new file mode 100644 index 0000000..062bdaf --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/discovery.py @@ -0,0 +1,460 @@ +"""PR and session discovery for skill-pr-addresser. + +Gathers all context needed before LLM processing. +""" + +import logging +import sys +from dataclasses import dataclass, field +from pathlib import Path + +# Add parent directory to path for shared library import +_agents_dir = Path(__file__).parent.parent.parent +if str(_agents_dir) not in sys.path: + sys.path.insert(0, str(_agents_dir)) + +from skill_agents_common.models import AgentSession +from skill_agents_common.session import ( + extract_linked_issues, + find_session_by_issue, + find_session_by_pr, + create_session_from_pr, + generate_session_id, +) +from skill_agents_common.worktree import get_or_create_worktree, WorktreeInfo + +from .exceptions import PRNotFoundError, PRClosedError, NoFeedbackError +from .github_pr import ( + PRDetails, + PendingFeedback, + Review, + Comment, + ReviewThread, + get_pr_details, + get_pr_reviews, + get_pr_comments, + get_review_threads, + get_pending_feedback, + infer_skill_from_files, +) + + +# Stage 7.5 interface aliases for compatibility with stages 8-13 +# These re-export github_pr functions with the interface names expected by the plan + + +def fetch_reviews(owner: str, repo: str, pr_number: int) -> list[dict]: + """Fetch all reviews on a PR. + + Stage 7.5 interface wrapper around get_pr_reviews(). + + Args: + owner: Repository owner + repo: Repository name + pr_number: PR number + + Returns: + List of review dicts from GitHub API + """ + reviews = get_pr_reviews(owner, repo, pr_number) + return [ + { + "id": r.id, + "state": r.state, + "body": r.body, + "author": {"login": r.author}, + "submittedAt": r.submitted_at, + } + for r in reviews + ] + + +def fetch_comments(owner: str, repo: str, pr_number: int) -> list[dict]: + """Fetch all issue comments on a PR. + + Stage 7.5 interface wrapper around get_pr_comments(). + + Args: + owner: Repository owner + repo: Repository name + pr_number: PR number + + Returns: + List of comment dicts from GitHub API + """ + comments = get_pr_comments(owner, repo, pr_number) + return [ + { + "id": c.id, + "body": c.body, + "author": {"login": c.author}, + "createdAt": c.created_at, + "url": c.url, + } + for c in comments + ] + + +def fetch_threads(owner: str, repo: str, pr_number: int) -> list[dict]: + """Fetch all review threads on a PR. + + Stage 7.5 interface wrapper around get_review_threads(). 
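+
+    Each returned dict mirrors get_review_threads() output reshaped to the
+    GraphQL field names; the values below are illustrative only:
+
+        {
+            "id": "<thread-node-id>",
+            "path": "SKILL.md",
+            "line": 42,
+            "isResolved": False,
+            "isOutdated": False,
+            "comments": {"nodes": [...]},
+        }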
+ + Args: + owner: Repository owner + repo: Repository name + pr_number: PR number + + Returns: + List of thread dicts from GitHub GraphQL + """ + threads = get_review_threads(owner, repo, pr_number) + return [ + { + "id": t.id, + "path": t.path, + "line": t.line, + "isResolved": t.is_resolved, + "isOutdated": t.is_outdated, + "comments": {"nodes": t.comments}, + } + for t in threads + ] + + +def discover_pr_info(pr_number: int, owner: str, repo: str) -> "PRInfo": + """Discover PR metadata from GitHub. + + Stage 7.5 interface for simple PR info lookup. + + Args: + pr_number: Pull request number + owner: Repository owner + repo: Repository name + + Returns: + PRInfo with PR metadata + """ + pr = get_pr_details(owner, repo, pr_number) + if not pr: + from .exceptions import PRNotFoundError + raise PRNotFoundError(f"PR #{pr_number} does not exist") + + return PRInfo( + pr_number=pr_number, + owner=owner, + repo=repo, + author="", # Not in PRDetails, would need separate fetch + branch=pr.branch, + base_branch=pr.base_branch, + title=pr.title, + worktree_path=Path("."), # Placeholder, set by caller + ) + + +@dataclass +class PRInfo: + """Information about a pull request (Stage 7.5 interface).""" + + pr_number: int + owner: str + repo: str + author: str + branch: str + base_branch: str + title: str + worktree_path: Path + + +log = logging.getLogger(__name__) + + +@dataclass +class DiscoveryContext: + """Context gathered during discovery phase. + + Contains all information needed to address PR feedback. + """ + + # PR information + pr: PRDetails + pr_number: int + skill_path: str | None + + # Session tracking + session: AgentSession | None = None + is_new_session: bool = False + + # Worktree + worktree: WorktreeInfo | None = None + + # Feedback to address (deterministically actionable) + blocking_reviews: list[Review] = field(default_factory=list) + actionable_reviews: list[Review] = field(default_factory=list) + actionable_comments: list[Comment] = field(default_factory=list) + unresolved_threads: list[ReviewThread] = field(default_factory=list) + + # Feedback needing substantive check (LLM decides if actionable) + pending_reviews: list[Review] = field(default_factory=list) + pending_comments: list[Comment] = field(default_factory=list) + + # Feedback after substantive check (populated later) + substantive_reviews: list[Review] = field(default_factory=list) + substantive_comments: list[Comment] = field(default_factory=list) + + # Tracking already-addressed feedback + addressed_ids: set[str] = field(default_factory=set) + + @property + def feedback_count(self) -> int: + """Total number of deterministically actionable feedback items.""" + return ( + len(self.blocking_reviews) + + len(self.actionable_reviews) + + len(self.actionable_comments) + + len(self.unresolved_threads) + + len(self.substantive_reviews) + + len(self.substantive_comments) + ) + + @property + def pending_count(self) -> int: + """Number of items needing substantive check.""" + return len(self.pending_reviews) + len(self.pending_comments) + + @property + def blocking_reviewers(self) -> list[str]: + """Reviewers who requested changes.""" + return [r.author for r in self.blocking_reviews] + + @property + def has_blocking_reviews(self) -> bool: + """Whether there are reviews requesting changes.""" + return len(self.blocking_reviews) > 0 + + @property + def needs_changes(self) -> bool: + """Whether the PR needs changes (has any actionable feedback).""" + return self.feedback_count > 0 + + @property + def needs_substantive_check(self) 
-> bool: + """Whether there's pending feedback needing LLM check.""" + return self.pending_count > 0 + + @property + def all_reviews(self) -> list[Review]: + """All actionable reviews (blocking + actionable + substantive).""" + return self.blocking_reviews + self.actionable_reviews + self.substantive_reviews + + @property + def all_comments(self) -> list[Comment]: + """All actionable comments.""" + return self.actionable_comments + self.substantive_comments + + def summary(self) -> str: + """Generate a summary of discovered context.""" + lines = [ + f"PR #{self.pr_number}: {self.pr.title}", + f" State: {self.pr.state}", + f" Review Decision: {self.pr.review_decision or 'None'}", + f" Skill: {self.skill_path or '(unknown)'}", + f" Blocking Reviews: {len(self.blocking_reviews)}", + f" Actionable Reviews: {len(self.actionable_reviews)}", + f" Actionable Comments: {len(self.actionable_comments)}", + f" Unresolved Threads: {len(self.unresolved_threads)}", + ] + + if self.pending_count > 0: + lines.append(f" Pending Substantive Check: {self.pending_count}") + + if self.substantive_reviews or self.substantive_comments: + lines.append( + f" Substantive (after LLM): {len(self.substantive_reviews)} reviews, " + f"{len(self.substantive_comments)} comments" + ) + + if self.blocking_reviewers: + lines.append(f" Blocking Reviewers: {', '.join(self.blocking_reviewers)}") + + if self.worktree: + lines.append(f" Worktree: {self.worktree.path}") + + if self.session: + lines.append(f" Session: {self.session.session_id}") + if self.is_new_session: + lines.append(" (newly created)") + + return "\n".join(lines) + + +def _load_addressed_ids(session: AgentSession) -> set[str]: + """Load IDs of already-addressed feedback from session. + + Args: + session: Agent session + + Returns: + Set of addressed feedback IDs + """ + addressed = session.results.get("addressed_feedback_ids", []) + return set(addressed) + + +def discover( + owner: str, + repo: str, + pr_number: int, + sessions_dir: Path, + worktree_base: Path, + repo_path: Path, + skill_path: str | None = None, + force: bool = False, +) -> DiscoveryContext: + """Gather all context needed for addressing review feedback. + + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + sessions_dir: Directory containing session data + worktree_base: Base directory for worktrees + repo_path: Path to the main repository + skill_path: Optional explicit skill path (otherwise inferred) + force: If True, proceed even without pending feedback + + Returns: + DiscoveryContext with all gathered information + + Raises: + PRNotFoundError: If PR does not exist + PRClosedError: If PR is already merged or closed + NoFeedbackError: If no feedback to address (unless force=True) + """ + log.info(f"Discovering context for PR #{pr_number}") + + # 1. Get PR details and validate + log.debug("Fetching PR details...") + pr = get_pr_details(owner, repo, pr_number) + if not pr: + raise PRNotFoundError(f"PR #{pr_number} does not exist") + + log.debug(f"PR state: {pr.state}, review_decision: {pr.review_decision}") + + if pr.state in ("MERGED", "CLOSED"): + raise PRClosedError(f"PR #{pr_number} is already {pr.state.lower()}") + + # 2. Infer skill path from changed files if not provided + if not skill_path: + skill_path = infer_skill_from_files(pr.changed_files) + if skill_path: + log.info(f"Inferred skill path: {skill_path}") + else: + log.warning("Could not infer skill path from changed files") + + # 3. 
Get pending feedback (structured result) + log.debug("Fetching pending feedback...") + feedback = get_pending_feedback(owner, repo, pr_number) + + log.info( + f"Found {len(feedback.blocking_reviews)} blocking reviews, " + f"{len(feedback.actionable_reviews)} actionable reviews, " + f"{len(feedback.actionable_comments)} actionable comments, " + f"{len(feedback.unresolved_threads)} unresolved threads" + ) + + if feedback.needs_substantive_check: + log.info( + f" + {len(feedback.pending_reviews)} reviews, " + f"{len(feedback.pending_comments)} comments need substantive check" + ) + + # Check if we have any deterministic feedback or pending checks + has_feedback = feedback.has_deterministic_feedback or feedback.needs_substantive_check + + if not has_feedback and not force: + raise NoFeedbackError(f"PR #{pr_number} has no pending feedback to address") + + # 4. Find or create session + log.debug("Looking up session...") + session = None + is_new_session = False + linked_issues = [] + + # Try to find by PR number first + session = find_session_by_pr(sessions_dir, pr_number) + + # If not found, try by linked issue + if not session: + linked_issues = extract_linked_issues(pr.body) + if linked_issues: + log.debug(f"Found linked issues: {linked_issues}") + session = find_session_by_issue(sessions_dir, linked_issues[0]) + + # If still not found, create new session + if not session: + log.info("No existing session found, creating new one") + session = create_session_from_pr( + pr_number=pr_number, + pr_branch=pr.branch, + issue_number=linked_issues[0] if linked_issues else None, + skill_path=skill_path or "", + worktree_path=str(worktree_base / f"pr-{pr_number}"), + repo_owner=owner, + repo_name=repo, + ) + is_new_session = True + session.save(sessions_dir) + log.info(f"Created session: {session.session_id}") + else: + log.info(f"Found existing session: {session.session_id}") + + # Load already-addressed feedback IDs + addressed_ids = _load_addressed_ids(session) + if addressed_ids: + log.info(f"Previously addressed: {len(addressed_ids)} items") + + # Filter out already-addressed feedback + def not_addressed(item) -> bool: + item_id = getattr(item, "id", None) + return item_id is None or item_id not in addressed_ids + + blocking_reviews = [r for r in feedback.blocking_reviews if not_addressed(r)] + actionable_reviews = [r for r in feedback.actionable_reviews if not_addressed(r)] + actionable_comments = [c for c in feedback.actionable_comments if not_addressed(c)] + pending_reviews = [r for r in feedback.pending_reviews if not_addressed(r)] + pending_comments = [c for c in feedback.pending_comments if not_addressed(c)] + # Threads don't have persistent IDs in our model, so always include + unresolved_threads = feedback.unresolved_threads + + # 5. 
Get or create worktree + log.debug("Setting up worktree...") + worktree = get_or_create_worktree( + repo_path=repo_path, + worktree_base=worktree_base, + branch_name=pr.branch, + base_branch=pr.base_branch, + identifier=f"pr-{pr_number}", + ) + log.info(f"Worktree ready: {worktree.path}") + + # Update session with worktree path if needed + if session.worktree_path != str(worktree.path): + session.worktree_path = str(worktree.path) + session.save(sessions_dir) + + return DiscoveryContext( + pr=pr, + pr_number=pr_number, + skill_path=skill_path, + session=session, + is_new_session=is_new_session, + worktree=worktree, + blocking_reviews=blocking_reviews, + actionable_reviews=actionable_reviews, + actionable_comments=actionable_comments, + unresolved_threads=unresolved_threads, + pending_reviews=pending_reviews, + pending_comments=pending_comments, + addressed_ids=addressed_ids, + ) diff --git a/.claude/agents/skill-pr-addresser/src/dry_run.py b/.claude/agents/skill-pr-addresser/src/dry_run.py new file mode 100644 index 0000000..1315a3a --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/dry_run.py @@ -0,0 +1,220 @@ +# src/dry_run.py +"""Dry-run mode support for previewing changes. + +Stage 10 implementation: Preview pipeline stages without making changes. +""" + +from dataclasses import dataclass, field +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from .filter import FilteredFeedback + + +@dataclass +class DryRunSummary: + """Summary of what would be done in a full run.""" + + pr_number: int + discovery_summary: dict = field(default_factory=dict) + filter_summary: dict = field(default_factory=dict) + consolidation_summary: dict | None = None + plan_summary: dict | None = None + + def to_text(self) -> str: + """Format as human-readable text.""" + lines = [ + f"DRY RUN - Previewing what would be addressed for PR #{self.pr_number}", + "", + "=== Discovery ===", + f" Reviews: {self.discovery_summary.get('reviews', 0)}", + f" Comments: {self.discovery_summary.get('comments', 0)}", + f" Threads: {self.discovery_summary.get('threads', 0)} unresolved", + "", + "=== Filter (Is-New) ===", + f" New items: {self.filter_summary.get('item_count', 0)}", + f" Unchanged: {len(self.filter_summary.get('skipped_unchanged', []))} (already addressed)", + f" Resolved: {len(self.filter_summary.get('skipped_resolved', []))}", + "", + ] + + if self.consolidation_summary: + lines.extend([ + "=== Consolidation ===", + f" Action Groups: {len(self.consolidation_summary.get('action_groups', []))}", + ]) + for group in self.consolidation_summary.get("action_groups", []): + lines.append( + f" - {group['id']}: {group['type']} ({group['location_count']} locations)" + ) + + lines.extend([ + f" Guidance: {len(self.consolidation_summary.get('guidance', []))} items", + "", + ]) + + if self.plan_summary: + lines.extend([ + "=== Execution Plan ===", + ]) + for i, step in enumerate(self.plan_summary.get("steps", []), 1): + lines.append( + f" {i}. 
[{step['priority']}] {step['group_id']}: {step['description']}" + ) + + lines.extend([ + "", + f"Would address {self.plan_summary.get('total_items', 0)} feedback items " + f"in {len(self.plan_summary.get('steps', []))} action groups.", + ]) + + lines.append("") + lines.append("No changes made (dry run).") + + return "\n".join(lines) + + def to_dict(self) -> dict: + """Convert to dictionary for JSON output.""" + return { + "pr_number": self.pr_number, + "discovery": self.discovery_summary, + "filter": self.filter_summary, + "consolidation": self.consolidation_summary, + "plan": self.plan_summary, + } + + +def run_dry_run( + addresser, + pr_number: int, + stop_after: str = "plan", +) -> DryRunSummary: + """Execute pipeline stages up to stop_after for preview. + + Args: + addresser: Addresser instance + pr_number: PR to analyze + stop_after: Stage to stop after (discovery, filter, consolidate, plan) + + Returns: + DryRunSummary with stage outputs + """ + summary = DryRunSummary( + pr_number=pr_number, + discovery_summary={}, + filter_summary={}, + ) + + # Run discovery + ctx = addresser.run_discovery(pr_number) + summary.discovery_summary = { + "reviews": len(ctx.raw_reviews), + "comments": len(ctx.raw_comments), + "threads": len(ctx.raw_threads), + } + + if stop_after == "discovery": + return summary + + # Run filter + filtered = addresser.run_filter(ctx) + summary.filter_summary = filtered.summary() + summary.filter_summary["item_count"] = filtered.item_count + + if stop_after == "filter": + return summary + + # Run consolidation + consolidated = addresser.run_consolidate(ctx, filtered) + summary.consolidation_summary = { + "action_groups": [ + { + "id": g.id, + "type": g.type, + "location_count": len(g.locations), + } + for g in consolidated.action_groups + ], + "guidance": consolidated.guidance if hasattr(consolidated, "guidance") else [], + } + + if stop_after == "consolidate": + return summary + + # Run planning + plan = addresser.run_plan(ctx, consolidated) + summary.plan_summary = { + "steps": [ + { + "group_id": s.group_id, + "priority": s.priority, + "description": s.description if hasattr(s, "description") else "", + } + for s in plan.steps + ], + "total_items": plan.total_items if hasattr(plan, "total_items") else len(plan.steps), + } + + return summary + + +class DryRunMode: + """Context for dry-run execution mode.""" + + def __init__(self, enabled: bool = False, stop_after: str = "plan"): + """Initialize dry-run mode. + + Args: + enabled: Whether dry-run is enabled + stop_after: Stage to stop after + """ + self.enabled = enabled + self.stop_after = stop_after + self._actions: list[dict] = [] + + def record_action(self, action_type: str, **details) -> None: + """Record an action that would be taken. + + Args: + action_type: Type of action (commit, resolve, comment, etc.) 
+ **details: Action-specific details + """ + if self.enabled: + self._actions.append({ + "type": action_type, + **details, + }) + + @property + def recorded_actions(self) -> list[dict]: + """Get all recorded actions.""" + return self._actions.copy() + + def would_commit(self, message: str, files: list[str]) -> None: + """Record a commit that would be made.""" + self.record_action("commit", message=message, files=files) + + def would_resolve(self, thread_id: str) -> None: + """Record a thread that would be resolved.""" + self.record_action("resolve_thread", thread_id=thread_id) + + def would_comment(self, pr_number: int, body: str) -> None: + """Record a comment that would be posted.""" + self.record_action("comment", pr_number=pr_number, body=body[:100] + "...") + + def would_push(self, branch: str) -> None: + """Record a push that would be made.""" + self.record_action("push", branch=branch) + + def get_summary_text(self) -> str: + """Get summary of recorded actions.""" + if not self._actions: + return "No actions would be taken." + + lines = ["Actions that would be taken:", ""] + for i, action in enumerate(self._actions, 1): + action_type = action.pop("type") + details = ", ".join(f"{k}={v}" for k, v in action.items()) + lines.append(f" {i}. {action_type}: {details}") + + return "\n".join(lines) diff --git a/.claude/agents/skill-pr-addresser/src/exceptions.py b/.claude/agents/skill-pr-addresser/src/exceptions.py new file mode 100644 index 0000000..72e242c --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/exceptions.py @@ -0,0 +1,55 @@ +"""Custom exceptions for skill-pr-addresser.""" + + +class AddresserError(Exception): + """Base exception for skill-pr-addresser.""" + + exit_code = 1 + + +class PRNotFoundError(AddresserError): + """PR does not exist.""" + + pass + + +class PRClosedError(AddresserError): + """PR is already merged or closed.""" + + exit_code = 0 # Not an error - just nothing to do + + +class NoFeedbackError(AddresserError): + """No feedback to address.""" + + exit_code = 0 # Not an error - just nothing to do + + +class WorktreeError(AddresserError): + """Worktree operation failed.""" + + pass + + +class ConflictError(AddresserError): + """Git conflict detected.""" + + pass + + +class IterationLimitError(AddresserError): + """Max iterations reached without approval.""" + + pass + + +class SessionNotFoundError(AddresserError): + """Session not found.""" + + pass + + +class RateLimitError(AddresserError): + """API rate limit exceeded.""" + + pass diff --git a/.claude/agents/skill-pr-addresser/src/ext/__init__.py b/.claude/agents/skill-pr-addresser/src/ext/__init__.py new file mode 100644 index 0000000..5a759eb --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/ext/__init__.py @@ -0,0 +1 @@ +"""Cement extensions for skill-pr-addresser.""" diff --git a/.claude/agents/skill-pr-addresser/src/ext_toml.py b/.claude/agents/skill-pr-addresser/src/ext_toml.py new file mode 100644 index 0000000..b643f40 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/ext_toml.py @@ -0,0 +1,68 @@ +"""TOML configuration extension for Cement. + +Allows using TOML files for configuration instead of INI files. +""" + +from pathlib import Path +from typing import Any + +import toml +from cement.core.config import ConfigHandler +from cement.ext.ext_configparser import ConfigParserConfigHandler + + +class TomlConfigHandler(ConfigParserConfigHandler): + """Config handler that reads TOML files. + + Extends the default ConfigParser handler to support TOML format. 
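+
+    For an illustrative config file containing (section and key names here
+    are hypothetical, not a documented schema):
+
+        [skill-pr-addresser]
+        max_iterations = 5
+
+    toml.load() yields {"skill-pr-addresser": {"max_iterations": 5}}, which
+    parse_file() then hands to self.merge().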
+ """ + + class Meta: + label = "toml" + + def _parse_file(self, file_path: str) -> dict[str, Any]: + """Parse a TOML file into a dictionary. + + Args: + file_path: Path to the TOML file + + Returns: + Dictionary with configuration values + """ + path = Path(file_path).expanduser() + if not path.exists(): + return {} + + with open(path, "r") as f: + return toml.load(f) + + def parse_file(self, file_path: str) -> bool: + """Parse a TOML file and merge into configuration. + + Args: + file_path: Path to the TOML file + + Returns: + True if file was parsed, False otherwise + """ + path = Path(file_path).expanduser() + + if not path.exists(): + return False + + try: + data = self._parse_file(str(path)) + self.merge(data) + return True + except toml.TomlDecodeError as e: + self.app.log.warning(f"Failed to parse TOML config {file_path}: {e}") + return False + + +def load(app): + """Load the TOML extension. + + Args: + app: The Cement application instance + """ + app.handler.register(TomlConfigHandler) diff --git a/.claude/agents/skill-pr-addresser/src/feedback.py b/.claude/agents/skill-pr-addresser/src/feedback.py new file mode 100644 index 0000000..57948e6 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/feedback.py @@ -0,0 +1,1203 @@ +"""Feedback analysis and fixing using LLM sub-agents. + +This module orchestrates the feedback loop: +1. Analyze feedback using feedback-analyzer sub-agent +2. Fix issues using feedback-fixer sub-agent +3. Track what was addressed and what was skipped +""" + +import json +import logging +import os +import re +import select +import subprocess +import sys +import time +from dataclasses import dataclass, field +from pathlib import Path +from threading import Thread +from queue import Queue, Empty + +import yaml + +# Add parent directory to path for shared library import +_agents_dir = Path(__file__).parent.parent.parent +if str(_agents_dir) not in sys.path: + sys.path.insert(0, str(_agents_dir)) + +from skill_agents_common.models import Model, SubagentConfig, SubagentResult + +from .discovery import DiscoveryContext +from .github_pr import Review, Comment +from .tracing import span, record_subagent_call +from .costs import estimate_call_cost, CallCost + +log = logging.getLogger(__name__) + + +@dataclass +class DebugConfig: + """Global debug configuration for sub-agent execution. 
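+
+    Typically populated from environment variables rather than constructed
+    directly; a rough sketch of from_env() behaviour:
+
+        SKILL_PR_VERBOSE=1      -> DebugConfig(interactive=False, verbose=True)
+        SKILL_PR_INTERACTIVE=1  -> DebugConfig(interactive=True, verbose=False)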
+ + Set these flags to control how sub-agents are run: + - interactive: Run in TUI mode (no --print flag) + - verbose: Stream output in real-time while capturing + """ + interactive: bool = False + verbose: bool = False + + @classmethod + def from_env(cls) -> "DebugConfig": + """Load config from environment variables.""" + return cls( + interactive=os.environ.get("SKILL_PR_INTERACTIVE", "").lower() in ("1", "true"), + verbose=os.environ.get("SKILL_PR_VERBOSE", "").lower() in ("1", "true"), + ) + + +# Global debug config - can be modified by CLI +debug_config = DebugConfig() + + +def set_debug_mode(interactive: bool = False, verbose: bool = False) -> None: + """Set global debug mode for sub-agent execution.""" + global debug_config + debug_config = DebugConfig(interactive=interactive, verbose=verbose) + + +@dataclass +class FeedbackItem: + """A single feedback item extracted from PR reviews (legacy format).""" + + id: str + type: str # change_request, suggestion, question, nitpick + file: str | None + line: int | None + description: str + priority: str # high, medium, low + resolved: bool + suggested_fix: str | None = None + + +@dataclass +class Location: + """A specific location in the codebase where feedback applies.""" + + file: str + line: int | None = None + thread_id: str | None = None + + +@dataclass +class ActionGroup: + """A consolidated group of similar feedback items. + + Multiple review comments requesting the same type of change + are consolidated into a single ActionGroup with multiple locations. + """ + + id: str + action: str # move_to_examples, move_to_references, add_section, etc. + description: str + locations: list[Location] + priority: str # high, medium, low + type: str # change_request, suggestion, nitpick + + @property + def location_count(self) -> int: + """Number of locations this action applies to.""" + return len(self.locations) + + @property + def thread_ids(self) -> list[str]: + """Get all thread IDs for tracking addressed feedback.""" + return [loc.thread_id for loc in self.locations if loc.thread_id] + + +@dataclass +class ExecutionStep: + """A step in the execution plan.""" + + order: int + group_id: str + rationale: str + + +@dataclass +class AnalysisResult: + """Result from analyzing PR feedback. + + Contains both the new consolidated format (action_groups) and + legacy format (feedback_items) for backwards compatibility. 
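+
+    Batching sketch: with 7 action groups and the default batch size of 3,
+    batch_count is (7 + 2) // 3 == 3 and get_batch(1) returns the ordered
+    groups at indices 3-5.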
+ """ + + # New consolidated format + guidance: list[str] = field(default_factory=list) + action_groups: list[ActionGroup] = field(default_factory=list) + execution_plan: list[ExecutionStep] = field(default_factory=list) + + # Metadata + blocking_reviews: list[str] = field(default_factory=list) + approved_by: list[str] = field(default_factory=list) + summary: str = "" + + # Legacy format (populated for backwards compatibility) + feedback_items: list[FeedbackItem] = field(default_factory=list) + + @property + def actionable_count(self) -> int: + """Count of action groups to process.""" + if self.action_groups: + return len(self.action_groups) + # Fallback to legacy format + return sum( + 1 + for item in self.feedback_items + if not item.resolved and item.type in ("change_request", "suggestion", "nitpick") + ) + + @property + def has_blocking_feedback(self) -> bool: + """Whether there's feedback that blocks the PR.""" + if self.blocking_reviews: + return True + if self.action_groups: + return any(g.type == "change_request" for g in self.action_groups) + # Fallback to legacy format + return any( + item.type == "change_request" and not item.resolved + for item in self.feedback_items + ) + + @property + def ordered_groups(self) -> list[ActionGroup]: + """Get action groups in execution order.""" + if not self.execution_plan: + return self.action_groups + + # Build order map + order_map = {step.group_id: step.order for step in self.execution_plan} + return sorted( + self.action_groups, + key=lambda g: order_map.get(g.id, 999) + ) + + def get_batch(self, batch_num: int, batch_size: int = 3) -> list[ActionGroup]: + """Get a batch of action groups for processing. + + Args: + batch_num: Zero-indexed batch number + batch_size: Number of groups per batch + + Returns: + List of ActionGroups for this batch + """ + groups = self.ordered_groups + start = batch_num * batch_size + end = start + batch_size + return groups[start:end] + + @property + def batch_count(self) -> int: + """Number of batches needed (with default batch size of 3).""" + return (len(self.action_groups) + 2) // 3 # Ceiling division + + +@dataclass +class FixResult: + """Result from fixing feedback items.""" + + addressed: list[dict] = field(default_factory=list) + skipped: list[dict] = field(default_factory=list) + files_modified: list[str] = field(default_factory=list) + lines_added: int = 0 + lines_removed: int = 0 + + @property + def success_rate(self) -> float: + """Percentage of items successfully addressed.""" + total = len(self.addressed) + len(self.skipped) + if total == 0: + return 1.0 + return len(self.addressed) / total + + +@dataclass +class SubstantiveCheckResult: + """Result from checking if pending feedback is substantive.""" + + substantive_reviews: list[Review] = field(default_factory=list) + substantive_comments: list[Comment] = field(default_factory=list) + not_substantive_ids: list[str] = field(default_factory=list) + + @property + def has_substantive(self) -> bool: + """Whether any feedback was found to be substantive.""" + return len(self.substantive_reviews) > 0 or len(self.substantive_comments) > 0 + + +def _stream_output(proc: subprocess.Popen, output_lines: list[str]) -> None: + """Stream process output to terminal while capturing it.""" + try: + for line in iter(proc.stdout.readline, ""): + if not line: + break + print(line, end="", flush=True) + output_lines.append(line) + except Exception: + pass + + +def run_subagent( + agent_dir: Path, + name: str, + task: str, + working_dir: Path, + model_override: Model | 
None = None, +) -> tuple[SubagentResult, CallCost | None]: + """Run a sub-agent and return result with cost tracking. + + Args: + agent_dir: Path to the agent directory (skill-pr-addresser) + name: Sub-agent name (e.g., "feedback-analyzer") + task: Task description to pass to the sub-agent + working_dir: Working directory for the sub-agent + model_override: Optional model to use instead of config default + + Returns: + Tuple of (SubagentResult with output and metadata, CallCost estimate) + + Debug modes (set via set_debug_mode() or environment variables): + - interactive: Run in TUI mode for real-time watching + - verbose: Stream output to terminal while capturing + """ + start_time = time.time() + + subagent_dir = agent_dir / "subagents" / name + prompt_file = subagent_dir / "prompt.md" + config_file = subagent_dir / "config.yml" + + if not prompt_file.exists(): + return SubagentResult( + name=name, + model=None, + output="", + exit_code=1, + duration_seconds=0, + error=f"Sub-agent prompt not found: {prompt_file}", + ), None + + prompt = prompt_file.read_text() + config = yaml.safe_load(config_file.read_text()) if config_file.exists() else {} + + # Determine model + if model_override: + model = model_override + elif config.get("model"): + try: + model = Model.from_string(config["model"]) + if model is None: + model = Model.SONNET_4 + except ValueError: + model = Model.SONNET_4 + else: + model = Model.SONNET_4 + + # Build full prompt + full_prompt = f"{prompt}\n\n## Current Task\n\n{task}" + + # Build command based on debug mode + max_turns = config.get("max_turns", 30) # Prevent infinite loops + cmd = [ + "claude", + "--model", + model.value, + "--max-turns", + str(max_turns), + ] + + # In interactive mode, don't use --print (run TUI) + # In normal/verbose mode, use --print for programmatic output + if not debug_config.interactive: + cmd.extend(["--print", "--output-format", "json"]) + + cmd.extend(["-p", full_prompt]) + + # Add allowed tools if specified + allowed_tools = config.get("allowed_tools", []) + if allowed_tools: + cmd.extend(["--allowedTools", ",".join(allowed_tools)]) + + log.debug(f"Running sub-agent {name} with model {model.value}") + log.debug(f"Working directory: {working_dir}") + + if debug_config.interactive: + log.info(f"[INTERACTIVE] Running {name} in TUI mode...") + log.warning("[INTERACTIVE] Workflow will NOT complete - output cannot be parsed from TUI") + log.warning("[INTERACTIVE] Use --stream instead for debugging with workflow completion") + elif debug_config.verbose: + log.info(f"[VERBOSE] Streaming output from {name}...") + + # Run with tracing + with span(f"subagent.{name}", {"model": model.value}): + try: + # Interactive mode: run TUI, no output capture + if debug_config.interactive: + result = subprocess.run( + cmd, + cwd=working_dir, + timeout=config.get("timeout", 600), + ) + duration = time.time() - start_time + + # In interactive mode, we don't get JSON output + cost = estimate_call_cost(name, model.value) + record_subagent_call( + name=name, + model=model.value, + duration_seconds=duration, + success=result.returncode == 0, + error=None, + ) + + return SubagentResult( + name=name, + model=model, + output="[Interactive mode - output shown in TUI]", + exit_code=result.returncode, + duration_seconds=duration, + parsed_output=None, + error=None if result.returncode == 0 else "Check TUI for errors", + ), cost + + # Verbose mode: stream output while capturing + elif debug_config.verbose: + output_lines: list[str] = [] + stderr_lines: list[str] = [] + + proc 
= subprocess.Popen( + cmd, + cwd=working_dir, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + text=True, + bufsize=1, # Line buffered + ) + + # Stream stdout in a thread + stdout_thread = Thread(target=_stream_output, args=(proc, output_lines)) + stdout_thread.start() + + # Wait for completion with timeout + timeout = config.get("timeout", 600) + try: + proc.wait(timeout=timeout) + except subprocess.TimeoutExpired: + proc.kill() + stdout_thread.join(timeout=1) + raise + + stdout_thread.join() + stderr_output = proc.stderr.read() if proc.stderr else "" + + duration = time.time() - start_time + output_text = "".join(output_lines) + + # Parse JSON output + parsed_output = None + if proc.returncode == 0: + try: + json_response = json.loads(output_text) + output_text = json_response.get("result", output_text) + parsed_output = _extract_json(output_text) + except json.JSONDecodeError: + pass + + cost = estimate_call_cost(name, model.value) + record_subagent_call( + name=name, + model=model.value, + duration_seconds=duration, + success=proc.returncode == 0, + error=stderr_output if proc.returncode != 0 else None, + ) + + return SubagentResult( + name=name, + model=model, + output=output_text, + exit_code=proc.returncode, + duration_seconds=duration, + parsed_output=parsed_output, + error=stderr_output if proc.returncode != 0 else None, + ), cost + + # Normal mode: capture output silently + else: + result = subprocess.run( + cmd, + cwd=working_dir, + capture_output=True, + text=True, + timeout=config.get("timeout", 600), + ) + + duration = time.time() - start_time + + # Parse JSON output + parsed_output = None + output_text = result.stdout + + if result.returncode == 0: + try: + # Claude --output-format json wraps in {"result": ...} + json_response = json.loads(output_text) + output_text = json_response.get("result", output_text) + + # Try to extract JSON from the response text + parsed_output = _extract_json(output_text) + except json.JSONDecodeError: + pass + + # Estimate cost + cost = estimate_call_cost(name, model.value) + + # Record for tracing + record_subagent_call( + name=name, + model=model.value, + duration_seconds=duration, + success=result.returncode == 0, + error=result.stderr if result.returncode != 0 else None, + ) + + return SubagentResult( + name=name, + model=model, + output=output_text, + exit_code=result.returncode, + duration_seconds=duration, + parsed_output=parsed_output, + error=result.stderr if result.returncode != 0 else None, + ), cost + + except subprocess.TimeoutExpired: + duration = time.time() - start_time + record_subagent_call( + name=name, + model=model.value, + duration_seconds=duration, + success=False, + error=f"Timeout after {config.get('timeout', 600)}s", + ) + return SubagentResult( + name=name, + model=model, + output="", + exit_code=1, + duration_seconds=duration, + error=f"Sub-agent timed out after {config.get('timeout', 600)}s", + ), None + except Exception as e: + duration = time.time() - start_time + record_subagent_call( + name=name, + model=model.value, + duration_seconds=duration, + success=False, + error=str(e), + ) + return SubagentResult( + name=name, + model=model, + output="", + exit_code=1, + duration_seconds=duration, + error=str(e), + ), None + + +def _extract_json(text: str) -> dict | None: + """Extract JSON object from text, handling markdown code fences and nested objects.""" + if not text: + return None + + # Try direct parse first + try: + result = json.loads(text) + if isinstance(result, dict): + return result + except 
json.JSONDecodeError: + pass + + # Try to find JSON in markdown code fence + patterns = [ + r"```json\s*\n(.*?)\n```", + r"```\s*\n(.*?)\n```", + ] + + for pattern in patterns: + match = re.search(pattern, text, re.DOTALL) + if match: + try: + result = json.loads(match.group(1)) + if isinstance(result, dict): + return result + except (json.JSONDecodeError, IndexError): + continue + + # Try to find a JSON object by finding balanced braces + # This handles nested objects properly + start_idx = text.find("{") + if start_idx == -1: + return None + + # Find matching closing brace + brace_count = 0 + for i, char in enumerate(text[start_idx:], start=start_idx): + if char == "{": + brace_count += 1 + elif char == "}": + brace_count -= 1 + if brace_count == 0: + # Found complete object + try: + result = json.loads(text[start_idx : i + 1]) + if isinstance(result, dict): + return result + except json.JSONDecodeError: + pass + break + + return None + + +def analyze_feedback( + agent_dir: Path, + ctx: DiscoveryContext, +) -> tuple[AnalysisResult, CallCost | None]: + """Analyze feedback using feedback-analyzer sub-agent. + + Args: + agent_dir: Path to the agent directory + ctx: Discovery context with PR information + + Returns: + Tuple of (AnalysisResult with structured feedback items, CallCost estimate) + """ + log.info("Analyzing feedback...") + + # Build feedback data for the sub-agent + # Use all_reviews (blocking + actionable + substantive) + reviews_data = [ + { + "author": r.author, + "state": r.state, + "body": r.body, + "submitted_at": r.submitted_at, + } + for r in ctx.all_reviews + ] + + # Use all_comments (actionable + substantive) + comments_data = [ + { + "id": c.id, + "author": c.author, + "body": c.body, + "created_at": c.created_at, + } + for c in ctx.all_comments + ] + + threads_data = [ + { + "id": t.id, + "path": t.path, + "line": t.line, + "is_resolved": t.is_resolved, + "is_outdated": t.is_outdated, + "author": t.author, + "comments": t.comments, + } + for t in ctx.unresolved_threads + ] + + task = f"""Analyze the following feedback for PR #{ctx.pr_number}: + +## PR Information +- Title: {ctx.pr.title} +- Skill: {ctx.skill_path} +- Review Decision: {ctx.pr.review_decision} + +## Reviews +{json.dumps(reviews_data, indent=2)} + +## Comments +{json.dumps(comments_data, indent=2)} + +## Unresolved Review Threads +{json.dumps(threads_data, indent=2)} + +Consolidate similar feedback, separate guidance from actions, and create an execution plan. 
+""" + + result, cost = run_subagent(agent_dir, "feedback-analyzer", task, Path(ctx.worktree.path)) + + if not result.success: + log.error(f"Feedback analysis failed: {result.error}") + return AnalysisResult( + blocking_reviews=[r.author for r in ctx.blocking_reviews], + summary=f"Analysis failed: {result.error}", + ), cost + + parsed = result.parsed_output + if not parsed: + log.warning("Could not parse feedback analysis output") + # Log raw output for debugging (truncated) + raw_output = result.output[:500] if result.output else "(empty)" + log.debug(f"Raw sub-agent output (first 500 chars): {raw_output}") + return AnalysisResult( + blocking_reviews=[r.author for r in ctx.blocking_reviews], + summary="Could not parse analysis output", + ), cost + + # Parse the new consolidated format + action_groups = [] + for group_data in parsed.get("action_groups", []): + try: + locations = [ + Location( + file=loc.get("file", ""), + line=loc.get("line"), + thread_id=loc.get("thread_id"), + ) + for loc in group_data.get("locations", []) + ] + action_groups.append( + ActionGroup( + id=group_data.get("id", "unknown"), + action=group_data.get("action", "other"), + description=group_data.get("description", ""), + locations=locations, + priority=group_data.get("priority", "medium"), + type=group_data.get("type", "suggestion"), + ) + ) + except Exception as e: + log.warning(f"Failed to parse action group: {e}") + + # Parse execution plan + execution_plan = [] + for step_data in parsed.get("execution_plan", []): + try: + execution_plan.append( + ExecutionStep( + order=step_data.get("order", 999), + group_id=step_data.get("group_id", ""), + rationale=step_data.get("rationale", ""), + ) + ) + except Exception as e: + log.warning(f"Failed to parse execution step: {e}") + + # Also parse legacy format for backwards compatibility + legacy_items = [] + for item_data in parsed.get("feedback_items", []): + try: + legacy_items.append( + FeedbackItem( + id=item_data.get("id", "unknown"), + type=item_data.get("type", "suggestion"), + file=item_data.get("file"), + line=item_data.get("line"), + description=item_data.get("description", ""), + priority=item_data.get("priority", "medium"), + resolved=item_data.get("resolved", False), + suggested_fix=item_data.get("suggested_fix"), + ) + ) + except Exception as e: + log.warning(f"Failed to parse legacy feedback item: {e}") + + log.info( + f"Analyzed feedback: {len(action_groups)} action groups, " + f"{len(parsed.get('guidance', []))} guidance items, " + f"{len(execution_plan)} execution steps" + ) + + return AnalysisResult( + guidance=parsed.get("guidance", []), + action_groups=action_groups, + execution_plan=execution_plan, + blocking_reviews=parsed.get("blocking_reviews", []), + approved_by=parsed.get("approved_by", []), + summary=parsed.get("summary", ""), + feedback_items=legacy_items, + ), cost + + +def fix_action_group( + agent_dir: Path, + ctx: DiscoveryContext, + group: ActionGroup, + guidance: list[str], + model: Model = Model.SONNET_4, +) -> tuple[FixResult, CallCost | None]: + """Fix a single action group using feedback-fixer sub-agent. 
+ + Args: + agent_dir: Path to the agent directory + ctx: Discovery context with worktree path + group: Action group to fix + guidance: List of guidance strings to include + model: Model to use for fixing + + Returns: + Tuple of (FixResult with addressed and skipped items, CallCost estimate) + """ + log.info(f"Fixing action group '{group.id}' ({group.action}) with {model.value}...") + + # Build locations data + locations_data = [ + { + "file": loc.file, + "line": loc.line, + "thread_id": loc.thread_id, + } + for loc in group.locations + ] + + # Build guidance section + guidance_section = "" + if guidance: + guidance_section = "\n## Guidance (apply to all changes)\n" + for g in guidance: + guidance_section += f"- {g}\n" + + task = f"""Fix the following action group in the skill: + +## Skill Path +{ctx.skill_path} + +## Action Group +- **ID**: {group.id} +- **Action**: {group.action} +- **Description**: {group.description} +- **Priority**: {group.priority} + +## Locations to Address ({len(group.locations)} locations) +{json.dumps(locations_data, indent=2)} +{guidance_section} +## Instructions +1. Read the relevant skill files +2. Make changes at ALL listed locations to address this action group +3. Stage your changes with `git add` +4. Return a JSON summary of what was done + +Work in the current directory (this is a git worktree). +""" + + result, cost = run_subagent( + agent_dir, "feedback-fixer", task, Path(ctx.worktree.path), model_override=model + ) + + if not result.success: + log.error(f"Fixing action group '{group.id}' failed: {result.error}") + return FixResult( + skipped=[{"id": group.id, "reason": f"Fixer failed: {result.error}"}] + ), cost + + parsed = result.parsed_output + if not parsed: + log.warning(f"Could not parse fix result for group '{group.id}'") + return FixResult( + skipped=[{"id": group.id, "reason": "Could not parse fixer output"}] + ), cost + + return FixResult( + addressed=parsed.get("addressed", []), + skipped=parsed.get("skipped", []), + files_modified=parsed.get("files_modified", []), + lines_added=parsed.get("lines_added", 0), + lines_removed=parsed.get("lines_removed", 0), + ), cost + + +def fix_batch( + agent_dir: Path, + ctx: DiscoveryContext, + analysis: AnalysisResult, + batch_num: int = 0, + batch_size: int = 3, + model: Model = Model.SONNET_4, +) -> tuple[FixResult, list[CallCost]]: + """Fix a batch of action groups. 
+ + Args: + agent_dir: Path to the agent directory + ctx: Discovery context with worktree path + analysis: Analyzed feedback with action groups + batch_num: Which batch to process (0-indexed) + batch_size: Number of groups per batch + model: Model to use for fixing + + Returns: + Tuple of (combined FixResult, list of CallCosts) + """ + batch = analysis.get_batch(batch_num, batch_size) + if not batch: + log.info(f"No action groups in batch {batch_num}") + return FixResult(), [] + + log.info(f"Processing batch {batch_num + 1}/{analysis.batch_count}: {len(batch)} action groups") + + combined_result = FixResult() + costs: list[CallCost] = [] + + for group in batch: + result, cost = fix_action_group( + agent_dir, ctx, group, analysis.guidance, model + ) + if cost: + costs.append(cost) + + # Combine results + combined_result.addressed.extend(result.addressed) + combined_result.skipped.extend(result.skipped) + combined_result.files_modified.extend(result.files_modified) + combined_result.lines_added += result.lines_added + combined_result.lines_removed += result.lines_removed + + # Deduplicate files_modified + combined_result.files_modified = list(set(combined_result.files_modified)) + + return combined_result, costs + + +def fix_all_batches( + agent_dir: Path, + ctx: DiscoveryContext, + analysis: AnalysisResult, + batch_size: int = 3, + model: Model = Model.SONNET_4, +) -> tuple[FixResult, list[CallCost]]: + """Fix all action groups in batches. + + Args: + agent_dir: Path to the agent directory + ctx: Discovery context with worktree path + analysis: Analyzed feedback with action groups + batch_size: Number of groups per batch + model: Model to use for fixing + + Returns: + Tuple of (combined FixResult from all batches, list of all CallCosts) + """ + if not analysis.action_groups: + log.info("No action groups to fix") + return FixResult(), [] + + log.info( + f"Fixing {len(analysis.action_groups)} action groups in " + f"{analysis.batch_count} batches (batch_size={batch_size})" + ) + + combined_result = FixResult() + all_costs: list[CallCost] = [] + + for batch_num in range(analysis.batch_count): + result, costs = fix_batch( + agent_dir, ctx, analysis, batch_num, batch_size, model + ) + all_costs.extend(costs) + + # Combine results + combined_result.addressed.extend(result.addressed) + combined_result.skipped.extend(result.skipped) + combined_result.files_modified.extend(result.files_modified) + combined_result.lines_added += result.lines_added + combined_result.lines_removed += result.lines_removed + + # Deduplicate files_modified + combined_result.files_modified = list(set(combined_result.files_modified)) + + log.info( + f"Completed all batches: {len(combined_result.addressed)} addressed, " + f"{len(combined_result.skipped)} skipped" + ) + + return combined_result, all_costs + + +def fix_feedback( + agent_dir: Path, + ctx: DiscoveryContext, + analysis: AnalysisResult, + model: Model = Model.SONNET_4, +) -> tuple[FixResult, CallCost | None]: + """Fix feedback items using feedback-fixer sub-agent. + + This function supports both the new action_groups format and + the legacy feedback_items format for backwards compatibility. 
+ + Args: + agent_dir: Path to the agent directory + ctx: Discovery context with worktree path + analysis: Analyzed feedback to fix + model: Model to use for fixing + + Returns: + Tuple of (FixResult with addressed and skipped items, CallCost estimate) + """ + # Use new batched processing if action_groups are available + if analysis.action_groups: + result, costs = fix_all_batches(agent_dir, ctx, analysis, model=model) + # Return first cost for backwards compatibility (or None if empty) + total_cost = costs[0] if costs else None + return result, total_cost + + # Legacy path: use feedback_items directly + log.info(f"Fixing feedback (legacy mode) with {model.value}...") + + # Filter to unresolved, fixable items (includes nitpicks) + items_to_fix = [ + item + for item in analysis.feedback_items + if not item.resolved and item.type in ("change_request", "suggestion", "nitpick") + ] + + if not items_to_fix: + log.info("No actionable items to fix") + return FixResult(), None + + # Build task for sub-agent + items_data = [ + { + "id": item.id, + "type": item.type, + "file": item.file, + "line": item.line, + "description": item.description, + "priority": item.priority, + "suggested_fix": item.suggested_fix, + } + for item in items_to_fix + ] + + task = f"""Fix the following feedback items in the skill: + +## Skill Path +{ctx.skill_path} + +## Feedback Items to Address ({len(items_to_fix)} items) +{json.dumps(items_data, indent=2)} + +## Instructions +1. Read the relevant skill files +2. Make changes to address each feedback item +3. Stage your changes with `git add` +4. Return a JSON summary of what was done + +Work in the current directory (this is a git worktree). +""" + + result, cost = run_subagent( + agent_dir, "feedback-fixer", task, Path(ctx.worktree.path), model_override=model + ) + + if not result.success: + log.error(f"Feedback fixing failed: {result.error}") + return FixResult( + skipped=[ + {"id": item.id, "reason": f"Fixer failed: {result.error}"} + for item in items_to_fix + ] + ), cost + + parsed = result.parsed_output + if not parsed: + log.warning("Could not parse fix result output") + return FixResult( + skipped=[ + {"id": item.id, "reason": "Could not parse fixer output"} + for item in items_to_fix + ] + ), cost + + return FixResult( + addressed=parsed.get("addressed", []), + skipped=parsed.get("skipped", []), + files_modified=parsed.get("files_modified", []), + lines_added=parsed.get("lines_added", 0), + lines_removed=parsed.get("lines_removed", 0), + ), cost + + +def fix_with_escalation( + agent_dir: Path, + ctx: DiscoveryContext, + analysis: AnalysisResult, +) -> tuple[FixResult, list[CallCost]]: + """Fix feedback with automatic model escalation. + + Uses Haiku for simple fixes, escalates to Sonnet for complex ones. 
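+ + A fix counts as simple when there are at most two action groups, all nitpicks or low priority (or, in the legacy format, at most two unresolved items, all nitpicks). Haiku's result is kept only when it addressed items with a success rate of at least 0.8; otherwise the run is repeated with Sonnet.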
+ + Args: + agent_dir: Path to the agent directory + ctx: Discovery context + analysis: Analyzed feedback + + Returns: + Tuple of (FixResult from the best attempt, list of CallCosts from all attempts) + """ + costs: list[CallCost] = [] + + # Determine if this is a simple fix case + is_simple = False + + if analysis.action_groups: + # New format: check if all groups are low priority nitpicks + is_simple = ( + len(analysis.action_groups) <= 2 + and all(g.type == "nitpick" or g.priority == "low" for g in analysis.action_groups) + ) + elif analysis.feedback_items: + # Legacy format: check for simple nitpicks + is_simple = ( + analysis.actionable_count <= 2 + and all( + item.type == "nitpick" + for item in analysis.feedback_items + if not item.resolved + ) + ) + + # Try Haiku first for simple fixes + if is_simple: + log.info("Using Haiku for simple fixes") + result, cost = fix_feedback(agent_dir, ctx, analysis, Model.HAIKU_35) + if cost: + costs.append(cost) + if result.addressed and result.success_rate >= 0.8: + log.info(f"Haiku fixed {len(result.addressed)} items successfully") + return result, costs + log.info("Haiku incomplete, escalating to Sonnet") + + # Use Sonnet for complex fixes + log.info("Using Sonnet for feedback fixes") + result, cost = fix_feedback(agent_dir, ctx, analysis, Model.SONNET_4) + if cost: + costs.append(cost) + return result, costs + + +def check_substantive_feedback( + agent_dir: Path, + pending_reviews: list[Review], + pending_comments: list[Comment], +) -> tuple[SubstantiveCheckResult, CallCost | None]: + """Check if pending feedback is substantive using LLM. + + Evaluates reviews and comments that don't have checkboxes + to determine if they contain actionable feedback. + + Args: + agent_dir: Path to the agent directory + pending_reviews: Reviews needing substantive check + pending_comments: Comments needing substantive check + + Returns: + Tuple of (SubstantiveCheckResult, CallCost estimate) + """ + if not pending_reviews and not pending_comments: + log.debug("No pending feedback to check") + return SubstantiveCheckResult(), None + + log.info( + f"Checking substantive feedback: {len(pending_reviews)} reviews, " + f"{len(pending_comments)} comments" + ) + + # Build input data for the sub-agent + items_to_check = [] + + for review in pending_reviews: + items_to_check.append({ + "id": review.id or f"review-{review.author}-{review.submitted_at}", + "type": "review", + "author": review.author, + "state": review.state, + "body": review.body or "", + }) + + for comment in pending_comments: + items_to_check.append({ + "id": comment.id, + "type": "comment", + "author": comment.author, + "body": comment.body, + }) + + task = f"""Evaluate whether each of the following feedback items contains substantive actionable feedback. + +## Items to Check ({len(items_to_check)} items) +{json.dumps(items_to_check, indent=2)} + +Classify each item as either "substantive" (requires code changes) or "not_substantive" (acknowledgement, question, discussion only). 
+""" + + # Run the substantive checker sub-agent + # Note: working_dir isn't really needed since we're not doing file ops + result, cost = run_subagent( + agent_dir, "substantive-checker", task, agent_dir + ) + + if not result.success: + log.warning(f"Substantive check failed: {result.error}") + # On failure, assume all pending items are substantive to be safe + return SubstantiveCheckResult( + substantive_reviews=pending_reviews, + substantive_comments=pending_comments, + ), cost + + parsed = result.parsed_output + if not parsed: + log.warning("Could not parse substantive check output, treating all as substantive") + return SubstantiveCheckResult( + substantive_reviews=pending_reviews, + substantive_comments=pending_comments, + ), cost + + # Parse results and match back to original objects + substantive_ids = { + item.get("id") for item in parsed.get("substantive", []) + } + not_substantive_ids = [ + item.get("id") for item in parsed.get("not_substantive", []) + ] + + # Filter reviews + substantive_reviews = [] + for review in pending_reviews: + review_id = review.id or f"review-{review.author}-{review.submitted_at}" + if review_id in substantive_ids: + substantive_reviews.append(review) + + # Filter comments + substantive_comments = [] + for comment in pending_comments: + if comment.id in substantive_ids: + substantive_comments.append(comment) + + log.info( + f"Substantive check: {len(substantive_reviews)} reviews, " + f"{len(substantive_comments)} comments are actionable" + ) + + return SubstantiveCheckResult( + substantive_reviews=substantive_reviews, + substantive_comments=substantive_comments, + not_substantive_ids=not_substantive_ids, + ), cost diff --git a/.claude/agents/skill-pr-addresser/src/filter.py b/.claude/agents/skill-pr-addresser/src/filter.py new file mode 100644 index 0000000..4bbaf5b --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/filter.py @@ -0,0 +1,247 @@ +# src/filter.py +"""Filter stage for feedback delta detection. + +Stage 9 implementation: Only process new/changed feedback items. 
+""" + +from dataclasses import dataclass, field +from typing import TYPE_CHECKING + +from .hashing import hashes_match +from .session_schema import FeedbackState, ThreadState + +if TYPE_CHECKING: + from .models import ( + ReviewFeedback, + CommentFeedback, + ThreadFeedback, + ThreadComment, + RawFeedback, + ) + + +@dataclass +class FilteredThread: + """A thread with only new/unprocessed comments.""" + + thread: "ThreadFeedback" + new_comments: list["ThreadComment"] + has_author_response: bool = False + + def to_consolidation_dict(self) -> dict: + """Format for LLM consolidation.""" + return { + "id": self.thread.id, + "path": self.thread.path, + "line": self.thread.line, + "original_feedback": self.thread.content, + "new_comments": [ + { + "id": c.id, + "body": c.body, + "author": c.author, + "is_resolution_signal": c.is_resolution_signal, + } + for c in self.new_comments + ], + "has_author_response": self.has_author_response, + "linked_to_review": self.thread.linked_to_review, + } + + +@dataclass +class FilteredFeedback: + """Result of filter stage with only new/changed items.""" + + reviews: list["ReviewFeedback"] = field(default_factory=list) + comments: list["CommentFeedback"] = field(default_factory=list) + threads: list[FilteredThread] = field(default_factory=list) + + # Tracking what was skipped + skipped_unchanged: list[str] = field(default_factory=list) + skipped_resolved: list[str] = field(default_factory=list) + skipped_outdated: list[str] = field(default_factory=list) + + @property + def is_empty(self) -> bool: + """Check if no new feedback to process.""" + return not self.reviews and not self.comments and not self.threads + + @property + def item_count(self) -> int: + """Total count of items to process.""" + return len(self.reviews) + len(self.comments) + len(self.threads) + + def summary(self) -> dict: + """Summary for logging.""" + return { + "new_reviews": len(self.reviews), + "new_comments": len(self.comments), + "new_threads": len(self.threads), + "skipped_unchanged": len(self.skipped_unchanged), + "skipped_resolved": len(self.skipped_resolved), + "skipped_outdated": len(self.skipped_outdated), + } + + def to_consolidation_input(self) -> dict: + """Format all feedback for LLM consolidation.""" + return { + "reviews": [ + { + "id": r.id, + "state": r.state, + "body": r.body, + "author": r.author, + "references_lines": r.references_lines, + "references_files": r.references_files, + } + for r in self.reviews + ], + "comments": [ + { + "id": c.id, + "body": c.body, + "author": c.author, + } + for c in self.comments + ], + "threads": [t.to_consolidation_dict() for t in self.threads], + } + + +def filter_feedback( + raw: "RawFeedback", + session, + pr_author: str, +) -> FilteredFeedback: + """Filter feedback to only new/changed items. + + Implements "is-new" decision tree: + 1. Skip already-addressed items (unless content changed) + 2. Skip processed thread comments (unless new replies exist) + 3. 
Identify author responses that may resolve feedback + + Args: + raw: Raw feedback from discovery stage + session: Session with feedback_state + pr_author: GitHub username of PR author + + Returns: + FilteredFeedback with only new/changed items + """ + feedback_state = FeedbackState.from_session(session) + result = FilteredFeedback() + + # Filter reviews + for review in raw.reviews: + if _is_new_or_changed(review, feedback_state): + result.reviews.append(review) + else: + result.skipped_unchanged.append(review.id) + + # Filter comments + for comment in raw.comments: + if _is_new_or_changed(comment, feedback_state): + result.comments.append(comment) + else: + result.skipped_unchanged.append(comment.id) + + # Filter threads (more complex) + for thread in raw.threads: + if thread.is_resolved: + result.skipped_resolved.append(thread.id) + continue + + if thread.is_outdated: + result.skipped_outdated.append(thread.id) + continue + + thread_state = feedback_state.threads.get(thread.id) + new_comments = _get_new_thread_comments(thread, thread_state, pr_author) + + if new_comments: + # Check for author response or reviewer withdrawal + has_author_response = any( + c.author == pr_author and c.is_resolution_signal + for c in new_comments + ) + + # Skip if reviewer withdrew + if thread.has_reviewer_withdrawal(): + result.skipped_resolved.append(thread.id) + continue + + result.threads.append( + FilteredThread( + thread=thread, + new_comments=new_comments, + has_author_response=has_author_response, + ) + ) + else: + result.skipped_unchanged.append(thread.id) + + # Detect cross-references after filtering + from .cross_reference import link_reviews_to_threads, mark_linked_threads + + links = link_reviews_to_threads(result.reviews, [ft.thread for ft in result.threads]) + + if links: + mark_linked_threads(result, links) + + return result + + +def _is_new_or_changed( + item: "ReviewFeedback | CommentFeedback", + state: FeedbackState, +) -> bool: + """Check if item is new or content has changed. + + Decision tree: + 1. Item ID not in addressed → NEW → include + 2. Item ID in addressed AND hash matches → UNCHANGED → skip + 3. Item ID in addressed AND hash differs → UPDATED → include + + This implements issue #796 (detect updated comments). + + Args: + item: Feedback item to check + state: Current feedback state from session + + Returns: + True if item should be processed + """ + addressed = state.addressed.get(item.id) + if not addressed: + return True # Never seen before + + # Content hash comparison (addresses #796) + return not hashes_match(item.content_hash, addressed.content_hash) + + +def _get_new_thread_comments( + thread: "ThreadFeedback", + thread_state: "ThreadState | None", + pr_author: str, +) -> list["ThreadComment"]: + """Get only new comments from a thread. + + Args: + thread: The thread to check + thread_state: Previous state from session, if any + pr_author: GitHub username of PR author + + Returns: + List of unprocessed comments + """ + if not thread_state: + # First time seeing this thread - all comments are new + return thread.comments + + new_comments = [] + for comment in thread.comments: + if comment.id not in thread_state.comments_processed: + new_comments.append(comment) + + return new_comments diff --git a/.claude/agents/skill-pr-addresser/src/fix.py b/.claude/agents/skill-pr-addresser/src/fix.py new file mode 100644 index 0000000..d3de048 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/fix.py @@ -0,0 +1,281 @@ +# src/fix.py +"""Fix execution for action groups. 
+ +Stage 7.5 interface module that wraps feedback.py's fix functions +with the interface expected by stages 8-13. +""" + +from dataclasses import dataclass, field +from pathlib import Path +from typing import TYPE_CHECKING + +from .feedback import ( + ActionGroup, + FixResult, + fix_action_group as _fix_action_group, + fix_batch as _fix_batch, + fix_all_batches as _fix_all_batches, + fix_with_escalation as _fix_with_escalation, +) +from .costs import CallCost + +if TYPE_CHECKING: + from .pipeline import PipelineContext + from .planner import PlanStep + from .models import AddressedLocation, TokenUsage + + +@dataclass +class AddressedLocation: + """A location that was addressed during fixing.""" + + file: str + line: int | None = None + thread_id: str | None = None + description: str = "" + + +@dataclass +class FixStepResult: + """Result from fixing a single plan step. + + Extended version of FixResult with additional tracking. + """ + + group_id: str + addressed_locations: list[AddressedLocation] = field(default_factory=list) + changes_made: list[str] = field(default_factory=list) + token_usage: "TokenUsage | None" = None + error: str | None = None + + @property + def has_changes(self) -> bool: + return len(self.changes_made) > 0 + + @property + def failed(self) -> bool: + return self.error is not None + + @property + def addressed_thread_ids(self) -> list[str]: + """Extract thread IDs from addressed locations.""" + return [loc.thread_id for loc in self.addressed_locations if loc.thread_id] + + @classmethod + def from_fix_result( + cls, + group_id: str, + result: FixResult, + cost: CallCost | None = None, + ) -> "FixStepResult": + """Convert a FixResult to FixStepResult. + + Args: + group_id: ID of the action group + result: FixResult from feedback.py + cost: Optional call cost + + Returns: + FixStepResult with converted data + """ + # Convert addressed items to AddressedLocation + locations = [] + for item in result.addressed: + locations.append( + AddressedLocation( + file=item.get("file", ""), + line=item.get("line"), + thread_id=item.get("thread_id"), + description=item.get("description", ""), + ) + ) + + # Build changes list + changes = [] + for item in result.addressed: + if desc := item.get("description"): + changes.append(desc) + elif item_id := item.get("id"): + changes.append(f"Addressed {item_id}") + + # Add file modifications to changes + for f in result.files_modified: + changes.append(f"Modified {f}") + + # Convert cost to token usage + token_usage = None + if cost: + try: + from .models import TokenUsage + token_usage = TokenUsage( + input_tokens=0, + output_tokens=0, + total_cost=cost.total_cost, + ) + except ImportError: + pass + + # Check for errors (skipped items indicate partial failure) + error = None + if result.skipped and not result.addressed: + reasons = [s.get("reason", "Unknown") for s in result.skipped[:3]] + error = "; ".join(reasons) + + return cls( + group_id=group_id, + addressed_locations=locations, + changes_made=changes, + token_usage=token_usage, + error=error, + ) + + +def fix_action_group( + ctx: "PipelineContext", + step: "PlanStep", +) -> FixStepResult: + """Execute fixes for an action group. 
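+ + Thin wrapper for the pipeline stages: looks up the action group on the context, rebuilds a DiscoveryContext for the worktree, delegates to feedback.fix_action_group, and converts the result into a FixStepResult.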
+ + Args: + ctx: Pipeline context + step: Plan step with group_id and metadata + + Returns: + FixStepResult with addressed locations + """ + # Find the action group from context + group = _find_group(ctx, step.group_id) + if not group: + return FixStepResult( + group_id=step.group_id, + error=f"Action group {step.group_id} not found", + ) + + # Get agent directory from context + agent_dir = ctx.agent_dir + + # Build discovery context for fix function + from .discovery import DiscoveryContext + + discovery_ctx = DiscoveryContext( + pr=ctx.pr_info, + pr_number=ctx.pr_info.pr_number, + skill_path=ctx.skill_path, + worktree=ctx.worktree, + blocking_reviews=[], + actionable_reviews=[], + actionable_comments=[], + unresolved_threads=[], + ) + + # Get guidance from consolidation result if available + guidance = ctx.guidance if hasattr(ctx, 'guidance') else [] + + # Call the existing fix function + result, cost = _fix_action_group( + agent_dir, + discovery_ctx, + group, + guidance, + ) + + return FixStepResult.from_fix_result(step.group_id, result, cost) + + +def run_fixer_for_locations( + ctx: "PipelineContext", + group: ActionGroup, + pending_locations: list, +) -> FixStepResult: + """Run fixer sub-agent for pending locations. + + Args: + ctx: Pipeline context + group: Action group to fix + pending_locations: Only locations not yet addressed + + Returns: + FixStepResult with changes made + + Implementation: + Invokes subagent/fixer with: + - Skill file content + - Action group description + - Specific locations to fix + """ + from .feedback import Location + + # Create a modified group with only pending locations + modified_group = ActionGroup( + id=group.id, + action=group.action, + description=group.description, + locations=[ + Location( + file=loc.file if hasattr(loc, 'file') else loc.get("file", ""), + line=loc.line if hasattr(loc, 'line') else loc.get("line"), + thread_id=loc.thread_id if hasattr(loc, 'thread_id') else loc.get("thread_id"), + ) + for loc in pending_locations + ], + priority=group.priority, + type=group.type, + ) + + # Build discovery context + from .discovery import DiscoveryContext + + discovery_ctx = DiscoveryContext( + pr=ctx.pr_info, + pr_number=ctx.pr_info.pr_number, + skill_path=ctx.skill_path, + worktree=ctx.worktree, + blocking_reviews=[], + actionable_reviews=[], + actionable_comments=[], + unresolved_threads=[], + ) + + # Get guidance + guidance = ctx.guidance if hasattr(ctx, 'guidance') else [] + + # Call the existing fix function + result, cost = _fix_action_group( + ctx.agent_dir, + discovery_ctx, + modified_group, + guidance, + ) + + return FixStepResult.from_fix_result(group.id, result, cost) + + +def _find_group(ctx: "PipelineContext", group_id: str) -> ActionGroup | None: + """Find an action group by ID in the context. 
+ + Args: + ctx: Pipeline context + group_id: ID to find + + Returns: + ActionGroup if found, None otherwise + """ + # Check consolidated result if available + if hasattr(ctx, 'consolidation') and ctx.consolidation: + for group in ctx.consolidation.action_groups: + if group.id == group_id: + return group + + # Check analysis result if available + if hasattr(ctx, 'analysis') and ctx.analysis: + for group in ctx.analysis.action_groups: + if group.id == group_id: + return group + + return None + + +# Re-export underlying functions for direct use +fix_batch = _fix_batch +fix_all_batches = _fix_all_batches +fix_with_escalation = _fix_with_escalation diff --git a/.claude/agents/skill-pr-addresser/src/github_pr.py b/.claude/agents/skill-pr-addresser/src/github_pr.py new file mode 100644 index 0000000..12a6db6 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/github_pr.py @@ -0,0 +1,978 @@ +"""PR-specific GitHub operations for skill-pr-addresser. + +Extends skill-agents-common with PR review and comment handling. +""" + +import json +import re +import subprocess +import sys +from dataclasses import dataclass, field +from pathlib import Path +from typing import Callable + +# Add parent directory to path for shared library import +_agents_dir = Path(__file__).parent.parent.parent +if str(_agents_dir) not in sys.path: + sys.path.insert(0, str(_agents_dir)) + +from skill_agents_common.github_ops import PullRequest + + +def has_checkboxes(text: str | None) -> bool: + """Check if text contains markdown checkboxes. + + Detects patterns like: + - [ ] unchecked + - [x] checked + * [ ] with asterisk + + Args: + text: Text to check + + Returns: + True if checkboxes found + """ + if not text: + return False + # Match markdown checkbox patterns + pattern = r"^[\s]*[-*]\s*\[[xX ]\]" + return bool(re.search(pattern, text, re.MULTILINE)) + + +def has_unchecked_boxes(text: str | None) -> bool: + """Check if text contains unchecked markdown checkboxes. + + Args: + text: Text to check + + Returns: + True if unchecked checkboxes found + """ + if not text: + return False + pattern = r"^[\s]*[-*]\s*\[ \]" + return bool(re.search(pattern, text, re.MULTILINE)) + + +@dataclass +class Review: + """A PR review.""" + + author: str + state: str # APPROVED, CHANGES_REQUESTED, COMMENTED, DISMISSED, PENDING + body: str | None = None + submitted_at: str | None = None + id: str | None = None + + @property + def has_checkboxes(self) -> bool: + """Whether this review has checkbox items.""" + return has_checkboxes(self.body) + + @property + def has_unchecked_boxes(self) -> bool: + """Whether this review has unchecked checkbox items.""" + return has_unchecked_boxes(self.body) + + @property + def is_actionable(self) -> bool: + """Whether this review requires action. + + Returns True if: + - State is CHANGES_REQUESTED + - State is COMMENTED and has unchecked checkboxes + """ + if self.state == "CHANGES_REQUESTED": + return True + if self.state == "COMMENTED" and self.has_unchecked_boxes: + return True + return False + + @property + def needs_substantive_check(self) -> bool: + """Whether this review needs LLM check to determine if actionable. + + Returns True if COMMENTED without checkboxes but has body text. 
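+ + CHANGES_REQUESTED reviews never reach this check (they are already actionable), and reviews with checkboxes are classified deterministically.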
+ """ + if self.state != "COMMENTED": + return False + if self.has_checkboxes: + return False # Already deterministically actionable + return bool(self.body and self.body.strip()) + + +@dataclass +class Comment: + """A PR comment (not a review comment).""" + + id: str + author: str + body: str + created_at: str + url: str | None = None + + @property + def has_checkboxes(self) -> bool: + """Whether this comment has checkbox items.""" + return has_checkboxes(self.body) + + @property + def has_unchecked_boxes(self) -> bool: + """Whether this comment has unchecked checkbox items.""" + return has_unchecked_boxes(self.body) + + @property + def is_actionable(self) -> bool: + """Whether this comment has unchecked checkboxes (deterministically actionable).""" + return self.has_unchecked_boxes + + @property + def needs_substantive_check(self) -> bool: + """Whether this comment needs LLM check to determine if actionable.""" + if self.has_checkboxes: + return False # Already deterministically actionable or not + return bool(self.body and self.body.strip()) + + +@dataclass +class ReviewThread: + """A review thread with comments on specific code.""" + + id: str + path: str + line: int | None + is_resolved: bool + is_outdated: bool + comments: list[dict] = field(default_factory=list) + + @property + def first_comment(self) -> dict | None: + return self.comments[0] if self.comments else None + + @property + def author(self) -> str | None: + if self.first_comment: + return self.first_comment.get("author", {}).get("login") + return None + + +@dataclass +class PRDetails(PullRequest): + """Extended PR details for addresser.""" + + body: str | None = None + is_draft: bool = False + mergeable: str | None = None # MERGEABLE, CONFLICTING, UNKNOWN + review_decision: str | None = None # APPROVED, CHANGES_REQUESTED, REVIEW_REQUIRED + base_branch: str = "main" + head_sha: str | None = None + changed_files: list[str] = field(default_factory=list) + + +def get_pr_details(owner: str, repo: str, pr_number: int) -> PRDetails | None: + """Get comprehensive PR details. + + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + + Returns: + PRDetails if found, None otherwise + """ + result = subprocess.run( + [ + "gh", + "pr", + "view", + str(pr_number), + "--repo", + f"{owner}/{repo}", + "--json", + "number,title,url,state,headRefName,body,isDraft,mergeable," + "reviewDecision,baseRefName,headRefOid,files", + ], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + return None + + data = json.loads(result.stdout) + + return PRDetails( + number=data["number"], + title=data["title"], + url=data["url"], + state=data["state"], + branch=data["headRefName"], + body=data.get("body"), + is_draft=data.get("isDraft", False), + mergeable=data.get("mergeable"), + review_decision=data.get("reviewDecision"), + base_branch=data.get("baseRefName", "main"), + head_sha=data.get("headRefOid"), + changed_files=[f["path"] for f in data.get("files", [])], + ) + + +def get_pr_reviews(owner: str, repo: str, pr_number: int) -> list[Review]: + """Get all reviews on a PR. 
+ + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + + Returns: + List of Review objects + """ + result = subprocess.run( + [ + "gh", + "pr", + "view", + str(pr_number), + "--repo", + f"{owner}/{repo}", + "--json", + "reviews", + ], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + return [] + + data = json.loads(result.stdout) + reviews = [] + + for r in data.get("reviews", []): + author = r.get("author", {}) + reviews.append( + Review( + author=author.get("login", "unknown"), + state=r.get("state", "COMMENTED"), + body=r.get("body"), + submitted_at=r.get("submittedAt"), + id=r.get("id"), + ) + ) + + return reviews + + +def get_pr_comments(owner: str, repo: str, pr_number: int) -> list[Comment]: + """Get all comments on a PR (not review comments). + + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + + Returns: + List of Comment objects + """ + result = subprocess.run( + [ + "gh", + "pr", + "view", + str(pr_number), + "--repo", + f"{owner}/{repo}", + "--json", + "comments", + ], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + return [] + + data = json.loads(result.stdout) + comments = [] + + for c in data.get("comments", []): + author = c.get("author", {}) + # Use url as id if no explicit id available + comment_id = c.get("id") or c.get("url") or c.get("createdAt", "") + comments.append( + Comment( + id=str(comment_id), + author=author.get("login", "unknown"), + body=c.get("body", ""), + created_at=c.get("createdAt", ""), + url=c.get("url"), + ) + ) + + return comments + + +def get_review_threads(owner: str, repo: str, pr_number: int) -> list[ReviewThread]: + """Get review threads with resolution status. + + Uses GraphQL to get detailed thread information. + + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + + Returns: + List of ReviewThread objects + """ + query = """ + query($owner: String!, $repo: String!, $pr: Int!) { + repository(owner: $owner, name: $repo) { + pullRequest(number: $pr) { + reviewThreads(first: 100) { + nodes { + id + path + line + isResolved + isOutdated + comments(first: 10) { + nodes { + author { login } + body + createdAt + } + } + } + } + } + } + } + """ + + result = subprocess.run( + [ + "gh", + "api", + "graphql", + "-f", + f"query={query}", + "-f", + f"owner={owner}", + "-f", + f"repo={repo}", + "-F", + f"pr={pr_number}", + ], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + return [] + + data = json.loads(result.stdout) + threads = [] + + try: + thread_nodes = ( + data.get("data", {}) + .get("repository", {}) + .get("pullRequest", {}) + .get("reviewThreads", {}) + .get("nodes", []) + ) + + for t in thread_nodes: + comments = [c for c in t.get("comments", {}).get("nodes", [])] + threads.append( + ReviewThread( + id=t.get("id", ""), + path=t.get("path", ""), + line=t.get("line"), + is_resolved=t.get("isResolved", False), + is_outdated=t.get("isOutdated", False), + comments=comments, + ) + ) + except (KeyError, TypeError): + pass + + return threads + + +@dataclass +class PendingFeedback: + """Structured result from get_pending_feedback. 
+ + Categorizes feedback by how it should be handled: + - blocking_reviews: CHANGES_REQUESTED reviews (always trigger) + - actionable_reviews: COMMENTED reviews with unchecked checkboxes (always trigger) + - pending_reviews: COMMENTED reviews needing substantive LLM check + - actionable_comments: Comments with unchecked checkboxes (always trigger) + - pending_comments: Comments needing substantive LLM check + - unresolved_threads: Code review threads not yet resolved + """ + + blocking_reviews: list[Review] = field(default_factory=list) + actionable_reviews: list[Review] = field(default_factory=list) + pending_reviews: list[Review] = field(default_factory=list) + actionable_comments: list[Comment] = field(default_factory=list) + pending_comments: list[Comment] = field(default_factory=list) + unresolved_threads: list[ReviewThread] = field(default_factory=list) + + @property + def has_deterministic_feedback(self) -> bool: + """Whether there's feedback that deterministically requires action.""" + return ( + len(self.blocking_reviews) > 0 + or len(self.actionable_reviews) > 0 + or len(self.actionable_comments) > 0 + or len(self.unresolved_threads) > 0 + ) + + @property + def needs_substantive_check(self) -> bool: + """Whether there's feedback needing LLM substantive check.""" + return len(self.pending_reviews) > 0 or len(self.pending_comments) > 0 + + @property + def deterministic_count(self) -> int: + """Count of deterministically actionable items.""" + return ( + len(self.blocking_reviews) + + len(self.actionable_reviews) + + len(self.actionable_comments) + + len(self.unresolved_threads) + ) + + +def get_pending_feedback( + owner: str, repo: str, pr_number: int +) -> PendingFeedback: + """Get all pending feedback that needs addressing. + + Categorizes feedback: + - CHANGES_REQUESTED reviews → blocking_reviews (always trigger) + - COMMENTED reviews with checkboxes → actionable_reviews (always trigger) + - COMMENTED reviews with body → pending_reviews (need LLM check) + - Comments with checkboxes → actionable_comments (always trigger) + - Comments with body → pending_comments (need LLM check) + - Unresolved review threads → always trigger + + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + + Returns: + PendingFeedback with categorized feedback + """ + reviews = get_pr_reviews(owner, repo, pr_number) + comments = get_pr_comments(owner, repo, pr_number) + threads = get_review_threads(owner, repo, pr_number) + + result = PendingFeedback() + + # Categorize reviews + for r in reviews: + if r.state == "CHANGES_REQUESTED": + result.blocking_reviews.append(r) + elif r.state == "COMMENTED": + if r.has_unchecked_boxes: + result.actionable_reviews.append(r) + elif r.needs_substantive_check: + result.pending_reviews.append(r) + # Reviews with all checked boxes or empty body are skipped + + # Categorize comments + for c in comments: + if c.has_unchecked_boxes: + result.actionable_comments.append(c) + elif c.needs_substantive_check: + result.pending_comments.append(c) + + # Filter threads to unresolved, non-outdated + result.unresolved_threads = [ + t for t in threads if not t.is_resolved and not t.is_outdated + ] + + return result + + +def infer_skill_from_files(changed_files: list[str]) -> str | None: + """Infer skill path from changed files. 
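+ + Scans the changed files for a components/skills/<name>/ prefix and returns the first match, e.g. a file under components/skills/lang-rust-dev/ yields components/skills/lang-rust-dev.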
+ + Args: + changed_files: List of file paths changed in PR + + Returns: + Skill path (e.g., "components/skills/lang-rust-dev") or None + """ + for f in changed_files: + if f.startswith("components/skills/"): + parts = f.split("/") + if len(parts) >= 3: + return "/".join(parts[:3]) + return None + + +def add_pr_comment(owner: str, repo: str, pr_number: int, body: str) -> str | None: + """Add a comment to a PR. + + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + body: Comment body (markdown) + + Returns: + Comment URL if successful, None otherwise + """ + result = subprocess.run( + [ + "gh", + "pr", + "comment", + str(pr_number), + "--repo", + f"{owner}/{repo}", + "--body", + body, + ], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + return None + + # gh pr comment outputs the comment URL + return result.stdout.strip() if result.stdout else None + + +def request_rereview( + owner: str, repo: str, pr_number: int, reviewers: list[str] +) -> bool: + """Request re-review from specified reviewers. + + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + reviewers: List of reviewer usernames + + Returns: + True if successful + """ + if not reviewers: + return False + + # Use gh pr edit to add reviewers + result = subprocess.run( + [ + "gh", + "pr", + "edit", + str(pr_number), + "--repo", + f"{owner}/{repo}", + "--add-reviewer", + ",".join(reviewers), + ], + capture_output=True, + text=True, + ) + + return result.returncode == 0 + + +def find_prs_with_feedback( + owner: str, + repo: str, + labels: list[str] | None = None, + state: str = "open", + limit: int = 50, +) -> list[dict]: + """Find PRs with pending review feedback. + + Args: + owner: Repository owner + repo: Repository name + labels: Optional list of labels to filter by + state: PR state filter (open, closed, all) + limit: Maximum number of PRs to return + + Returns: + List of dicts with pr_number, title, feedback_count, reviewers + """ + # Build search query + args = [ + "gh", + "pr", + "list", + "--repo", + f"{owner}/{repo}", + "--state", + state, + "--json", + "number,title,reviewDecision,reviewRequests,reviews", + "--limit", + str(limit), + ] + + # Add label filters + if labels: + for label in labels: + args.extend(["--label", label]) + + result = subprocess.run(args, capture_output=True, text=True) + + if result.returncode != 0: + return [] + + prs = json.loads(result.stdout) + results = [] + + for pr in prs: + pr_number = pr.get("number") + review_decision = pr.get("reviewDecision") + + # Check if PR needs attention + needs_work = review_decision == "CHANGES_REQUESTED" + + # Also check for blocking reviews + blocking_reviewers = [] + for review in pr.get("reviews", []): + if review.get("state") == "CHANGES_REQUESTED": + author = review.get("author", {}).get("login") + if author and author not in blocking_reviewers: + blocking_reviewers.append(author) + + if needs_work or blocking_reviewers: + results.append( + { + "pr_number": pr_number, + "title": pr.get("title", ""), + "review_decision": review_decision, + "blocking_reviewers": blocking_reviewers, + } + ) + + return results + + +# ============================================================================= +# Thread Resolution (Stage 10) +# ============================================================================= + + +class RateLimitError(Exception): + """Raised when GitHub rate limit is hit.""" + + def __init__(self, retry_after: int, message: str = ""): + 
self.retry_after = retry_after + self.message = message + super().__init__(f"Rate limited. Retry after {retry_after}s: {message}") + + +def parse_rate_limit_error(stderr: str) -> int | None: + """Parse rate limit retry-after from GitHub error. + + Args: + stderr: Error output from gh command + + Returns: + Seconds to wait, or None if not a rate limit error + """ + # GitHub rate limit patterns + patterns = [ + r"rate limit.*?(\d+)\s*seconds?", + r"retry.after:\s*(\d+)", + r"wait\s+(\d+)\s*seconds?", + ] + + for pattern in patterns: + match = re.search(pattern, stderr, re.IGNORECASE) + if match: + return int(match.group(1)) + + # Check for generic rate limit message + if "rate limit" in stderr.lower(): + return 60 # Default wait time + + return None + + +RESOLVE_THREAD_MUTATION = """ +mutation($threadId: ID!) { + resolveReviewThread(input: {threadId: $threadId}) { + thread { + id + isResolved + } + } +} +""" + +UNRESOLVE_THREAD_MUTATION = """ +mutation($threadId: ID!) { + unresolveReviewThread(input: {threadId: $threadId}) { + thread { + id + isResolved + } + } +} +""" + + +def resolve_thread(owner: str, repo: str, thread_id: str) -> bool: + """Resolve a review thread via GitHub GraphQL API. + + Args: + owner: Repository owner + repo: Repository name + thread_id: Thread node ID (e.g., "PRRT_...") + + Returns: + True if resolved successfully + """ + import logging + + log = logging.getLogger(__name__) + + result = subprocess.run( + [ + "gh", + "api", + "graphql", + "-f", + f"query={RESOLVE_THREAD_MUTATION}", + "-f", + f"threadId={thread_id}", + ], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + log.warning(f"Failed to resolve thread {thread_id}: {result.stderr}") + return False + + try: + data = json.loads(result.stdout) + is_resolved = data["data"]["resolveReviewThread"]["thread"]["isResolved"] + return is_resolved + except (json.JSONDecodeError, KeyError) as e: + log.warning(f"Failed to parse resolution response: {e}") + return False + + +def unresolve_thread(owner: str, repo: str, thread_id: str) -> bool: + """Unresolve a review thread (for testing/rollback). + + Args: + owner: Repository owner + repo: Repository name + thread_id: Thread node ID + + Returns: + True if unresolved successfully + """ + result = subprocess.run( + [ + "gh", + "api", + "graphql", + "-f", + f"query={UNRESOLVE_THREAD_MUTATION}", + "-f", + f"threadId={thread_id}", + ], + capture_output=True, + text=True, + ) + + return result.returncode == 0 + + +def resolve_addressed_threads( + owner: str, + repo: str, + addressed_thread_ids: list[str], + delay: float = 0.5, +) -> dict[str, bool]: + """Resolve multiple threads, returning success status for each. + + Args: + owner: Repository owner + repo: Repository name + addressed_thread_ids: List of thread IDs to resolve + delay: Delay between resolutions to avoid rate limiting + + Returns: + Dict mapping thread_id to success status + """ + import time + + results = {} + for thread_id in addressed_thread_ids: + results[thread_id] = resolve_thread(owner, repo, thread_id) + # Small delay to avoid rate limiting + if delay > 0: + time.sleep(delay) + + return results + + +def resolve_thread_with_retry( + owner: str, + repo: str, + thread_id: str, + max_retries: int = 3, + on_rate_limit: Callable | None = None, +) -> bool: + """Resolve a thread with rate limit retry. 
+ + Args: + owner: Repository owner + repo: Repository name + thread_id: Thread ID to resolve + max_retries: Maximum retry attempts + on_rate_limit: Optional callback when rate limited (receives retry_after seconds) + + Returns: + True if resolved successfully + """ + import logging + import time + + log = logging.getLogger(__name__) + + for attempt in range(max_retries): + result = subprocess.run( + [ + "gh", + "api", + "graphql", + "-f", + f"query={RESOLVE_THREAD_MUTATION}", + "-f", + f"threadId={thread_id}", + ], + capture_output=True, + text=True, + ) + + if result.returncode == 0: + try: + data = json.loads(result.stdout) + return data["data"]["resolveReviewThread"]["thread"]["isResolved"] + except (json.JSONDecodeError, KeyError): + return False + + # Check for rate limit + retry_after = parse_rate_limit_error(result.stderr) + if retry_after: + # Trigger callback if provided + if on_rate_limit: + on_rate_limit(retry_after) + + if attempt < max_retries - 1: + log.warning( + f"Rate limited, waiting {retry_after}s (attempt {attempt + 1})" + ) + time.sleep(retry_after) + continue + + log.error(f"Failed to resolve thread {thread_id}: {result.stderr}") + return False + + return False + + +def update_project_status( + owner: str, + repo: str, + pr_number: int, + project_number: int, + status: str, +) -> bool: + """Update PR status in a GitHub Project. + + Args: + owner: Repository owner + repo: Repository name + pr_number: Pull request number + project_number: Project number + status: New status value + + Returns: + True if successful + """ + # First, get the project item ID for this PR + query = """ + query($owner: String!, $repo: String!, $pr: Int!) { + repository(owner: $owner, name: $repo) { + pullRequest(number: $pr) { + projectItems(first: 10) { + nodes { + id + project { + number + } + } + } + } + } + } + """ + + result = subprocess.run( + [ + "gh", + "api", + "graphql", + "-f", + f"query={query}", + "-f", + f"owner={owner}", + "-f", + f"repo={repo}", + "-F", + f"pr={pr_number}", + ], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + return False + + data = json.loads(result.stdout) + items = ( + data.get("data", {}) + .get("repository", {}) + .get("pullRequest", {}) + .get("projectItems", {}) + .get("nodes", []) + ) + + # Find the item for our project + item_id = None + for item in items: + if item.get("project", {}).get("number") == project_number: + item_id = item.get("id") + break + + if not item_id: + return False + + # Now update the status field + # Note: This requires knowing the field ID and option ID + # For now, return True as a placeholder - full implementation would need + # to query project fields first + return True diff --git a/.claude/agents/skill-pr-addresser/src/hashing.py b/.claude/agents/skill-pr-addresser/src/hashing.py new file mode 100644 index 0000000..2f22892 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/hashing.py @@ -0,0 +1,74 @@ +# src/hashing.py +"""Content hashing utilities for delta detection. + +Stage 8 implementation for #796: detect updated comments after addressing. +""" + +import hashlib + + +def hash_content(content: str | None) -> str: + """Generate SHA-256 hash of content. 
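+ + Whitespace is normalized before hashing and the digest is truncated to 16 hex characters, so two strings that differ only in whitespace produce the same hash, and the value is a short fingerprint rather than a full digest.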
+ + Args: + content: Text content to hash + + Returns: + Hash string prefixed with "sha256:" + """ + if not content: + return "sha256:empty" + + # Normalize whitespace for consistent hashing + normalized = " ".join(content.split()) + digest = hashlib.sha256(normalized.encode()).hexdigest() + return f"sha256:{digest[:16]}" # Truncate for readability + + +def hashes_match(hash1: str, hash2: str) -> bool: + """Compare two content hashes. + + Args: + hash1: First hash + hash2: Second hash + + Returns: + True if hashes match + """ + return hash1 == hash2 + + +def hash_file(file_path: str) -> str: + """Generate SHA-256 hash of a file's content. + + Args: + file_path: Path to file + + Returns: + Hash string prefixed with "sha256:" + """ + try: + with open(file_path, "r", encoding="utf-8") as f: + content = f.read() + return hash_content(content) + except (FileNotFoundError, IOError): + return "sha256:error" + + +def hash_lines(content: str, start_line: int, end_line: int) -> str: + """Generate hash of specific lines from content. + + Args: + content: Full text content + start_line: Starting line (1-indexed) + end_line: Ending line (1-indexed, inclusive) + + Returns: + Hash of the specified lines + """ + lines = content.splitlines() + if start_line < 1 or end_line > len(lines): + return "sha256:out_of_range" + + selected = "\n".join(lines[start_line - 1 : end_line]) + return hash_content(selected) diff --git a/.claude/agents/skill-pr-addresser/src/hooks.py b/.claude/agents/skill-pr-addresser/src/hooks.py new file mode 100644 index 0000000..52b5b13 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/hooks.py @@ -0,0 +1,453 @@ +"""Cement hooks framework for skill-pr-addresser pipeline. + +This module defines hook points for each pipeline stage, allowing extensions +and plugins to customize behavior at various points in the addressing workflow. 
+ +Hook Points +----------- +Each pipeline stage has pre_ and post_ hooks: + +- pre_discovery / post_discovery +- pre_filter / post_filter +- pre_consolidate / post_consolidate +- pre_plan / post_plan +- pre_fix / post_fix +- pre_commit / post_commit +- pre_notify / post_notify + +Hook Function Signature +----------------------- +All hook functions receive the application instance and a context dict: + + def my_hook(app, context): + # Modify context in place or perform side effects + # Return value is ignored for pre_ hooks + # Return value is yielded for post_ hooks + return context + +Registering Hooks +----------------- +Via Meta: + class MyApp(App): + class Meta: + hooks = [ + ('pre_discovery', my_discovery_hook), + ('post_fix', my_fix_hook, 10), # weight=10 (higher runs first) + ] + +Via hook manager: + app.hook.register('pre_discovery', my_hook) + +Running Hooks +------------- +Hooks are run by the pipeline executor: + + for result in app.hook.run('pre_discovery', context): + pass # Each hook's return value is yielded + +References +---------- +- https://docs.builtoncement.com/core-foundation/hooks +""" + +from dataclasses import dataclass, field +from datetime import datetime, timezone +from typing import Any, Callable + +# Hook names for each pipeline stage +PIPELINE_HOOKS = [ + # Discovery stage + "pre_discovery", + "post_discovery", + # Filter stage + "pre_filter", + "post_filter", + # Consolidation stage + "pre_consolidate", + "post_consolidate", + # Planning stage + "pre_plan", + "post_plan", + # Fix stage + "pre_fix", + "post_fix", + "pre_fix_group", # Before each action group + "post_fix_group", # After each action group + # Commit stage + "pre_commit", + "post_commit", + # Notification stage + "pre_notify", + "post_notify", + # Iteration lifecycle + "pre_iteration", + "post_iteration", + # Error handling + "on_error", + "on_rate_limit", +] + + +@dataclass +class HookContext: + """Context passed to hook functions. + + Hooks can read from and modify this context. Changes persist + across hooks in the same stage. 
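+ + Illustrative use inside a hook function (the PR number is a placeholder): + + ctx = HookContext(pr_number=123, stage="discovery") + ctx.record_timestamp("discovery_start")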
+ + Attributes: + pr_number: Pull request being processed + iteration: Current iteration number + stage: Current pipeline stage name + data: Stage-specific data (varies by hook) + metadata: Additional metadata for tracking + dry_run: Whether in dry-run mode + cancelled: Set to True to cancel the operation + """ + + pr_number: int + iteration: int = 1 + stage: str = "" + data: dict = field(default_factory=dict) + metadata: dict = field(default_factory=dict) + dry_run: bool = False + cancelled: bool = False + + def __post_init__(self): + if "timestamps" not in self.metadata: + self.metadata["timestamps"] = {} + + def record_timestamp(self, event: str) -> None: + """Record a timestamp for an event.""" + self.metadata["timestamps"][event] = datetime.now(timezone.utc).isoformat() + + def get_timestamp(self, event: str) -> str | None: + """Get the timestamp for an event.""" + return self.metadata.get("timestamps", {}).get(event) + + def to_dict(self) -> dict: + """Serialize context to dictionary.""" + return { + "pr_number": self.pr_number, + "iteration": self.iteration, + "stage": self.stage, + "data": self.data, + "metadata": self.metadata, + "dry_run": self.dry_run, + "cancelled": self.cancelled, + } + + @classmethod + def from_dict(cls, data: dict) -> "HookContext": + """Deserialize context from dictionary.""" + return cls( + pr_number=data["pr_number"], + iteration=data.get("iteration", 1), + stage=data.get("stage", ""), + data=data.get("data", {}), + metadata=data.get("metadata", {}), + dry_run=data.get("dry_run", False), + cancelled=data.get("cancelled", False), + ) + + +@dataclass +class HookResult: + """Result from running a hook function. + + Attributes: + hook_name: Name of the hook that was run + function_name: Name of the function that ran + success: Whether the hook completed successfully + duration_ms: Execution time in milliseconds + error: Error message if hook failed + output: Any output from the hook + """ + + hook_name: str + function_name: str + success: bool = True + duration_ms: float = 0.0 + error: str | None = None + output: Any = None + + def to_dict(self) -> dict: + """Serialize to dictionary.""" + return { + "hook_name": self.hook_name, + "function_name": self.function_name, + "success": self.success, + "duration_ms": self.duration_ms, + "error": self.error, + "output": str(self.output) if self.output else None, + } + + +class HookRegistry: + """Registry for managing hook functions outside of Cement. + + This provides a standalone hook registry for use in testing + or when not running within a Cement application. + + Usage: + registry = HookRegistry() + registry.register('pre_discovery', my_hook) + for result in registry.run('pre_discovery', context): + print(result) + """ + + def __init__(self): + self._hooks: dict[str, list[tuple[Callable, int]]] = {} + self._defined: set[str] = set() + + def define(self, name: str) -> None: + """Define a hook point.""" + self._defined.add(name) + if name not in self._hooks: + self._hooks[name] = [] + + def defined(self, name: str) -> bool: + """Check if a hook is defined.""" + return name in self._defined + + def register( + self, + name: str, + func: Callable, + weight: int = 0, + ) -> None: + """Register a function to a hook. 
+ + Args: + name: Hook name + func: Function to register + weight: Priority weight (higher runs first) + """ + if name not in self._defined: + self.define(name) + self._hooks[name].append((func, weight)) + # Sort by weight descending + self._hooks[name].sort(key=lambda x: x[1], reverse=True) + + def run(self, name: str, *args, **kwargs): + """Run all functions registered to a hook. + + Yields: + HookResult for each registered function + """ + import time + + if name not in self._hooks: + return + + for func, _weight in self._hooks[name]: + start = time.perf_counter() + result = HookResult( + hook_name=name, + function_name=func.__name__, + ) + + try: + output = func(*args, **kwargs) + result.output = output + result.success = True + except Exception as e: + result.success = False + result.error = str(e) + + result.duration_ms = (time.perf_counter() - start) * 1000 + yield result + + def list(self, name: str | None = None) -> list[str]: + """List registered functions for a hook or all hooks.""" + if name: + if name not in self._hooks: + return [] + return [func.__name__ for func, _ in self._hooks[name]] + return list(self._defined) + + +def define_pipeline_hooks(app) -> None: + """Define all pipeline hooks for a Cement application. + + Call this in App.setup() to define the hook points: + + def setup(self): + super().setup() + define_pipeline_hooks(self) + + Args: + app: Cement App instance + """ + for hook_name in PIPELINE_HOOKS: + if not app.hook.defined(hook_name): + app.hook.define(hook_name) + + +def run_hook(app, hook_name: str, context: HookContext) -> list[HookResult]: + """Run a hook and collect results. + + This is a convenience wrapper around app.hook.run() that + collects all results into a list. + + Args: + app: Cement App instance + hook_name: Name of the hook to run + context: HookContext to pass to hook functions + + Returns: + List of HookResult objects + """ + if not app.hook.defined(hook_name): + return [] + + results = [] + for result in app.hook.run(hook_name, app, context): + # app.hook.run yields the return value of each function + # Wrap in HookResult if needed + if isinstance(result, HookResult): + results.append(result) + else: + results.append( + HookResult( + hook_name=hook_name, + function_name="unknown", + success=True, + output=result, + ) + ) + return results + + +# --- Built-in hook functions --- + + +def log_stage_start(app, context: HookContext) -> None: + """Log when a stage starts. + + Register to pre_* hooks for debugging. + """ + app.log.debug( + f"[Stage] {context.stage} starting " + f"(PR #{context.pr_number}, iteration {context.iteration})" + ) + context.record_timestamp(f"{context.stage}_start") + + +def log_stage_end(app, context: HookContext) -> None: + """Log when a stage ends. + + Register to post_* hooks for debugging. + """ + context.record_timestamp(f"{context.stage}_end") + start = context.get_timestamp(f"{context.stage}_start") + end = context.get_timestamp(f"{context.stage}_end") + + if start and end: + from datetime import datetime as dt + + start_dt = dt.fromisoformat(start) + end_dt = dt.fromisoformat(end) + duration = (end_dt - start_dt).total_seconds() + app.log.debug(f"[Stage] {context.stage} completed in {duration:.2f}s") + else: + app.log.debug(f"[Stage] {context.stage} completed") + + +def check_cancelled(app, context: HookContext) -> None: + """Check if operation was cancelled. + + Register to pre_* hooks to enable cancellation. + Raises RuntimeError if cancelled. 
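+
+    Illustrative sketch of the intended flow (uses the standalone HookRegistry
+    defined above; with a Cement app, app.hook.register works the same way):
+
+        registry = HookRegistry()
+        registry.register("pre_fix", check_cancelled, weight=100)
+
+        ctx = HookContext(pr_number=795, stage="fix")
+        ctx.cancelled = True  # e.g. set by an earlier hook or a signal handler
+        for result in registry.run("pre_fix", None, ctx):
+            if not result.success:
+                print(result.error)  # "Operation cancelled before fix"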
+ """ + if context.cancelled: + raise RuntimeError(f"Operation cancelled before {context.stage}") + + +def rate_limit_handler(app, context: HookContext) -> None: + """Handle rate limit events. + + Register to on_rate_limit hook. + Logs the rate limit and waits. + """ + retry_after = context.data.get("retry_after", 60) + app.log.warning(f"Rate limited. Waiting {retry_after}s...") + + import time + + time.sleep(retry_after) + + +def error_handler(app, context: HookContext) -> None: + """Handle error events. + + Register to on_error hook. + Logs the error details. + """ + error = context.data.get("error", "Unknown error") + stage = context.data.get("stage", "unknown") + app.log.error(f"Error in {stage}: {error}") + + +# --- Hook registration helpers --- + + +def register_logging_hooks(app) -> None: + """Register debug logging hooks for all stages. + + Args: + app: Cement App instance + """ + for hook_name in PIPELINE_HOOKS: + if hook_name.startswith("pre_"): + app.hook.register(hook_name, log_stage_start) + elif hook_name.startswith("post_"): + app.hook.register(hook_name, log_stage_end) + + +def register_cancellation_hooks(app) -> None: + """Register cancellation check hooks for all pre_ stages. + + Args: + app: Cement App instance + """ + for hook_name in PIPELINE_HOOKS: + if hook_name.startswith("pre_"): + app.hook.register(hook_name, check_cancelled, weight=100) + + +def register_error_handlers(app) -> None: + """Register default error handlers. + + Args: + app: Cement App instance + """ + app.hook.register("on_error", error_handler) + app.hook.register("on_rate_limit", rate_limit_handler) + + +# --- Meta configuration helpers --- + + +def get_hook_definitions() -> list[str]: + """Get list of hook names for CementApp.Meta.define_hooks.""" + return PIPELINE_HOOKS.copy() + + +def get_default_hooks() -> list[tuple]: + """Get default hooks for CementApp.Meta.hooks. + + Returns list of (hook_name, function, weight) tuples. + """ + hooks = [] + + # Add cancellation checks (high priority) + for hook_name in PIPELINE_HOOKS: + if hook_name.startswith("pre_"): + hooks.append((hook_name, check_cancelled, 100)) + + # Add error handlers + hooks.append(("on_error", error_handler, 0)) + hooks.append(("on_rate_limit", rate_limit_handler, 0)) + + return hooks diff --git a/.claude/agents/skill-pr-addresser/src/location_progress.py b/.claude/agents/skill-pr-addresser/src/location_progress.py new file mode 100644 index 0000000..bd6e87f --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/location_progress.py @@ -0,0 +1,277 @@ +# src/location_progress.py +"""Location-level progress tracking for partial addressing. + +Stage 10 implementation: Track progress at the file/line level within action groups. +This enables resumption from partial completion within an iteration. 
+""" + +from dataclasses import dataclass, field +from datetime import datetime, timezone +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + pass + + +@dataclass +class AddressedLocation: + """Record of an addressed location within an action group.""" + + file: str + line: int | None + thread_id: str | None + addressed_at: datetime + commit_sha: str + + def to_dict(self) -> dict: + """Serialize to dictionary.""" + return { + "file": self.file, + "line": self.line, + "thread_id": self.thread_id, + "addressed_at": self.addressed_at.isoformat(), + "commit_sha": self.commit_sha, + } + + @classmethod + def from_dict(cls, data: dict) -> "AddressedLocation": + """Deserialize from dictionary.""" + addressed_at = data.get("addressed_at") + if isinstance(addressed_at, str): + addressed_at = datetime.fromisoformat(addressed_at) + else: + addressed_at = datetime.now(timezone.utc) + + return cls( + file=data["file"], + line=data.get("line"), + thread_id=data.get("thread_id"), + addressed_at=addressed_at, + commit_sha=data["commit_sha"], + ) + + +@dataclass +class ActionGroupProgress: + """Progress tracking for an action group.""" + + group_id: str + total_locations: int + addressed_locations: list[AddressedLocation] = field(default_factory=list) + + @property + def is_complete(self) -> bool: + """Check if all locations are addressed.""" + return len(self.addressed_locations) >= self.total_locations + + @property + def pending_count(self) -> int: + """Count of locations still to address.""" + return max(0, self.total_locations - len(self.addressed_locations)) + + @property + def addressed_count(self) -> int: + """Count of addressed locations.""" + return len(self.addressed_locations) + + @property + def progress_pct(self) -> float: + """Progress as percentage.""" + if self.total_locations == 0: + return 100.0 + return (len(self.addressed_locations) / self.total_locations) * 100 + + def has_location(self, file: str, line: int | None) -> bool: + """Check if location was already addressed.""" + return any( + loc.file == file and loc.line == line for loc in self.addressed_locations + ) + + def add_location( + self, + file: str, + line: int | None, + thread_id: str | None, + commit_sha: str, + ) -> None: + """Record an addressed location.""" + self.addressed_locations.append( + AddressedLocation( + file=file, + line=line, + thread_id=thread_id, + addressed_at=datetime.now(timezone.utc), + commit_sha=commit_sha, + ) + ) + + def get_pending_files(self, all_locations: list[dict]) -> list[dict]: + """Get locations that haven't been addressed yet. 
+ + Args: + all_locations: List of all locations in the group + + Returns: + List of locations not yet addressed + """ + return [ + loc + for loc in all_locations + if not self.has_location(loc.get("file", ""), loc.get("line")) + ] + + def to_dict(self) -> dict: + """Serialize to dictionary.""" + return { + "group_id": self.group_id, + "total_locations": self.total_locations, + "addressed_locations": [loc.to_dict() for loc in self.addressed_locations], + } + + @classmethod + def from_dict(cls, data: dict) -> "ActionGroupProgress": + """Deserialize from dictionary.""" + return cls( + group_id=data["group_id"], + total_locations=data["total_locations"], + addressed_locations=[ + AddressedLocation.from_dict(loc) + for loc in data.get("addressed_locations", []) + ], + ) + + +@dataclass +class IterationProgress: + """Progress tracking for the current iteration.""" + + iteration: int + started_at: datetime + groups: dict[str, ActionGroupProgress] = field(default_factory=dict) + completed_at: datetime | None = None + + def get_or_create_group( + self, + group_id: str, + total_locations: int, + ) -> ActionGroupProgress: + """Get existing or create new group progress.""" + if group_id not in self.groups: + self.groups[group_id] = ActionGroupProgress( + group_id=group_id, + total_locations=total_locations, + ) + return self.groups[group_id] + + def get_group(self, group_id: str) -> ActionGroupProgress | None: + """Get group progress if exists.""" + return self.groups.get(group_id) + + @property + def all_complete(self) -> bool: + """Check if all groups are complete.""" + return all(g.is_complete for g in self.groups.values()) + + @property + def total_addressed(self) -> int: + """Total addressed locations across all groups.""" + return sum(g.addressed_count for g in self.groups.values()) + + @property + def total_pending(self) -> int: + """Total pending locations across all groups.""" + return sum(g.pending_count for g in self.groups.values()) + + def complete(self) -> None: + """Mark iteration as completed.""" + self.completed_at = datetime.now(timezone.utc) + + def to_dict(self) -> dict: + """Serialize to dictionary.""" + return { + "iteration": self.iteration, + "started_at": self.started_at.isoformat(), + "completed_at": self.completed_at.isoformat() if self.completed_at else None, + "groups": {k: v.to_dict() for k, v in self.groups.items()}, + } + + @classmethod + def from_dict(cls, data: dict) -> "IterationProgress": + """Deserialize from dictionary.""" + started_at = data.get("started_at") + if isinstance(started_at, str): + started_at = datetime.fromisoformat(started_at) + else: + started_at = datetime.now(timezone.utc) + + completed_at = data.get("completed_at") + if isinstance(completed_at, str): + completed_at = datetime.fromisoformat(completed_at) + else: + completed_at = None + + return cls( + iteration=data["iteration"], + started_at=started_at, + completed_at=completed_at, + groups={ + k: ActionGroupProgress.from_dict(v) + for k, v in data.get("groups", {}).items() + }, + ) + + +@dataclass +class PRLocationProgress: + """Location progress tracking for a single PR across iterations.""" + + pr_number: int + iterations: list[IterationProgress] = field(default_factory=list) + + @property + def current_iteration(self) -> IterationProgress | None: + """Get the current (most recent incomplete) iteration.""" + for it in reversed(self.iterations): + if it.completed_at is None: + return it + return None + + @property + def last_iteration_number(self) -> int: + """Get the last iteration 
number.""" + if not self.iterations: + return 0 + return max(it.iteration for it in self.iterations) + + def start_iteration(self) -> IterationProgress: + """Start a new iteration.""" + iteration = IterationProgress( + iteration=self.last_iteration_number + 1, + started_at=datetime.now(timezone.utc), + ) + self.iterations.append(iteration) + return iteration + + def get_or_start_iteration(self) -> IterationProgress: + """Get current iteration or start new one.""" + current = self.current_iteration + if current: + return current + return self.start_iteration() + + def to_dict(self) -> dict: + """Serialize to dictionary.""" + return { + "pr_number": self.pr_number, + "iterations": [it.to_dict() for it in self.iterations], + } + + @classmethod + def from_dict(cls, data: dict) -> "PRLocationProgress": + """Deserialize from dictionary.""" + return cls( + pr_number=data["pr_number"], + iterations=[ + IterationProgress.from_dict(it) for it in data.get("iterations", []) + ], + ) diff --git a/.claude/agents/skill-pr-addresser/src/locking.py b/.claude/agents/skill-pr-addresser/src/locking.py new file mode 100644 index 0000000..4f34fe6 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/locking.py @@ -0,0 +1,183 @@ +# src/locking.py +"""File-based session locking to prevent concurrent runs. + +Stage 10 implementation: Ensures only one instance processes a PR at a time. +""" + +import fcntl +import json +import os +from contextlib import contextmanager +from dataclasses import dataclass +from datetime import datetime, timezone +from pathlib import Path +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from typing import Generator + + +class LockError(Exception): + """Raised when session lock cannot be acquired.""" + + pass + + +@dataclass +class SessionLock: + """File-based lock for a PR session.""" + + pr_number: int + lock_file: Path + holder_pid: int | None = None + acquired_at: datetime | None = None + _fd: object = None + + @classmethod + def acquire( + cls, + sessions_dir: Path, + pr_number: int, + timeout: float = 5.0, + ) -> "SessionLock": + """Acquire lock for a PR session. + + Creates a lock file in sessions_dir and acquires an exclusive lock. 
+ + Args: + sessions_dir: Directory for session data + pr_number: PR number to lock + timeout: Not used currently (non-blocking) + + Returns: + SessionLock instance + + Raises: + LockError: If lock cannot be acquired + """ + lock_file = sessions_dir / f".lock-pr-{pr_number}" + lock_file.parent.mkdir(parents=True, exist_ok=True) + + fd = open(lock_file, "w") + try: + fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) + except BlockingIOError: + fd.close() + existing = cls._read_lock_info(lock_file) + raise LockError( + f"PR #{pr_number} is being processed by PID {existing.holder_pid} " + f"since {existing.acquired_at}" + ) + + # Write lock info + lock = cls( + pr_number=pr_number, + lock_file=lock_file, + holder_pid=os.getpid(), + acquired_at=datetime.now(timezone.utc), + ) + lock._fd = fd + + fd.write(json.dumps(lock.to_dict())) + fd.flush() + + return lock + + @classmethod + def _read_lock_info(cls, lock_file: Path) -> "SessionLock": + """Read lock info from file.""" + try: + with open(lock_file) as f: + data = json.load(f) + return cls( + pr_number=data["pr_number"], + lock_file=lock_file, + holder_pid=data.get("holder_pid"), + acquired_at=datetime.fromisoformat(data["acquired_at"]) + if data.get("acquired_at") + else None, + ) + except (json.JSONDecodeError, FileNotFoundError): + return cls(pr_number=0, lock_file=lock_file) + + def release(self): + """Release the lock.""" + if self._fd: + fcntl.flock(self._fd, fcntl.LOCK_UN) + self._fd.close() + self.lock_file.unlink(missing_ok=True) + self._fd = None + + def to_dict(self) -> dict: + """Serialize lock info.""" + return { + "pr_number": self.pr_number, + "holder_pid": self.holder_pid, + "acquired_at": self.acquired_at.isoformat() if self.acquired_at else None, + } + + +@contextmanager +def session_lock( + sessions_dir: Path, + pr_number: int, +) -> "Generator[SessionLock, None, None]": + """Context manager for session locking. + + Usage: + with session_lock(sessions_dir, 795) as lock: + # Process PR + pass + # Lock automatically released + """ + lock = SessionLock.acquire(sessions_dir, pr_number) + try: + yield lock + finally: + lock.release() + + +def force_unlock( + sessions_dir: Path, + pr_number: int, + force: bool = False, +) -> tuple[bool, str]: + """Force release a stuck session lock. + + Use when a previous run crashed without releasing the lock. + Will check if the holding PID is still running unless force=True. + + Args: + sessions_dir: Directory for session data + pr_number: PR number to unlock + force: Skip PID check + + Returns: + Tuple of (success, message) + """ + lock_file = sessions_dir / f".lock-pr-{pr_number}" + + if not lock_file.exists(): + return True, f"No lock exists for PR #{pr_number}" + + # Read lock info + try: + with open(lock_file) as f: + lock_info = json.load(f) + except (json.JSONDecodeError, FileNotFoundError): + lock_info = {} + + holder_pid = lock_info.get("holder_pid") + + # Check if PID is still running + if holder_pid and not force: + try: + os.kill(holder_pid, 0) # Signal 0 = check if running + return False, ( + f"PID {holder_pid} is still running. Use force=True to override." 
+ ) + except OSError: + pass # Process not running, safe to unlock + + lock_file.unlink(missing_ok=True) + return True, f"Released lock for PR #{pr_number}" diff --git a/.claude/agents/skill-pr-addresser/src/models.py b/.claude/agents/skill-pr-addresser/src/models.py new file mode 100644 index 0000000..7bc51d0 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/models.py @@ -0,0 +1,526 @@ +# src/models.py +"""Protocol-based feedback type definitions with content hashing. + +Stage 8 implementation: Feedback types that support delta detection +via content hashing for #796. +""" + +from dataclasses import dataclass, field +from datetime import datetime, timezone +from typing import Literal, Protocol, runtime_checkable + +from .hashing import hash_content + + +# ============================================================================= +# Feedback Protocol +# ============================================================================= + + +@runtime_checkable +class Feedback(Protocol): + """Common interface for all feedback types.""" + + @property + def id(self) -> str: + """Unique identifier for this feedback.""" + ... + + @property + def content(self) -> str: + """Text content of the feedback.""" + ... + + @property + def content_hash(self) -> str: + """SHA-256 hash of content for delta detection.""" + ... + + @property + def author(self) -> str: + """GitHub username of feedback author.""" + ... + + @property + def created_at(self) -> datetime: + """When the feedback was created.""" + ... + + def is_resolved_by(self, response: "Feedback") -> bool: + """Check if this feedback is resolved by a response.""" + ... + + +# ============================================================================= +# Feedback Types +# ============================================================================= + + +@dataclass +class ReviewFeedback: + """Feedback from a PR review.""" + + id: str + state: Literal["CHANGES_REQUESTED", "COMMENTED", "APPROVED", "DISMISSED", "PENDING"] + body: str + author: str + submitted_at: datetime + + # Computed on init + content_hash: str = field(init=False) + + # Cross-references detected from body text + references_lines: list[int] = field(default_factory=list) + references_files: list[str] = field(default_factory=list) + + def __post_init__(self): + self.content_hash = hash_content(self.body) + + @property + def content(self) -> str: + return self.body + + @property + def created_at(self) -> datetime: + return self.submitted_at + + def is_resolved_by(self, response: Feedback) -> bool: + # Reviews are resolved by APPROVED state, not responses + return False + + @classmethod + def from_github(cls, data: dict) -> "ReviewFeedback": + """Create from GitHub API response.""" + submitted_at_str = data.get("submittedAt", "") + if submitted_at_str: + submitted_at = datetime.fromisoformat( + submitted_at_str.replace("Z", "+00:00") + ) + else: + submitted_at = datetime.now(timezone.utc) + + return cls( + id=data["id"], + state=data["state"], + body=data.get("body", ""), + author=data["author"]["login"], + submitted_at=submitted_at, + ) + + +@dataclass +class CommentFeedback: + """Feedback from a PR comment (not on specific code).""" + + id: str + body: str + author: str + created_at: datetime + reactions: dict[str, int] = field(default_factory=dict) + + # Computed + content_hash: str = field(init=False) + + def __post_init__(self): + self.content_hash = hash_content(self.body) + + @property + def content(self) -> str: + return self.body + + def has_acknowledgment_reaction(self, 
by_author: str) -> bool: + """Check if specific user reacted with acknowledgment. + + Note: Current implementation only has counts, not who reacted. + Would need GraphQL query for full reaction data. + """ + return self.reactions.get("thumbsUp", 0) > 0 + + def is_resolved_by(self, response: Feedback) -> bool: + """Check if reviewer withdrew their feedback.""" + if response.author != self.author: + return False + + resolved_phrases = [ + "never mind", + "ignore", + "looks good now", + "resolved", + "my mistake", + "disregard", + ] + return any(p in response.content.lower() for p in resolved_phrases) + + @classmethod + def from_github(cls, data: dict) -> "CommentFeedback": + """Create from GitHub API response.""" + created_at_str = data.get("createdAt", "") + if created_at_str: + created_at = datetime.fromisoformat( + created_at_str.replace("Z", "+00:00") + ) + else: + created_at = datetime.now(timezone.utc) + + return cls( + id=data["id"], + body=data["body"], + author=data["author"]["login"], + created_at=created_at, + reactions=data.get("reactions", {}), + ) + + +@dataclass +class ThreadComment: + """A single comment within a review thread.""" + + id: str + body: str + author: str + created_at: datetime + + # Computed + content_hash: str = field(init=False) + is_resolution_signal: bool = field(init=False) + + def __post_init__(self): + self.content_hash = hash_content(self.body) + # Detect if this comment signals resolution + resolution_phrases = ["done", "fixed", "addressed", "resolved", "will do"] + self.is_resolution_signal = any( + p in self.body.lower() for p in resolution_phrases + ) + + @property + def content(self) -> str: + return self.body + + +@dataclass +class ThreadFeedback: + """Feedback from a code review thread (line-specific).""" + + id: str + path: str + line: int | None + is_resolved: bool + is_outdated: bool + comments: list[ThreadComment] + + # Set when linked to a review body + linked_to_review: str | None = None + + @property + def first_comment(self) -> ThreadComment | None: + return self.comments[0] if self.comments else None + + @property + def author(self) -> str | None: + return self.first_comment.author if self.first_comment else None + + @property + def content(self) -> str: + return self.first_comment.body if self.first_comment else "" + + @property + def content_hash(self) -> str: + return self.first_comment.content_hash if self.first_comment else "" + + @property + def created_at(self) -> datetime: + return self.first_comment.created_at if self.first_comment else datetime.min + + def get_new_comments_since(self, last_seen_id: str | None) -> list[ThreadComment]: + """Get comments added after last_seen_id.""" + if not last_seen_id: + return self.comments + + found = False + new_comments = [] + for comment in self.comments: + if found: + new_comments.append(comment) + elif comment.id == last_seen_id: + found = True + return new_comments + + def has_author_resolution(self, pr_author: str) -> bool: + """Check if PR author signaled resolution in thread.""" + for comment in self.comments[1:]: # Skip first (original feedback) + if comment.author == pr_author and comment.is_resolution_signal: + return True + return False + + def has_reviewer_withdrawal(self) -> bool: + """Check if reviewer withdrew their feedback.""" + if not self.first_comment: + return False + + reviewer = self.first_comment.author + withdrawal_phrases = ["never mind", "ignore this", "my mistake", "disregard"] + + for comment in self.comments[1:]: + if comment.author == reviewer: + if any(p in 
comment.body.lower() for p in withdrawal_phrases): + return True + return False + + def is_resolved_by(self, response: Feedback) -> bool: + return ( + self.has_author_resolution(response.author) + or self.has_reviewer_withdrawal() + ) + + @classmethod + def from_github(cls, data: dict) -> "ThreadFeedback": + """Create from GitHub GraphQL response.""" + comments = [] + nodes = data.get("comments", {}).get("nodes", []) + + for c in nodes: + if not isinstance(c, dict): + continue + + created_at_str = c.get("createdAt", "") + if created_at_str: + created_at = datetime.fromisoformat( + created_at_str.replace("Z", "+00:00") + ) + else: + created_at = datetime.now(timezone.utc) + + comments.append( + ThreadComment( + id=c.get("id", ""), + body=c.get("body", ""), + author=c.get("author", {}).get("login", ""), + created_at=created_at, + ) + ) + + return cls( + id=data["id"], + path=data.get("path", ""), + line=data.get("line"), + is_resolved=data.get("isResolved", False), + is_outdated=data.get("isOutdated", False), + comments=comments, + ) + + +# ============================================================================= +# Core Types for Pipeline +# ============================================================================= + + +@dataclass +class TokenUsage: + """Token usage from LLM API calls.""" + + input_tokens: int = 0 + output_tokens: int = 0 + cache_read_tokens: int = 0 + cache_write_tokens: int = 0 + total_cost: float = 0.0 + + @property + def total(self) -> int: + return self.input_tokens + self.output_tokens + + def __add__(self, other: "TokenUsage") -> "TokenUsage": + return TokenUsage( + input_tokens=self.input_tokens + other.input_tokens, + output_tokens=self.output_tokens + other.output_tokens, + cache_read_tokens=self.cache_read_tokens + other.cache_read_tokens, + cache_write_tokens=self.cache_write_tokens + other.cache_write_tokens, + total_cost=self.total_cost + other.total_cost, + ) + + def to_dict(self) -> dict: + return { + "input_tokens": self.input_tokens, + "output_tokens": self.output_tokens, + "cache_read_tokens": self.cache_read_tokens, + "cache_write_tokens": self.cache_write_tokens, + "total_cost": self.total_cost, + "total": self.total, + } + + +@dataclass +class Location: + """A location in a file that needs attention.""" + + file: str + line: int | None = None + end_line: int | None = None + thread_id: str | None = None + feedback_id: str | None = None + + def to_dict(self) -> dict: + return { + "file": self.file, + "line": self.line, + "end_line": self.end_line, + "thread_id": self.thread_id, + "feedback_id": self.feedback_id, + } + + @classmethod + def from_dict(cls, data: dict) -> "Location": + return cls( + file=data["file"], + line=data.get("line"), + end_line=data.get("end_line"), + thread_id=data.get("thread_id"), + feedback_id=data.get("feedback_id"), + ) + + +@dataclass +class ActionGroup: + """A group of related feedback to address together.""" + + id: str + type: str # "add_section", "fix_code", "move_content", "update_docs", etc. 
+ description: str + locations: list[Location] = field(default_factory=list) + priority: str = "medium" # "critical", "high", "medium", "low" + linked_review_id: str | None = None # If this group came from cross-ref linking + + @property + def location_count(self) -> int: + return len(self.locations) + + @property + def thread_ids(self) -> list[str]: + """Get all thread IDs from locations.""" + return [loc.thread_id for loc in self.locations if loc.thread_id] + + def to_dict(self) -> dict: + return { + "id": self.id, + "type": self.type, + "description": self.description, + "locations": [loc.to_dict() for loc in self.locations], + "priority": self.priority, + "linked_review_id": self.linked_review_id, + } + + @classmethod + def from_dict(cls, data: dict) -> "ActionGroup": + return cls( + id=data["id"], + type=data["type"], + description=data["description"], + locations=[Location.from_dict(loc) for loc in data.get("locations", [])], + priority=data.get("priority", "medium"), + linked_review_id=data.get("linked_review_id"), + ) + + +@dataclass +class AddressedLocation: + """Record of an addressed location within an action group.""" + + file: str + line: int | None + thread_id: str | None + addressed_at: datetime + commit_sha: str + + def to_dict(self) -> dict: + return { + "file": self.file, + "line": self.line, + "thread_id": self.thread_id, + "addressed_at": self.addressed_at.isoformat(), + "commit_sha": self.commit_sha, + } + + @classmethod + def from_dict(cls, data: dict) -> "AddressedLocation": + return cls( + file=data["file"], + line=data.get("line"), + thread_id=data.get("thread_id"), + addressed_at=datetime.fromisoformat(data["addressed_at"]), + commit_sha=data["commit_sha"], + ) + + +@dataclass +class FixResult: + """Result of fixing an action group.""" + + group_id: str + has_changes: bool = False + addressed_locations: list[AddressedLocation] = field(default_factory=list) + addressed_thread_ids: list[str] = field(default_factory=list) + failed: bool = False + error: Exception | None = None + skipped: bool = False + reason: str | None = None + token_usage: TokenUsage = field(default_factory=TokenUsage) + + @classmethod + def success( + cls, + group_id: str, + addressed_locations: list[AddressedLocation], + addressed_thread_ids: list[str], + token_usage: TokenUsage | None = None, + ) -> "FixResult": + return cls( + group_id=group_id, + has_changes=True, + addressed_locations=addressed_locations, + addressed_thread_ids=addressed_thread_ids, + token_usage=token_usage or TokenUsage(), + ) + + @classmethod + def skipped_result(cls, group_id: str, reason: str) -> "FixResult": + return cls(group_id=group_id, skipped=True, reason=reason) + + @classmethod + def failed_result(cls, group_id: str, error: Exception) -> "FixResult": + return cls(group_id=group_id, failed=True, error=error) + + def to_dict(self) -> dict: + return { + "group_id": self.group_id, + "has_changes": self.has_changes, + "addressed_locations": [loc.to_dict() for loc in self.addressed_locations], + "addressed_thread_ids": self.addressed_thread_ids, + "failed": self.failed, + "error": str(self.error) if self.error else None, + "skipped": self.skipped, + "reason": self.reason, + "token_usage": self.token_usage.to_dict(), + } + + +@dataclass +class RawFeedback: + """Raw feedback from discovery stage. + + Defined here to avoid circular imports between filter.py and pipeline.py. 
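+
+    Example (values are illustrative only):
+
+        comment = CommentFeedback(
+            id="IC_1",
+            body="Please tighten the overview section.",
+            author="reviewer",
+            created_at=datetime.now(timezone.utc),
+        )
+        raw = RawFeedback(comments=[comment])
+        raw.total_count   # 1
+        raw.summary()     # {"reviews": 0, "comments": 1, "threads": 0, "total": 1}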
+ """ + + reviews: list[ReviewFeedback] = field(default_factory=list) + comments: list[CommentFeedback] = field(default_factory=list) + threads: list[ThreadFeedback] = field(default_factory=list) + + @property + def total_count(self) -> int: + return len(self.reviews) + len(self.comments) + len(self.threads) + + def summary(self) -> dict: + return { + "reviews": len(self.reviews), + "comments": len(self.comments), + "threads": len(self.threads), + "total": self.total_count, + } diff --git a/.claude/agents/skill-pr-addresser/src/pipeline.py b/.claude/agents/skill-pr-addresser/src/pipeline.py new file mode 100644 index 0000000..52dbd88 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/pipeline.py @@ -0,0 +1,868 @@ +"""Pipeline executor for skill-pr-addresser. + +Orchestrates all pipeline stages using the hooks framework: +1. Discovery - Find PR and extract feedback +2. Filter - Remove already-addressed items (delta detection) +3. Consolidate - Group feedback into action groups +4. Plan - Create execution plan +5. Fix - Implement fixes for each action group +6. Commit - Commit and push changes +7. Notify - Post comments and request re-review + +Each stage has pre_/post_ hooks for extensibility. +""" + +import logging +from dataclasses import dataclass, field +from datetime import datetime, timezone +from pathlib import Path +from typing import Any, Callable + +from .commit import create_iteration_commit, push_changes +from .consolidate import consolidate_feedback, ConsolidationResult +from .discovery import discover, DiscoveryContext +from .dry_run import DryRunMode, DryRunSummary +from .filter import filter_feedback, FilterResult +from .fix import fix_action_group, FixGroupResult +from .github_pr import ( + add_pr_comment, + request_rereview, + resolve_addressed_threads, +) +from .hooks import ( + HookContext, + HookRegistry, + PIPELINE_HOOKS, + run_hook, +) +from .location_progress import PRLocationProgress +from .locking import session_lock, LockError +from .models import ActionGroup, FeedbackItem +from .planner import create_execution_plan, ExecutionPlan +from .templates import ( + format_summary_comment, + format_iteration_limit_comment, + format_error_comment, + format_no_feedback_comment, + format_partial_progress_comment, +) + +log = logging.getLogger(__name__) + + +@dataclass +class StageResult: + """Result from a single pipeline stage.""" + + stage: str + success: bool + duration_ms: float = 0.0 + error: str | None = None + data: dict = field(default_factory=dict) + + def to_dict(self) -> dict: + return { + "stage": self.stage, + "success": self.success, + "duration_ms": self.duration_ms, + "error": self.error, + "data": self.data, + } + + +@dataclass +class IterationResult: + """Result from a single addressing iteration.""" + + iteration: int + addressed_count: int = 0 + skipped_count: int = 0 + failed_count: int = 0 + commit_sha: str | None = None + pushed: bool = False + stage_results: list[StageResult] = field(default_factory=list) + cost: float = 0.0 + + @property + def success(self) -> bool: + return all(r.success for r in self.stage_results) + + def to_dict(self) -> dict: + return { + "iteration": self.iteration, + "addressed_count": self.addressed_count, + "skipped_count": self.skipped_count, + "failed_count": self.failed_count, + "commit_sha": self.commit_sha, + "pushed": self.pushed, + "success": self.success, + "cost": self.cost, + "stages": [r.to_dict() for r in self.stage_results], + } + + +@dataclass +class PipelineResult: + """Final result from the 
pipeline.""" + + success: bool + pr_number: int + iterations_run: int = 0 + total_addressed: int = 0 + total_skipped: int = 0 + total_failed: int = 0 + final_commit_sha: str | None = None + ready_for_review: bool = False + error: str | None = None + iteration_results: list[IterationResult] = field(default_factory=list) + total_cost: float = 0.0 + dry_run: bool = False + + def to_dict(self) -> dict: + return { + "success": self.success, + "pr_number": self.pr_number, + "iterations_run": self.iterations_run, + "total_addressed": self.total_addressed, + "total_skipped": self.total_skipped, + "total_failed": self.total_failed, + "final_commit_sha": self.final_commit_sha, + "ready_for_review": self.ready_for_review, + "error": self.error, + "total_cost": self.total_cost, + "dry_run": self.dry_run, + "iterations": [r.to_dict() for r in self.iteration_results], + } + + +class Pipeline: + """Orchestrates the feedback addressing pipeline. + + Usage: + pipeline = Pipeline( + agent_dir=Path("..."), + sessions_dir=Path("..."), + owner="aRustyDev", + repo="ai", + ) + result = pipeline.run(pr_number=795, max_iterations=3) + + With hooks: + pipeline = Pipeline(...) + pipeline.hooks.register("pre_fix", my_custom_hook) + result = pipeline.run(pr_number=795) + + Dry-run mode: + result = pipeline.run(pr_number=795, dry_run=True) + """ + + def __init__( + self, + agent_dir: Path, + sessions_dir: Path, + owner: str, + repo: str, + repo_path: Path | None = None, + worktree_base: Path | None = None, + rate_limit_delay: float = 1.0, + app=None, # Optional Cement app for hooks + ): + """Initialize the pipeline. + + Args: + agent_dir: Path to the agent directory + sessions_dir: Path to sessions directory + owner: Repository owner + repo: Repository name + repo_path: Path to the repository root + worktree_base: Base directory for worktrees + rate_limit_delay: Delay between API calls + app: Optional Cement app for hooks integration + """ + self.agent_dir = agent_dir + self.sessions_dir = sessions_dir + self.owner = owner + self.repo = repo + self.repo_path = repo_path or agent_dir.parent.parent.parent + self.worktree_base = worktree_base or Path("/private/tmp/worktrees") + self.rate_limit_delay = rate_limit_delay + self.app = app + + # Create standalone hook registry if no app + self.hooks = HookRegistry() + for hook_name in PIPELINE_HOOKS: + self.hooks.define(hook_name) + + # Progress tracking + self._progress: PRLocationProgress | None = None + self._dry_run_mode: DryRunMode | None = None + + def run( + self, + pr_number: int, + max_iterations: int = 3, + skill_path: str | None = None, + force: bool = False, + dry_run: bool = False, + stop_after: str | None = None, + ) -> PipelineResult: + """Run the pipeline for a PR. 
+ + Args: + pr_number: Pull request number + max_iterations: Maximum addressing iterations + skill_path: Explicit skill path (auto-detected if None) + force: Force addressing even if no pending feedback + dry_run: Preview changes without making them + stop_after: Stop after this stage (for debugging) + + Returns: + PipelineResult with summary of what was done + """ + import time + + start_time = time.perf_counter() + + # Initialize dry-run mode + self._dry_run_mode = DryRunMode(enabled=dry_run, stop_after=stop_after) + + # Initialize progress tracking + self._progress = PRLocationProgress(pr_number=pr_number) + + result = PipelineResult( + success=False, + pr_number=pr_number, + dry_run=dry_run, + ) + + try: + # Acquire session lock + with session_lock(self.sessions_dir, pr_number) as lock: + log.info(f"Acquired lock for PR #{pr_number}") + + # Run the pipeline + result = self._run_pipeline( + pr_number=pr_number, + max_iterations=max_iterations, + skill_path=skill_path, + force=force, + ) + + except LockError as e: + log.error(f"Could not acquire lock: {e}") + result.error = f"Lock error: {e}" + + except Exception as e: + log.exception(f"Pipeline error: {e}") + result.error = str(e) + + # Run error hook + self._run_hook( + "on_error", + HookContext( + pr_number=pr_number, + stage="pipeline", + data={"error": str(e), "stage": "pipeline"}, + ), + ) + + duration_ms = (time.perf_counter() - start_time) * 1000 + log.info(f"Pipeline completed in {duration_ms:.0f}ms") + + return result + + def _run_pipeline( + self, + pr_number: int, + max_iterations: int, + skill_path: str | None, + force: bool, + ) -> PipelineResult: + """Internal pipeline execution.""" + result = PipelineResult( + success=False, + pr_number=pr_number, + dry_run=self._dry_run_mode.enabled if self._dry_run_mode else False, + ) + + # Stage 1: Discovery + ctx = self._run_discovery(pr_number, skill_path, force) + if ctx is None: + result.error = "Discovery failed" + return result + + if self._should_stop("discovery"): + result.success = True + return result + + # Check if there's feedback to address + if not ctx.needs_changes and not force: + log.info("No actionable feedback to address") + result.success = True + result.ready_for_review = True + return result + + # Run iterations + for iteration in range(1, max_iterations + 1): + log.info(f"=== Iteration {iteration}/{max_iterations} ===") + + iteration_result = self._run_iteration(ctx, iteration) + result.iteration_results.append(iteration_result) + result.iterations_run = iteration + + # Aggregate counts + result.total_addressed += iteration_result.addressed_count + result.total_skipped += iteration_result.skipped_count + result.total_failed += iteration_result.failed_count + result.total_cost += iteration_result.cost + + if iteration_result.commit_sha: + result.final_commit_sha = iteration_result.commit_sha + + # Check if we're done + if iteration_result.addressed_count == 0: + log.info("No items addressed in this iteration, stopping") + break + + # Re-discover for next iteration + if iteration < max_iterations: + ctx = self._run_discovery(pr_number, skill_path, force=True) + if ctx is None or not ctx.needs_changes: + log.info("No more feedback to address") + break + + # Final notification + if not self._dry_run_mode.enabled: + self._run_notify(ctx, result) + + result.success = True + result.ready_for_review = result.total_failed == 0 + + return result + + def _run_discovery( + self, + pr_number: int, + skill_path: str | None, + force: bool, + ) -> DiscoveryContext | None: + 
"""Run discovery stage.""" + import time + + start = time.perf_counter() + hook_ctx = HookContext(pr_number=pr_number, stage="discovery") + + self._run_hook("pre_discovery", hook_ctx) + if hook_ctx.cancelled: + return None + + try: + ctx = discover( + owner=self.owner, + repo=self.repo, + pr_number=pr_number, + sessions_dir=self.sessions_dir, + worktree_base=self.worktree_base, + repo_path=self.repo_path, + skill_path=skill_path, + force=force, + ) + + hook_ctx.data["discovery_context"] = ctx + self._run_hook("post_discovery", hook_ctx) + + duration = (time.perf_counter() - start) * 1000 + log.debug(f"Discovery completed in {duration:.0f}ms") + + return ctx + + except Exception as e: + log.error(f"Discovery failed: {e}") + hook_ctx.data["error"] = str(e) + self._run_hook("on_error", hook_ctx) + return None + + def _run_iteration( + self, + ctx: DiscoveryContext, + iteration: int, + ) -> IterationResult: + """Run a single addressing iteration.""" + import time + + result = IterationResult(iteration=iteration) + hook_ctx = HookContext( + pr_number=ctx.pr_number, + iteration=iteration, + stage="iteration", + ) + + self._run_hook("pre_iteration", hook_ctx) + if hook_ctx.cancelled: + return result + + # Start progress tracking for this iteration + iter_progress = self._progress.get_or_start_iteration() + + # Stage 2: Filter (delta detection) + filter_result = self._run_filter(ctx, iteration) + if filter_result is None or self._should_stop("filter"): + return result + + # Stage 3: Consolidate + consolidation = self._run_consolidate(ctx, filter_result, iteration) + if consolidation is None or self._should_stop("consolidate"): + return result + + # Stage 4: Plan + plan = self._run_plan(consolidation, iteration) + if plan is None or self._should_stop("plan"): + return result + + # Stage 5: Fix (for each action group) + fix_results = self._run_fixes(ctx, plan, iteration) + + # Aggregate fix results + addressed_threads = [] + for fix_result in fix_results: + if fix_result.success and not fix_result.skipped: + result.addressed_count += len(fix_result.addressed_locations) + addressed_threads.extend(fix_result.addressed_thread_ids) + elif fix_result.skipped: + result.skipped_count += 1 + else: + result.failed_count += 1 + + # Stage 6: Commit + if result.addressed_count > 0 and not self._dry_run_mode.enabled: + commit_result = self._run_commit(ctx, fix_results, iteration) + if commit_result: + result.commit_sha = commit_result.get("sha") + result.pushed = commit_result.get("pushed", False) + + # Resolve addressed threads + if addressed_threads: + resolve_addressed_threads( + self.owner, + self.repo, + addressed_threads, + delay=self.rate_limit_delay, + ) + + # Complete iteration progress + iter_progress.complete() + + self._run_hook("post_iteration", hook_ctx) + + return result + + def _run_filter( + self, + ctx: DiscoveryContext, + iteration: int, + ) -> FilterResult | None: + """Run filter stage for delta detection.""" + import time + + start = time.perf_counter() + hook_ctx = HookContext( + pr_number=ctx.pr_number, + iteration=iteration, + stage="filter", + ) + + self._run_hook("pre_filter", hook_ctx) + if hook_ctx.cancelled: + return None + + try: + # Get all feedback items from context + all_items = self._extract_feedback_items(ctx) + + # Load previous session state for delta detection + prev_state = self._load_previous_state(ctx.pr_number) + + filter_result = filter_feedback( + feedback_items=all_items, + previous_state=prev_state, + ) + + hook_ctx.data["filter_result"] = filter_result + 
self._run_hook("post_filter", hook_ctx) + + duration = (time.perf_counter() - start) * 1000 + log.debug( + f"Filter: {filter_result.new_count} new, " + f"{filter_result.unchanged_count} unchanged, " + f"{filter_result.resolved_count} resolved " + f"({duration:.0f}ms)" + ) + + return filter_result + + except Exception as e: + log.error(f"Filter stage failed: {e}") + return None + + def _run_consolidate( + self, + ctx: DiscoveryContext, + filter_result: FilterResult, + iteration: int, + ) -> ConsolidationResult | None: + """Run consolidation stage.""" + import time + + start = time.perf_counter() + hook_ctx = HookContext( + pr_number=ctx.pr_number, + iteration=iteration, + stage="consolidate", + ) + + self._run_hook("pre_consolidate", hook_ctx) + if hook_ctx.cancelled: + return None + + try: + consolidation = consolidate_feedback( + agent_dir=self.agent_dir, + ctx=ctx, + filtered_items=filter_result.new_items, + ) + + hook_ctx.data["consolidation"] = consolidation + self._run_hook("post_consolidate", hook_ctx) + + duration = (time.perf_counter() - start) * 1000 + log.debug( + f"Consolidation: {len(consolidation.action_groups)} groups " + f"({duration:.0f}ms)" + ) + + return consolidation + + except Exception as e: + log.error(f"Consolidation stage failed: {e}") + return None + + def _run_plan( + self, + consolidation: ConsolidationResult, + iteration: int, + ) -> ExecutionPlan | None: + """Run planning stage.""" + import time + + start = time.perf_counter() + hook_ctx = HookContext( + pr_number=0, # Not available here + iteration=iteration, + stage="plan", + ) + + self._run_hook("pre_plan", hook_ctx) + if hook_ctx.cancelled: + return None + + try: + plan = create_execution_plan( + action_groups=consolidation.action_groups, + guidance=consolidation.guidance, + ) + + hook_ctx.data["plan"] = plan + self._run_hook("post_plan", hook_ctx) + + duration = (time.perf_counter() - start) * 1000 + log.debug(f"Plan: {len(plan.steps)} steps ({duration:.0f}ms)") + + return plan + + except Exception as e: + log.error(f"Planning stage failed: {e}") + return None + + def _run_fixes( + self, + ctx: DiscoveryContext, + plan: ExecutionPlan, + iteration: int, + ) -> list[FixGroupResult]: + """Run fix stage for each planned step.""" + import time + + results = [] + hook_ctx = HookContext( + pr_number=ctx.pr_number, + iteration=iteration, + stage="fix", + ) + + self._run_hook("pre_fix", hook_ctx) + if hook_ctx.cancelled: + return results + + for step in plan.steps: + start = time.perf_counter() + + group_ctx = HookContext( + pr_number=ctx.pr_number, + iteration=iteration, + stage="fix_group", + data={"group_id": step.group_id}, + ) + + self._run_hook("pre_fix_group", group_ctx) + if group_ctx.cancelled: + continue + + try: + if self._dry_run_mode.enabled: + # Dry run - record what would be done + self._dry_run_mode.would_commit( + f"Fix {step.group_id}", + [loc.file for loc in step.action_group.locations], + ) + fix_result = FixGroupResult( + group_id=step.group_id, + success=True, + skipped=False, + addressed_locations=[], + addressed_thread_ids=[], + ) + else: + fix_result = fix_action_group( + agent_dir=self.agent_dir, + ctx=ctx, + action_group=step.action_group, + guidance=plan.guidance, + ) + + group_ctx.data["fix_result"] = fix_result + self._run_hook("post_fix_group", group_ctx) + + results.append(fix_result) + + duration = (time.perf_counter() - start) * 1000 + log.debug( + f"Fixed group {step.group_id}: " + f"{len(fix_result.addressed_locations)} locations " + f"({duration:.0f}ms)" + ) + + except Exception 
as e: + log.error(f"Fix failed for group {step.group_id}: {e}") + results.append( + FixGroupResult( + group_id=step.group_id, + success=False, + skipped=False, + reason=str(e), + addressed_locations=[], + addressed_thread_ids=[], + ) + ) + + hook_ctx.data["fix_results"] = results + self._run_hook("post_fix", hook_ctx) + + return results + + def _run_commit( + self, + ctx: DiscoveryContext, + fix_results: list[FixGroupResult], + iteration: int, + ) -> dict | None: + """Run commit stage.""" + import time + + start = time.perf_counter() + hook_ctx = HookContext( + pr_number=ctx.pr_number, + iteration=iteration, + stage="commit", + ) + + self._run_hook("pre_commit", hook_ctx) + if hook_ctx.cancelled: + return None + + try: + # Create commit + commit_sha = create_iteration_commit( + worktree_path=ctx.worktree_path, + pr_number=ctx.pr_number, + iteration=iteration, + fix_results=fix_results, + ) + + if not commit_sha: + log.warning("No changes to commit") + return None + + # Push changes + pushed = push_changes( + worktree_path=ctx.worktree_path, + branch=ctx.pr.head_branch, + ) + + result = {"sha": commit_sha, "pushed": pushed} + + hook_ctx.data["commit_result"] = result + self._run_hook("post_commit", hook_ctx) + + duration = (time.perf_counter() - start) * 1000 + log.debug(f"Commit {commit_sha[:7]} pushed={pushed} ({duration:.0f}ms)") + + return result + + except Exception as e: + log.error(f"Commit stage failed: {e}") + return None + + def _run_notify( + self, + ctx: DiscoveryContext, + result: PipelineResult, + ) -> None: + """Run notification stage.""" + hook_ctx = HookContext( + pr_number=ctx.pr_number, + iteration=result.iterations_run, + stage="notify", + ) + + self._run_hook("pre_notify", hook_ctx) + if hook_ctx.cancelled: + return + + try: + # Format summary comment + if result.total_addressed > 0: + comment = format_summary_comment( + pr_number=ctx.pr_number, + iteration=result.iterations_run, + fix_results=[r.to_dict() for r in result.iteration_results], + commit_sha=result.final_commit_sha or "", + ) + add_pr_comment(self.owner, self.repo, ctx.pr_number, comment) + + # Request re-review if ready + if result.ready_for_review: + reviewers = [r.author for r in ctx.blocking_reviews] + if reviewers: + request_rereview(self.owner, self.repo, ctx.pr_number, reviewers) + + elif result.error: + comment = format_error_comment("pipeline", result.error) + add_pr_comment(self.owner, self.repo, ctx.pr_number, comment) + + else: + comment = format_no_feedback_comment() + add_pr_comment(self.owner, self.repo, ctx.pr_number, comment) + + self._run_hook("post_notify", hook_ctx) + + except Exception as e: + log.error(f"Notification failed: {e}") + + def _run_hook(self, hook_name: str, context: HookContext) -> None: + """Run a hook, using app hooks if available, otherwise standalone registry.""" + if self.app and hasattr(self.app, "hook"): + run_hook(self.app, hook_name, context) + else: + list(self.hooks.run(hook_name, self, context)) + + def _should_stop(self, stage: str) -> bool: + """Check if we should stop after this stage.""" + if self._dry_run_mode and self._dry_run_mode.stop_after == stage: + log.info(f"[DRY RUN] Stopping after {stage} stage") + return True + return False + + def _extract_feedback_items(self, ctx: DiscoveryContext) -> list[FeedbackItem]: + """Extract feedback items from discovery context.""" + items = [] + + # From reviews + for review in ctx.all_reviews: + items.append( + FeedbackItem( + id=f"review-{review.id}", + source_type="review", + author=review.author, + 
body=review.body or "", + file=None, + line=None, + thread_id=None, + ) + ) + + # From threads + for thread in ctx.unresolved_threads: + items.append( + FeedbackItem( + id=f"thread-{thread.id}", + source_type="thread", + author=thread.author, + body=thread.body or "", + file=thread.path, + line=thread.line, + thread_id=thread.id, + ) + ) + + return items + + def _load_previous_state(self, pr_number: int) -> dict | None: + """Load previous session state for delta detection.""" + import json + + state_file = self.sessions_dir / f"pr-{pr_number}" / "state.json" + if state_file.exists(): + try: + return json.loads(state_file.read_text()) + except Exception: + pass + return None + + +def run_pipeline( + pr_number: int, + agent_dir: Path, + sessions_dir: Path, + owner: str, + repo: str, + max_iterations: int = 3, + dry_run: bool = False, + **kwargs, +) -> PipelineResult: + """Convenience function to run the pipeline. + + Args: + pr_number: Pull request number + agent_dir: Path to agent directory + sessions_dir: Path to sessions directory + owner: Repository owner + repo: Repository name + max_iterations: Maximum iterations + dry_run: Preview changes only + **kwargs: Additional Pipeline constructor args + + Returns: + PipelineResult + """ + pipeline = Pipeline( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner=owner, + repo=repo, + **kwargs, + ) + return pipeline.run( + pr_number=pr_number, + max_iterations=max_iterations, + dry_run=dry_run, + ) diff --git a/.claude/agents/skill-pr-addresser/src/planner.py b/.claude/agents/skill-pr-addresser/src/planner.py new file mode 100644 index 0000000..b008062 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/planner.py @@ -0,0 +1,283 @@ +# src/planner.py +"""Execution planning for action groups. + +Stage 7.5 interface module that creates execution plans from +consolidated action groups. +""" + +from dataclasses import dataclass, field +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from .consolidate import ConsolidationResult + from .feedback import ActionGroup + + +@dataclass +class PlanStep: + """A single step in the execution plan.""" + + group_id: str + priority: str # "critical", "high", "medium", "low" + description: str + estimated_changes: int + dependencies: list[str] = field(default_factory=list) + + +@dataclass +class ExecutionPlan: + """Ordered execution plan.""" + + steps: list[PlanStep] = field(default_factory=list) + + @property + def total_items(self) -> int: + return len(self.steps) + + def get_step(self, group_id: str) -> PlanStep | None: + return next((s for s in self.steps if s.group_id == group_id), None) + + def get_next_pending(self, completed: set[str]) -> PlanStep | None: + """Get the next step whose dependencies are all completed. + + Args: + completed: Set of completed group IDs + + Returns: + Next executable step, or None if all done or blocked + """ + for step in self.steps: + if step.group_id in completed: + continue + if all(dep in completed for dep in step.dependencies): + return step + return None + + def get_executable_steps(self, completed: set[str]) -> list[PlanStep]: + """Get all steps that can be executed in parallel. 
+ + Args: + completed: Set of completed group IDs + + Returns: + List of steps whose dependencies are satisfied + """ + executable = [] + for step in self.steps: + if step.group_id in completed: + continue + if all(dep in completed for dep in step.dependencies): + executable.append(step) + return executable + + +def create_plan(consolidated: "ConsolidationResult") -> ExecutionPlan: + """Create execution plan from consolidated feedback. + + Ordering strategy: + 1. Critical items first (blocking issues) + 2. High priority (major improvements) + 3. Medium priority (enhancements) + 4. Low priority (nice-to-have) + + Within priority, order by: + - Dependencies (do prerequisites first) + - Location (top-to-bottom in file) + - Size (smaller changes first) + + Args: + consolidated: Consolidated feedback + + Returns: + ExecutionPlan with ordered steps + """ + steps: list[PlanStep] = [] + + for group in consolidated.action_groups: + priority = _assign_priority(group) + estimated = _estimate_changes(group) + deps = _find_dependencies(group, consolidated.action_groups) + + steps.append( + PlanStep( + group_id=group.id, + priority=priority, + description=group.description, + estimated_changes=estimated, + dependencies=deps, + ) + ) + + # Sort by priority, then by dependencies + steps = _sort_by_priority(steps) + steps = _sort_by_dependencies(steps) + + return ExecutionPlan(steps=steps) + + +def _assign_priority(group: "ActionGroup") -> str: + """Assign priority based on action group type and content. + + Args: + group: Action group to prioritize + + Returns: + Priority string: "critical", "high", "medium", or "low" + """ + # Map existing priority to standardized values + priority_map = { + "critical": "critical", + "high": "high", + "medium": "medium", + "low": "low", + } + + # Use group's priority if valid + if group.priority in priority_map: + base_priority = priority_map[group.priority] + else: + base_priority = "medium" + + # Elevate priority for change_request type + if group.type == "change_request": + if base_priority == "low": + return "medium" + if base_priority == "medium": + return "high" + return "critical" + + # Lower priority for nitpicks + if group.type == "nitpick": + if base_priority == "critical": + return "high" + if base_priority == "high": + return "medium" + return "low" + + return base_priority + + +def _estimate_changes(group: "ActionGroup") -> int: + """Estimate number of changes needed for an action group. + + Args: + group: Action group + + Returns: + Estimated number of changes + """ + # Base estimate on number of locations + base = len(group.locations) + + # Adjust based on action type + action_multipliers = { + "add_section": 3, + "move_to_examples": 2, + "move_to_references": 2, + "fix_typo": 1, + "update_content": 2, + "delete": 1, + } + + multiplier = action_multipliers.get(group.action, 2) + return max(1, base * multiplier) + + +def _find_dependencies( + group: "ActionGroup", + all_groups: list["ActionGroup"], +) -> list[str]: + """Find dependencies for an action group. 
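+
+    For example (hypothetical groups): if "group-2" edits SKILL.md lines that a
+    higher-priority "group-1" move_to_references action also touches, "group-1"
+    is returned as a dependency of "group-2" so the structural move happens first.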
+ + Args: + group: Action group to check + all_groups: All action groups + + Returns: + List of group IDs that must complete first + """ + deps: list[str] = [] + + # Check for file dependencies + # If another group modifies a file this group reads from, depend on it + group_files = {loc.file for loc in group.locations if loc.file} + + for other in all_groups: + if other.id == group.id: + continue + + other_files = {loc.file for loc in other.locations if loc.file} + + # If other group creates/moves content that this group references + if other.action in ("add_section", "move_to_references", "move_to_examples"): + # Check for overlapping files + if group_files & other_files: + # Higher priority actions should be dependencies + if _priority_value(other.priority) > _priority_value(group.priority): + deps.append(other.id) + + return deps + + +def _priority_value(priority: str) -> int: + """Convert priority string to numeric value for comparison.""" + values = { + "critical": 4, + "high": 3, + "medium": 2, + "low": 1, + } + return values.get(priority, 2) + + +def _sort_by_priority(steps: list[PlanStep]) -> list[PlanStep]: + """Sort steps by priority (critical first). + + Args: + steps: Unsorted steps + + Returns: + Steps sorted by priority + """ + return sorted(steps, key=lambda s: -_priority_value(s.priority)) + + +def _sort_by_dependencies(steps: list[PlanStep]) -> list[PlanStep]: + """Topological sort by dependencies. + + Args: + steps: Steps sorted by priority + + Returns: + Steps sorted to respect dependencies + """ + # Build adjacency list + step_map = {s.group_id: s for s in steps} + result: list[PlanStep] = [] + visited: set[str] = set() + temp_mark: set[str] = set() + + def visit(step: PlanStep) -> None: + if step.group_id in temp_mark: + # Cycle detected, skip + return + if step.group_id in visited: + return + + temp_mark.add(step.group_id) + + # Visit dependencies first + for dep_id in step.dependencies: + if dep_id in step_map: + visit(step_map[dep_id]) + + temp_mark.remove(step.group_id) + visited.add(step.group_id) + result.append(step) + + # Process all steps + for step in steps: + if step.group_id not in visited: + visit(step) + + return result diff --git a/.claude/agents/skill-pr-addresser/src/progress.py b/.claude/agents/skill-pr-addresser/src/progress.py new file mode 100644 index 0000000..2d5c324 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/progress.py @@ -0,0 +1,339 @@ +"""Progress tracking for batch operations and monitoring. + +Writes progress to a JSON file for external monitoring and real-time updates. 
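+
+Example (illustrative use of the classes defined below; the data directory,
+PR numbers, and costs are placeholders):
+
+    tracker = ProgressTracker(Path("data"))
+    tracker.start_batch([795, 801])
+    tracker.start_pr(795, title="feat(skills): improve lang-rust-dev")
+    tracker.update_iteration(795, iteration=1, feedback_count=4,
+                             addressed_count=3, skipped_count=1, cost=0.012)
+    tracker.complete_pr(795, success=True)
+    tracker.complete_batch()
+    print(tracker.get_summary())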
+""" + +import json +import logging +from dataclasses import dataclass, field +from datetime import datetime +from pathlib import Path +from enum import Enum + +log = logging.getLogger(__name__) + + +class PRStatus(str, Enum): + """Status of PR addressing.""" + + PENDING = "pending" + IN_PROGRESS = "in_progress" + SUCCESS = "success" + FAILED = "failed" + SKIPPED = "skipped" + + +@dataclass +class PRProgress: + """Progress for a single PR.""" + + pr_number: int + status: PRStatus = PRStatus.PENDING + title: str = "" + skill_path: str | None = None + iterations_completed: int = 0 + max_iterations: int = 3 + feedback_count: int = 0 + addressed_count: int = 0 + skipped_count: int = 0 + error: str | None = None + started_at: str | None = None + completed_at: str | None = None + cost: float = 0.0 + + def to_dict(self) -> dict: + """Convert to dictionary.""" + return { + "pr_number": self.pr_number, + "status": self.status.value, + "title": self.title, + "skill_path": self.skill_path, + "iterations_completed": self.iterations_completed, + "max_iterations": self.max_iterations, + "feedback_count": self.feedback_count, + "addressed_count": self.addressed_count, + "skipped_count": self.skipped_count, + "error": self.error, + "started_at": self.started_at, + "completed_at": self.completed_at, + "cost": self.cost, + } + + +@dataclass +class BatchProgress: + """Progress for a batch of PRs.""" + + started_at: str = field(default_factory=lambda: datetime.now().isoformat()) + completed_at: str | None = None + prs: list[PRProgress] = field(default_factory=list) + total_cost: float = 0.0 + + @property + def total_prs(self) -> int: + """Total number of PRs in batch.""" + return len(self.prs) + + @property + def completed_prs(self) -> int: + """Number of completed PRs.""" + return sum( + 1 + for pr in self.prs + if pr.status in (PRStatus.SUCCESS, PRStatus.FAILED, PRStatus.SKIPPED) + ) + + @property + def success_count(self) -> int: + """Number of successfully addressed PRs.""" + return sum(1 for pr in self.prs if pr.status == PRStatus.SUCCESS) + + @property + def failed_count(self) -> int: + """Number of failed PRs.""" + return sum(1 for pr in self.prs if pr.status == PRStatus.FAILED) + + @property + def skipped_count(self) -> int: + """Number of skipped PRs.""" + return sum(1 for pr in self.prs if pr.status == PRStatus.SKIPPED) + + @property + def in_progress_count(self) -> int: + """Number of PRs currently in progress.""" + return sum(1 for pr in self.prs if pr.status == PRStatus.IN_PROGRESS) + + @property + def pending_count(self) -> int: + """Number of pending PRs.""" + return sum(1 for pr in self.prs if pr.status == PRStatus.PENDING) + + @property + def progress_percent(self) -> float: + """Overall progress as percentage.""" + if self.total_prs == 0: + return 100.0 + return (self.completed_prs / self.total_prs) * 100 + + def to_dict(self) -> dict: + """Convert to dictionary.""" + return { + "started_at": self.started_at, + "completed_at": self.completed_at, + "total_prs": self.total_prs, + "completed_prs": self.completed_prs, + "success_count": self.success_count, + "failed_count": self.failed_count, + "skipped_count": self.skipped_count, + "in_progress_count": self.in_progress_count, + "pending_count": self.pending_count, + "progress_percent": round(self.progress_percent, 1), + "total_cost": round(self.total_cost, 4), + "prs": [pr.to_dict() for pr in self.prs], + } + + +class ProgressTracker: + """Tracks and persists progress for batch operations.""" + + def __init__(self, data_dir: Path): + 
"""Initialize tracker. + + Args: + data_dir: Directory for progress files + """ + self.data_dir = data_dir + self.progress_file = data_dir / "progress.json" + self.batch: BatchProgress | None = None + + def start_batch(self, pr_numbers: list[int]) -> None: + """Start tracking a new batch. + + Args: + pr_numbers: List of PR numbers to track + """ + self.batch = BatchProgress( + prs=[PRProgress(pr_number=n) for n in pr_numbers] + ) + self._save() + log.info(f"Started tracking batch of {len(pr_numbers)} PRs") + + def start_pr(self, pr_number: int, title: str = "", skill_path: str | None = None) -> None: + """Mark a PR as started. + + Args: + pr_number: PR number + title: PR title + skill_path: Skill path being addressed + """ + pr = self._get_pr(pr_number) + if pr: + pr.status = PRStatus.IN_PROGRESS + pr.title = title + pr.skill_path = skill_path + pr.started_at = datetime.now().isoformat() + self._save() + + def update_iteration( + self, + pr_number: int, + iteration: int, + feedback_count: int, + addressed_count: int, + skipped_count: int, + cost: float = 0.0, + ) -> None: + """Update progress after an iteration. + + Args: + pr_number: PR number + iteration: Current iteration number + feedback_count: Total feedback items + addressed_count: Items addressed so far + skipped_count: Items skipped so far + cost: Cost for this iteration + """ + pr = self._get_pr(pr_number) + if pr: + pr.iterations_completed = iteration + pr.feedback_count = feedback_count + pr.addressed_count = addressed_count + pr.skipped_count = skipped_count + pr.cost += cost + if self.batch: + self.batch.total_cost += cost + self._save() + + def complete_pr( + self, + pr_number: int, + success: bool, + error: str | None = None, + ) -> None: + """Mark a PR as completed. + + Args: + pr_number: PR number + success: Whether addressing was successful + error: Optional error message + """ + pr = self._get_pr(pr_number) + if pr: + pr.status = PRStatus.SUCCESS if success else PRStatus.FAILED + pr.error = error + pr.completed_at = datetime.now().isoformat() + self._save() + + def skip_pr(self, pr_number: int, reason: str) -> None: + """Mark a PR as skipped. + + Args: + pr_number: PR number + reason: Reason for skipping + """ + pr = self._get_pr(pr_number) + if pr: + pr.status = PRStatus.SKIPPED + pr.error = reason + pr.completed_at = datetime.now().isoformat() + self._save() + + def complete_batch(self) -> None: + """Mark the batch as completed.""" + if self.batch: + self.batch.completed_at = datetime.now().isoformat() + self._save() + log.info( + f"Batch completed: {self.batch.success_count} success, " + f"{self.batch.failed_count} failed, " + f"{self.batch.skipped_count} skipped" + ) + + def get_summary(self) -> str: + """Get a human-readable progress summary. 
+ + Returns: + Multi-line summary string + """ + if not self.batch: + return "No batch in progress" + + lines = [ + "Batch Progress", + "=" * 40, + f"Total PRs: {self.batch.total_prs}", + f"Progress: {self.batch.progress_percent:.1f}%", + "", + f" Success: {self.batch.success_count}", + f" Failed: {self.batch.failed_count}", + f" Skipped: {self.batch.skipped_count}", + f" In Progress: {self.batch.in_progress_count}", + f" Pending: {self.batch.pending_count}", + "", + f"Total Cost: ${self.batch.total_cost:.4f}", + ] + + return "\n".join(lines) + + def _get_pr(self, pr_number: int) -> PRProgress | None: + """Get PR progress by number.""" + if not self.batch: + return None + for pr in self.batch.prs: + if pr.pr_number == pr_number: + return pr + return None + + def _save(self) -> None: + """Save progress to file.""" + if not self.batch: + return + + try: + self.data_dir.mkdir(parents=True, exist_ok=True) + self.progress_file.write_text( + json.dumps(self.batch.to_dict(), indent=2) + ) + except Exception as e: + log.warning(f"Failed to save progress: {e}") + + def load(self) -> BatchProgress | None: + """Load existing progress from file. + + Returns: + BatchProgress if file exists, None otherwise + """ + if not self.progress_file.exists(): + return None + + try: + data = json.loads(self.progress_file.read_text()) + self.batch = BatchProgress( + started_at=data.get("started_at", ""), + completed_at=data.get("completed_at"), + total_cost=data.get("total_cost", 0.0), + ) + + for pr_data in data.get("prs", []): + self.batch.prs.append( + PRProgress( + pr_number=pr_data["pr_number"], + status=PRStatus(pr_data.get("status", "pending")), + title=pr_data.get("title", ""), + skill_path=pr_data.get("skill_path"), + iterations_completed=pr_data.get("iterations_completed", 0), + max_iterations=pr_data.get("max_iterations", 3), + feedback_count=pr_data.get("feedback_count", 0), + addressed_count=pr_data.get("addressed_count", 0), + skipped_count=pr_data.get("skipped_count", 0), + error=pr_data.get("error"), + started_at=pr_data.get("started_at"), + completed_at=pr_data.get("completed_at"), + cost=pr_data.get("cost", 0.0), + ) + ) + + return self.batch + except Exception as e: + log.warning(f"Failed to load progress: {e}") + return None diff --git a/.claude/agents/skill-pr-addresser/src/session_schema.py b/.claude/agents/skill-pr-addresser/src/session_schema.py new file mode 100644 index 0000000..ee8b28a --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/session_schema.py @@ -0,0 +1,188 @@ +# src/session_schema.py +"""Session storage schema for feedback tracking. + +Stage 8 implementation: Tracking addressed feedback with content hashes +for delta detection (#796). 
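+
+Example (illustrative; `session` stands for any object with a `results` dict,
+and the hash/commit values are placeholders):
+
+    state = FeedbackState.from_session(session)
+    if not state.was_addressed_with_hash("thread-1", "hash-ab12"):
+        state.mark_addressed("thread-1", "hash-ab12", commit_sha="abc1234", iteration=1)
+    state.record_run()
+    state.save_to_session(session)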
+""" + +from dataclasses import dataclass, field +from datetime import datetime, timezone + + +@dataclass +class AddressedItem: + """Record of an addressed feedback item.""" + + id: str + content_hash: str + addressed_at: datetime + addressed_in_commit: str + iteration: int + + def to_dict(self) -> dict: + return { + "id": self.id, + "content_hash": self.content_hash, + "addressed_at": self.addressed_at.isoformat(), + "addressed_in_commit": self.addressed_in_commit, + "iteration": self.iteration, + } + + @classmethod + def from_dict(cls, data: dict) -> "AddressedItem": + return cls( + id=data["id"], + content_hash=data["content_hash"], + addressed_at=datetime.fromisoformat(data["addressed_at"]), + addressed_in_commit=data["addressed_in_commit"], + iteration=data["iteration"], + ) + + +@dataclass +class ThreadState: + """State tracking for a review thread.""" + + thread_id: str + last_seen_comment_id: str | None + comments_processed: list[str] + last_processed_at: datetime + + def to_dict(self) -> dict: + return { + "thread_id": self.thread_id, + "last_seen_comment_id": self.last_seen_comment_id, + "comments_processed": self.comments_processed, + "last_processed_at": self.last_processed_at.isoformat(), + } + + @classmethod + def from_dict(cls, data: dict) -> "ThreadState": + return cls( + thread_id=data["thread_id"], + last_seen_comment_id=data.get("last_seen_comment_id"), + comments_processed=data.get("comments_processed", []), + last_processed_at=datetime.fromisoformat(data["last_processed_at"]), + ) + + +@dataclass +class FeedbackState: + """Complete feedback state for a session.""" + + addressed: dict[str, AddressedItem] = field(default_factory=dict) + threads: dict[str, ThreadState] = field(default_factory=dict) + last_run: datetime | None = None + + def to_dict(self) -> dict: + return { + "addressed": {k: v.to_dict() for k, v in self.addressed.items()}, + "threads": {k: v.to_dict() for k, v in self.threads.items()}, + "last_run": self.last_run.isoformat() if self.last_run else None, + } + + @classmethod + def from_dict(cls, data: dict) -> "FeedbackState": + return cls( + addressed={ + k: AddressedItem.from_dict(v) + for k, v in data.get("addressed", {}).items() + }, + threads={ + k: ThreadState.from_dict(v) + for k, v in data.get("threads", {}).items() + }, + last_run=datetime.fromisoformat(data["last_run"]) + if data.get("last_run") + else None, + ) + + @classmethod + def from_session(cls, session) -> "FeedbackState": + """Load feedback state from a session.""" + data = session.results.get("feedback_state", {}) + if not data: + return cls() + return cls.from_dict(data) + + def save_to_session(self, session) -> None: + """Save feedback state to a session.""" + session.results["feedback_state"] = self.to_dict() + + def mark_addressed( + self, + item_id: str, + content_hash: str, + commit_sha: str, + iteration: int, + ): + """Mark an item as addressed.""" + self.addressed[item_id] = AddressedItem( + id=item_id, + content_hash=content_hash, + addressed_at=datetime.now(timezone.utc), + addressed_in_commit=commit_sha, + iteration=iteration, + ) + + def update_thread(self, thread_id: str, processed_comment_ids: list[str]): + """Update thread state with processed comments.""" + existing = self.threads.get(thread_id) + all_processed = ( + existing.comments_processed if existing else [] + ) + processed_comment_ids + + self.threads[thread_id] = ThreadState( + thread_id=thread_id, + last_seen_comment_id=processed_comment_ids[-1] if processed_comment_ids else None, + 
comments_processed=list(set(all_processed)), + last_processed_at=datetime.now(timezone.utc), + ) + + def was_addressed(self, item_id: str) -> bool: + """Check if an item was previously addressed.""" + return item_id in self.addressed + + def was_addressed_with_hash(self, item_id: str, content_hash: str) -> bool: + """Check if item was addressed AND content hasn't changed.""" + if item_id not in self.addressed: + return False + return self.addressed[item_id].content_hash == content_hash + + def get_addressed_commit(self, item_id: str) -> str | None: + """Get the commit SHA where an item was addressed.""" + if item_id in self.addressed: + return self.addressed[item_id].addressed_in_commit + return None + + def get_unprocessed_comments( + self, thread_id: str, all_comment_ids: list[str] + ) -> list[str]: + """Get comment IDs that haven't been processed yet.""" + if thread_id not in self.threads: + return all_comment_ids + + processed = set(self.threads[thread_id].comments_processed) + return [cid for cid in all_comment_ids if cid not in processed] + + def record_run(self): + """Record that a run was completed.""" + self.last_run = datetime.now(timezone.utc) + + @property + def addressed_count(self) -> int: + """Number of addressed items.""" + return len(self.addressed) + + @property + def thread_count(self) -> int: + """Number of tracked threads.""" + return len(self.threads) + + def summary(self) -> dict: + """Generate a summary of the feedback state.""" + return { + "addressed_count": self.addressed_count, + "thread_count": self.thread_count, + "last_run": self.last_run.isoformat() if self.last_run else None, + } diff --git a/.claude/agents/skill-pr-addresser/src/templates.py b/.claude/agents/skill-pr-addresser/src/templates.py new file mode 100644 index 0000000..4fa0167 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/templates.py @@ -0,0 +1,315 @@ +"""Template rendering for skill-pr-addresser. + +Uses chevron (Mustache/Handlebars) for template rendering. +""" + +import logging +from pathlib import Path + +import chevron + + +log = logging.getLogger(__name__) + + +def render_template(templates_dir: Path, template_name: str, data: dict) -> str: + """Render a Handlebars template with data. + + Args: + templates_dir: Directory containing templates + template_name: Name of template (without .hbs extension) + data: Template context data + + Returns: + Rendered template string + """ + template_file = templates_dir / f"{template_name}.hbs" + + if not template_file.exists(): + log.warning(f"Template not found: {template_file}") + # Return a basic fallback + return _fallback_template(template_name, data) + + template_content = template_file.read_text() + + try: + return chevron.render(template_content, data) + except Exception as e: + log.error(f"Template render error: {e}") + return _fallback_template(template_name, data) + + +def _fallback_template(template_name: str, data: dict) -> str: + """Generate fallback content when template is missing or fails. 
+ + Args: + template_name: Name of the template + data: Template data + + Returns: + Basic formatted string + """ + if template_name == "iteration_comment": + return f"""## Addressing Iteration {data.get('iteration', '?')} + +**Feedback items:** {data.get('feedback_count', 0)} +**Addressed:** {data.get('addressed_count', 0)} +**Skipped:** {data.get('skipped_count', 0)} + +{_format_commit_info(data)} + +--- +*Automated by skill-pr-addresser* +""" + + if template_name == "ready_comment": + reviewers = data.get('reviewers', []) + mentions = ' '.join(f'@{r}' for r in reviewers) + return f"""## Ready for Re-Review + +All feedback has been addressed. {mentions} + +Please re-review this PR when convenient. + +--- +*Automated by skill-pr-addresser* +""" + + if template_name == "skipped_feedback": + skipped = data.get('skipped', []) + items = '\n'.join(f"- **{s.get('id')}**: {s.get('reason')}" for s in skipped) + return f"""## Feedback Not Addressed + +The following items could not be addressed automatically: + +{items} + +--- +*Manual review required* +""" + + # Generic fallback + return f"Template '{template_name}' rendered with {len(data)} data items." + + +def _format_commit_info(data: dict) -> str: + """Format commit information for iteration comment.""" + commit_sha = data.get('commit_sha') + if not commit_sha: + return "*No changes committed*" + + commit_short = data.get('commit_short', commit_sha[:8]) + files = data.get('files_modified', []) + lines_added = data.get('lines_added', 0) + lines_removed = data.get('lines_removed', 0) + + return f"""**Commit:** `{commit_short}` +**Files modified:** {', '.join(files) if files else 'none'} +**Lines:** +{lines_added} / -{lines_removed}""" + + +# ============================================================================= +# PR Comment Templates (Stage 10) +# ============================================================================= + + +def format_summary_comment( + pr_number: int, + iteration: int, + fix_results: list, + commit_sha: str, +) -> str: + """Format the PR summary comment. + + Args: + pr_number: PR number + iteration: Current iteration + fix_results: List of fix results (dicts with group_id, addressed_locations, etc.) 
+ commit_sha: Commit SHA + + Returns: + Formatted markdown comment + """ + total_addressed = sum( + len(r.get("addressed_locations", [])) if isinstance(r, dict) else len(getattr(r, "addressed_locations", [])) + for r in fix_results + ) + threads_resolved = sum( + len(r.get("addressed_thread_ids", [])) if isinstance(r, dict) else len(getattr(r, "addressed_thread_ids", [])) + for r in fix_results + ) + + lines = [ + "## ✅ Feedback Addressed", + "", + f"**Iteration {iteration}** | Commit: [`{commit_sha[:7]}`](../commit/{commit_sha})", + "", + ] + + if fix_results: + lines.extend([ + "### Changes Made", + "", + "| Action Group | Status | Locations | Threads |", + "|--------------|--------|-----------|---------|", + ]) + + for result in fix_results: + if isinstance(result, dict): + group_id = result.get("group_id", "unknown") + skipped = result.get("skipped", False) + failed = result.get("failed", False) + reason = result.get("reason", "") + addressed_locs = len(result.get("addressed_locations", [])) + addressed_threads = len(result.get("addressed_thread_ids", [])) + else: + group_id = getattr(result, "group_id", "unknown") + skipped = getattr(result, "skipped", False) + failed = getattr(result, "failed", False) + reason = getattr(result, "reason", "") + addressed_locs = len(getattr(result, "addressed_locations", [])) + addressed_threads = len(getattr(result, "addressed_thread_ids", [])) + + if skipped: + status = f"⏭️ Skipped ({reason})" + elif failed: + status = "❌ Failed" + else: + status = "✅ Complete" + + lines.append( + f"| {group_id} | {status} | {addressed_locs} | {addressed_threads} |" + ) + + lines.append("") + + lines.extend([ + "---", + f"📊 **Summary**: {total_addressed} locations addressed, {threads_resolved} threads resolved", + "", + "*🤖 Automated by [skill-pr-addresser](https://github.com/aRustyDev/ai)*", + ]) + + return "\n".join(lines) + + +def format_iteration_limit_comment( + max_iterations: int, + resolved_count: int, +) -> str: + """Format the iteration limit warning comment. + + Args: + max_iterations: Maximum iterations reached + resolved_count: Number of threads resolved + + Returns: + Formatted markdown comment + """ + return f"""## ⚠️ Iteration Limit Reached + +Reached maximum iterations ({max_iterations}). Some feedback may require manual attention. + +**Resolved:** {resolved_count} threads +**Remaining:** See unresolved threads above. + +If critical feedback remains unaddressed, consider: +1. Running the addresser again with `--max-iterations N` +2. Manually addressing complex feedback +3. Requesting reviewer clarification + +--- +*🤖 Automated by [skill-pr-addresser](https://github.com/aRustyDev/ai)* +""" + + +def format_error_comment( + stage: str, + error: str, +) -> str: + """Format an error comment. + + Args: + stage: Stage that failed + error: Error message + + Returns: + Formatted markdown comment + """ + return f"""## ❌ Processing Failed + +The addresser encountered an error during the **{stage}** stage: + +``` +{error} +``` + +Please check the logs for more details or try running again. + +--- +*🤖 Automated by [skill-pr-addresser](https://github.com/aRustyDev/ai)* +""" + + +def format_no_feedback_comment() -> str: + """Format a comment when no new feedback is found. + + Returns: + Formatted markdown comment + """ + return """## ✅ No New Feedback + +All feedback has been addressed or there's nothing new to process. 
+ +--- +*🤖 Automated by [skill-pr-addresser](https://github.com/aRustyDev/ai)* +""" + + +def format_partial_progress_comment( + iteration: int, + addressed_count: int, + total_count: int, + pending_groups: list[str], +) -> str: + """Format a comment for partial progress. + + Args: + iteration: Current iteration + addressed_count: Number of items addressed + total_count: Total items to address + pending_groups: List of pending group IDs + + Returns: + Formatted markdown comment + """ + pct = (addressed_count / total_count * 100) if total_count > 0 else 0 + + lines = [ + "## ⏳ Partial Progress", + "", + f"**Iteration {iteration}** | Progress: {addressed_count}/{total_count} ({pct:.0f}%)", + "", + ] + + if pending_groups: + lines.extend([ + "### Pending Groups", + "", + ]) + for group_id in pending_groups[:5]: # Limit to 5 + lines.append(f"- {group_id}") + + if len(pending_groups) > 5: + lines.append(f"- ... and {len(pending_groups) - 5} more") + + lines.append("") + + lines.extend([ + "The addresser will resume from here on the next run.", + "", + "---", + "*🤖 Automated by [skill-pr-addresser](https://github.com/aRustyDev/ai)*", + ]) + + return "\n".join(lines) diff --git a/.claude/agents/skill-pr-addresser/src/tracing.py b/.claude/agents/skill-pr-addresser/src/tracing.py new file mode 100644 index 0000000..8adfa1b --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/tracing.py @@ -0,0 +1,254 @@ +"""OpenTelemetry tracing support for skill-pr-addresser. + +Provides optional tracing for observability when enabled in config. +Falls back to no-op when OTEL is not available or disabled. +""" + +import logging +import os +from contextlib import contextmanager +from dataclasses import dataclass +from functools import wraps +from typing import Any, Callable, Generator + +log = logging.getLogger(__name__) + +# Try to import OpenTelemetry, fall back to no-op if not available +try: + from opentelemetry import trace + from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter + from opentelemetry.sdk.resources import Resource + from opentelemetry.sdk.trace import TracerProvider + from opentelemetry.sdk.trace.export import BatchSpanProcessor + from opentelemetry.trace import Status, StatusCode + + OTEL_AVAILABLE = True +except ImportError: + OTEL_AVAILABLE = False + trace = None # type: ignore + Status = None # type: ignore + StatusCode = None # type: ignore + + +@dataclass +class TracingConfig: + """Configuration for OTEL tracing.""" + + enabled: bool = False + endpoint: str = "localhost:4317" # gRPC endpoint (no http:// prefix) + service_name: str = "skill-pr-addresser" + version: str = "0.1.0" + + +# Global tracer instance +_tracer = None +_config: TracingConfig | None = None + + +def init_tracing(config: TracingConfig) -> bool: + """Initialize OpenTelemetry tracing. 
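+
+    Illustrative call pattern (the endpoint value is only an example; point it
+    at any OTLP gRPC collector):
+
+        if init_tracing(TracingConfig(enabled=True, endpoint="localhost:4317")):
+            with span("address_pr", {"pr_number": 795}):
+                ...  # traced work happens here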
+ + Args: + config: Tracing configuration + + Returns: + True if tracing was initialized, False otherwise + """ + global _tracer, _config + + _config = config + + if not config.enabled: + log.debug("OTEL tracing disabled in config") + return False + + if not OTEL_AVAILABLE: + log.warning("OpenTelemetry not available, tracing disabled") + return False + + try: + # Create resource with service info + resource = Resource.create( + { + "service.name": config.service_name, + "service.version": config.version, + } + ) + + # Create tracer provider + provider = TracerProvider(resource=resource) + + # Add OTLP exporter (insecure=True for plain gRPC without TLS) + exporter = OTLPSpanExporter(endpoint=config.endpoint, insecure=True) + provider.add_span_processor(BatchSpanProcessor(exporter)) + + # Set as global provider + trace.set_tracer_provider(provider) + + # Get tracer + _tracer = trace.get_tracer(config.service_name, config.version) + + log.info(f"OTEL tracing initialized: {config.endpoint}") + return True + + except Exception as e: + log.warning(f"Failed to initialize OTEL tracing: {e}") + return False + + +def get_tracer(): + """Get the current tracer instance. + + Returns: + Tracer if initialized, None otherwise + """ + return _tracer + + +@contextmanager +def span( + name: str, + attributes: dict[str, Any] | None = None, +) -> Generator[Any, None, None]: + """Create a tracing span context manager. + + Args: + name: Span name + attributes: Optional span attributes + + Yields: + The span object (or None if tracing disabled) + """ + if _tracer is None: + yield None + return + + with _tracer.start_as_current_span(name) as s: + if attributes: + for key, value in attributes.items(): + # Convert to string for non-primitive types + if not isinstance(value, (str, int, float, bool)): + value = str(value) + s.set_attribute(key, value) + yield s + + +def traced(name: str | None = None) -> Callable: + """Decorator to trace a function. + + Args: + name: Optional span name (defaults to function name) + + Returns: + Decorated function + """ + + def decorator(func: Callable) -> Callable: + span_name = name or func.__name__ + + @wraps(func) + def wrapper(*args, **kwargs): + with span(span_name) as s: + try: + result = func(*args, **kwargs) + if s and OTEL_AVAILABLE: + s.set_status(Status(StatusCode.OK)) + return result + except Exception as e: + if s and OTEL_AVAILABLE: + s.set_status(Status(StatusCode.ERROR, str(e))) + s.record_exception(e) + raise + + return wrapper + + return decorator + + +def add_event(name: str, attributes: dict[str, Any] | None = None) -> None: + """Add an event to the current span. + + Args: + name: Event name + attributes: Optional event attributes + """ + if not OTEL_AVAILABLE or _tracer is None: + return + + current_span = trace.get_current_span() + if current_span: + current_span.add_event(name, attributes or {}) + + +def set_attribute(key: str, value: Any) -> None: + """Set an attribute on the current span. + + Args: + key: Attribute key + value: Attribute value + """ + if not OTEL_AVAILABLE or _tracer is None: + return + + current_span = trace.get_current_span() + if current_span: + if not isinstance(value, (str, int, float, bool)): + value = str(value) + current_span.set_attribute(key, value) + + +def record_subagent_call( + name: str, + model: str, + duration_seconds: float, + success: bool, + error: str | None = None, +) -> None: + """Record a sub-agent call as a span event. 
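+
+    Illustrative call (duration and outcome are made up):
+
+        record_subagent_call("feedback-fixer", "claude-sonnet-4-20250514", 12.4, True)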
+ + Args: + name: Sub-agent name + model: Model used + duration_seconds: Call duration + success: Whether the call succeeded + error: Optional error message + """ + attributes = { + "subagent.name": name, + "subagent.model": model, + "subagent.duration_seconds": duration_seconds, + "subagent.success": success, + } + + if error: + attributes["subagent.error"] = error + + add_event("subagent_call", attributes) + + +def record_iteration( + iteration: int, + feedback_count: int, + addressed_count: int, + skipped_count: int, + success_rate: float, +) -> None: + """Record an addressing iteration as a span event. + + Args: + iteration: Iteration number + feedback_count: Total feedback items + addressed_count: Items addressed + skipped_count: Items skipped + success_rate: Success rate as decimal + """ + add_event( + "addressing_iteration", + { + "iteration": iteration, + "feedback_count": feedback_count, + "addressed_count": addressed_count, + "skipped_count": skipped_count, + "success_rate": success_rate, + }, + ) diff --git a/.claude/agents/skill-pr-addresser/src/tui/__init__.py b/.claude/agents/skill-pr-addresser/src/tui/__init__.py new file mode 100644 index 0000000..9882642 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/tui/__init__.py @@ -0,0 +1 @@ +"""TUI components for skill-pr-addresser.""" diff --git a/.claude/agents/skill-pr-addresser/src/worktree.py b/.claude/agents/skill-pr-addresser/src/worktree.py new file mode 100644 index 0000000..4fd2030 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/src/worktree.py @@ -0,0 +1,297 @@ +# src/worktree.py +"""Git worktree management for PR processing. + +Stage 10 implementation: Create and manage worktrees for isolated PR processing. +""" + +import logging +import subprocess +from contextlib import contextmanager +from dataclasses import dataclass +from pathlib import Path +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from typing import Generator + +log = logging.getLogger(__name__) + + +class WorktreeError(Exception): + """Raised when worktree operations fail.""" + + pass + + +@dataclass +class WorktreeInfo: + """Information about a git worktree.""" + + path: Path + branch: str + commit: str + is_clean: bool + + def to_dict(self) -> dict: + """Serialize to dictionary.""" + return { + "path": str(self.path), + "branch": self.branch, + "commit": self.commit, + "is_clean": self.is_clean, + } + + +def create_worktree( + repo_path: Path, + branch: str, + worktree_dir: Path, +) -> Path: + """Create a git worktree for a PR branch. + + Args: + repo_path: Path to main repository + branch: Branch name to checkout + worktree_dir: Base directory for worktrees + + Returns: + Path to the created worktree + + Raises: + WorktreeError: If worktree creation fails + """ + # Sanitize branch name for directory + safe_branch = branch.replace("/", "-") + worktree_path = worktree_dir / safe_branch + + if worktree_path.exists(): + log.info(f"Worktree already exists: {worktree_path}") + return worktree_path + + worktree_dir.mkdir(parents=True, exist_ok=True) + + result = subprocess.run( + ["git", "-C", str(repo_path), "worktree", "add", str(worktree_path), branch], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + raise WorktreeError(f"Failed to create worktree: {result.stderr}") + + log.info(f"Created worktree: {worktree_path}") + return worktree_path + + +def verify_worktree_clean(worktree_path: Path) -> bool: + """Check if worktree has no uncommitted changes. 
+ + Args: + worktree_path: Path to worktree + + Returns: + True if clean, False if dirty + """ + result = subprocess.run( + ["git", "-C", str(worktree_path), "status", "--porcelain"], + capture_output=True, + text=True, + ) + + return result.returncode == 0 and not result.stdout.strip() + + +def get_worktree_info(worktree_path: Path) -> WorktreeInfo: + """Get information about a worktree. + + Args: + worktree_path: Path to worktree + + Returns: + WorktreeInfo with current state + """ + # Get current branch + branch_result = subprocess.run( + ["git", "-C", str(worktree_path), "branch", "--show-current"], + capture_output=True, + text=True, + ) + branch = branch_result.stdout.strip() + + # Get current commit + commit_result = subprocess.run( + ["git", "-C", str(worktree_path), "rev-parse", "HEAD"], + capture_output=True, + text=True, + ) + commit = commit_result.stdout.strip()[:8] + + # Check if clean + is_clean = verify_worktree_clean(worktree_path) + + return WorktreeInfo( + path=worktree_path, + branch=branch, + commit=commit, + is_clean=is_clean, + ) + + +def remove_worktree(repo_path: Path, worktree_path: Path, force: bool = False) -> None: + """Remove a git worktree. + + Args: + repo_path: Path to main repository + worktree_path: Path to worktree to remove + force: Force removal even if dirty + """ + args = ["git", "-C", str(repo_path), "worktree", "remove", str(worktree_path)] + if force: + args.append("--force") + + result = subprocess.run(args, capture_output=True, text=True) + + if result.returncode == 0: + log.info(f"Removed worktree: {worktree_path}") + else: + log.warning(f"Failed to remove worktree: {result.stderr}") + + +def list_worktrees(repo_path: Path) -> list[WorktreeInfo]: + """List all worktrees for a repository. + + Args: + repo_path: Path to main repository + + Returns: + List of WorktreeInfo for each worktree + """ + result = subprocess.run( + ["git", "-C", str(repo_path), "worktree", "list", "--porcelain"], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + return [] + + worktrees = [] + current: dict = {} + + for line in result.stdout.splitlines(): + if line.startswith("worktree "): + if current: + worktrees.append(_parse_worktree_entry(current)) + current = {"path": line.split(" ", 1)[1]} + elif line.startswith("HEAD "): + current["commit"] = line.split(" ", 1)[1][:8] + elif line.startswith("branch "): + current["branch"] = line.split(" ", 1)[1].replace("refs/heads/", "") + elif line == "bare": + current["bare"] = True + + if current and not current.get("bare"): + worktrees.append(_parse_worktree_entry(current)) + + return worktrees + + +def _parse_worktree_entry(entry: dict) -> WorktreeInfo: + """Parse a worktree entry from porcelain output.""" + path = Path(entry.get("path", "")) + return WorktreeInfo( + path=path, + branch=entry.get("branch", ""), + commit=entry.get("commit", ""), + is_clean=verify_worktree_clean(path) if path.exists() else True, + ) + + +def prune_worktrees(repo_path: Path) -> None: + """Prune stale worktree administrative files. + + Args: + repo_path: Path to main repository + """ + subprocess.run( + ["git", "-C", str(repo_path), "worktree", "prune"], + capture_output=True, + ) + log.info("Pruned stale worktrees") + + +@contextmanager +def worktree_context( + repo_path: Path, + branch: str, + worktree_dir: Path, + cleanup: bool = False, +) -> "Generator[Path, None, None]": + """Context manager for worktree lifecycle. 
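+
+    Illustrative usage (repository path, branch, and worktree directory are
+    placeholders):
+
+        with worktree_context(Path("/path/to/repo"), "feat/lang-rust-dev",
+                              Path("/tmp/worktrees"), cleanup=True) as wt:
+            ...  # edit files under `wt`; the orchestrator commits and pushes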
+ + Args: + repo_path: Path to main repository + branch: Branch to checkout + worktree_dir: Base directory for worktrees + cleanup: If True, remove worktree after use + + Yields: + Path to the worktree + """ + worktree_path = create_worktree(repo_path, branch, worktree_dir) + try: + yield worktree_path + finally: + if cleanup: + remove_worktree(repo_path, worktree_path) + + +def sync_worktree(worktree_path: Path, remote: str = "origin") -> bool: + """Pull latest changes into worktree. + + Args: + worktree_path: Path to worktree + remote: Remote name + + Returns: + True if sync successful + """ + result = subprocess.run( + ["git", "-C", str(worktree_path), "pull", remote], + capture_output=True, + text=True, + ) + + if result.returncode != 0: + log.warning(f"Failed to sync worktree: {result.stderr}") + return False + + return True + + +def push_worktree( + worktree_path: Path, + remote: str = "origin", + force: bool = False, +) -> bool: + """Push worktree changes to remote. + + Args: + worktree_path: Path to worktree + remote: Remote name + force: Force push (with lease) + + Returns: + True if push successful + """ + args = ["git", "-C", str(worktree_path), "push", remote] + if force: + args.append("--force-with-lease") + + result = subprocess.run(args, capture_output=True, text=True) + + if result.returncode != 0: + log.warning(f"Failed to push worktree: {result.stderr}") + return False + + return True diff --git a/.claude/agents/skill-pr-addresser/subagents/feedback-analyzer/config.yml b/.claude/agents/skill-pr-addresser/subagents/feedback-analyzer/config.yml new file mode 100644 index 0000000..e98572d --- /dev/null +++ b/.claude/agents/skill-pr-addresser/subagents/feedback-analyzer/config.yml @@ -0,0 +1,12 @@ +# Feedback Analyzer Configuration +# +# Uses Haiku for fast, cost-effective analysis +# No tools needed - pure text analysis + +model: claude-3-5-haiku-20241022 +allowed_tools: [] +timeout: 120 +max_turns: 5 # Text analysis should complete quickly + +# Description for logging +description: "Analyzes PR feedback into structured actionable items" diff --git a/.claude/agents/skill-pr-addresser/subagents/feedback-analyzer/prompt.md b/.claude/agents/skill-pr-addresser/subagents/feedback-analyzer/prompt.md new file mode 100644 index 0000000..db50d5b --- /dev/null +++ b/.claude/agents/skill-pr-addresser/subagents/feedback-analyzer/prompt.md @@ -0,0 +1,112 @@ +# Feedback Analyzer + +You are analyzing PR review feedback to create an actionable execution plan. + +## Input + +You will receive: +- PR reviews with comments (may include checklists) +- General PR comments +- Review threads with resolution status (line-specific comments) + +## Task + +Analyze all feedback and produce: +1. **Guidance** - Meta-instructions about HOW to make changes (not specific changes themselves) +2. **Consolidated Actions** - Specific changes to make, grouped by similarity +3. **Ordered Task Plan** - Execution order for batched processing + +## Consolidation Rules + +**CRITICAL**: Similar feedback at different locations MUST be consolidated into ONE item. + +Examples of feedback that should be consolidated: +- "Move to examples/*" at lines 239, 398, 671, 733 → ONE item with 4 locations +- "Move to reference/*.md" at lines 692, 707, 787 → ONE item with 3 locations +- "Add error handling" mentioned in review body AND thread → ONE item + +## Output + +Return ONLY valid JSON matching this schema. Do not wrap in markdown code fences. 
+ +{ + "guidance": [ + "General instruction about approach, e.g., 'Follow progressive disclosure patterns'", + "Another meta-instruction that applies to all changes" + ], + "action_groups": [ + { + "id": "group-1", + "action": "move_to_examples", + "description": "Move code blocks to examples/ directory", + "locations": [ + {"file": "SKILL.md", "line": 239, "thread_id": "PRRT_abc"}, + {"file": "SKILL.md", "line": 398, "thread_id": "PRRT_def"} + ], + "priority": "high", + "type": "change_request" + }, + { + "id": "group-2", + "action": "move_to_references", + "description": "Move detailed explanations to reference/*.md", + "locations": [ + {"file": "SKILL.md", "line": 692, "thread_id": "PRRT_ghi"} + ], + "priority": "high", + "type": "change_request" + } + ], + "execution_plan": [ + { + "order": 1, + "group_id": "group-1", + "rationale": "Address most frequent feedback first" + }, + { + "order": 2, + "group_id": "group-2", + "rationale": "Related structural change" + } + ], + "blocking_reviews": ["reviewer1"], + "approved_by": ["approver1"], + "summary": "1-2 sentence summary" +} + +## Action Types + +Common action patterns to look for: + +| Action | Description | +|--------|-------------| +| `move_to_examples` | Move code blocks to examples/ directory | +| `move_to_references` | Move detailed content to reference/*.md | +| `add_section` | Add new section or content | +| `remove_content` | Remove or trim content | +| `restructure` | Change organization/structure | +| `fix_formatting` | Fix typos, formatting, style | +| `add_documentation` | Add missing docs or comments | +| `other` | Custom action not in above list | + +## Priority Rules + +1. **high**: Blocking reviews, explicit change requests, checklist items +2. **medium**: Suggestions, questions needing documentation +3. **low**: Nitpicks, formatting, style suggestions + +## Execution Order Heuristics + +1. Structural changes before content changes (move files before editing them) +2. High priority before lower priority +3. Related changes grouped together +4. Independent changes can be parallelized (same order number) + +## Rules + +1. **Ignore resolved threads** - Only include unresolved items +2. **Ignore pure approvals** - LGTM/Approved without comments are not items +3. **Consolidate aggressively** - 10 similar comments = 1 action group with 10 locations +4. **Separate guidance from actions** - "Follow X pattern" is guidance, "Move Y to Z" is action +5. **Checklist items are actions** - Each unchecked `- [ ]` is a separate action (unless duplicated in threads) +6. 
**Be specific** - Descriptions should be actionable without re-reading original diff --git a/.claude/agents/skill-pr-addresser/subagents/feedback-fixer/config.yml b/.claude/agents/skill-pr-addresser/subagents/feedback-fixer/config.yml new file mode 100644 index 0000000..18a2d37 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/subagents/feedback-fixer/config.yml @@ -0,0 +1,18 @@ +# Feedback Fixer Configuration +# +# Uses Sonnet for complex reasoning and code generation +# Has file access tools to read and modify skill files + +model: claude-sonnet-4-20250514 +allowed_tools: + - Read + - Write + - Edit + - Bash # Needed for git add + - Glob + - Grep +timeout: 600 # 10 minutes - allow time for complex skill fixes +max_turns: 25 # Limit agent turns to prevent infinite loops + +# Description for logging +description: "Implements fixes for PR feedback in skill files" diff --git a/.claude/agents/skill-pr-addresser/subagents/feedback-fixer/prompt.md b/.claude/agents/skill-pr-addresser/subagents/feedback-fixer/prompt.md new file mode 100644 index 0000000..f28af6f --- /dev/null +++ b/.claude/agents/skill-pr-addresser/subagents/feedback-fixer/prompt.md @@ -0,0 +1,103 @@ +# Feedback Fixer + +You are implementing fixes for PR review feedback on a Claude Code skill. + +## Context + +- **Working directory**: A git worktree with the skill checked out +- **Skill path**: Will be provided (e.g., `components/skills/lang-rust-dev`) +- **Action group**: A consolidated set of similar feedback items + +## Understanding Action Groups + +An action group represents **related feedback that should be addressed together**. For example: +- "Move to examples" with 4 locations means: move content at lines 239, 398, 671, 733 to examples/ +- "Move to references" with 2 locations means: move content at those lines to reference/ + +**Key insight**: Multiple locations = same type of change needed at each location. + +## Instructions + +1. **Read the relevant files** to understand current state +2. **Understand the action** and what change is needed +3. **Apply the change at ALL listed locations**: + - If action is `move_to_examples`: Extract content at each line to examples/ files + - If action is `move_to_references`: Extract content at each line to reference/ files + - If action is `add_section`: Add the requested section(s) + - If action is `restructure`: Reorganize as requested +4. **Apply guidance** if provided (these are meta-instructions for HOW to make changes) +5. **Stage your changes** with `git add` +6. 
**Do NOT commit** - the orchestrator handles commits + +## Common Actions + +| Action | What to do | +|--------|------------| +| `move_to_examples` | Extract code blocks at listed lines → create files in examples/ | +| `move_to_references` | Extract detailed content at listed lines → create files in reference/ | +| `add_section` | Add new section with specified content | +| `remove_content` | Remove or trim content at listed lines | +| `restructure` | Reorganize structure as described | +| `fix_formatting` | Fix typos, formatting at listed lines | +| `add_documentation` | Add docs or comments | + +## Quality Guidelines + +- **Minimal changes**: Only modify what's necessary +- **Maintain style**: Match existing formatting and conventions +- **Preserve structure**: Keep surrounding content intact +- **Replace with references**: When moving content, leave a reference link in its place +- **Group related moves**: If moving 4 code blocks to examples/, consider one well-organized file + +## Example: Moving to Examples + +If told to move content at lines 239, 398, 671 to examples/: + +1. Read the content at each line +2. Identify what each code block demonstrates +3. Create appropriate example files (e.g., `examples/tracing.ts`, `examples/webhooks.ts`) +4. Replace original content with reference links like: + ```markdown + See [examples/tracing.ts](examples/tracing.ts) for implementation. + ``` +5. Ensure the SKILL.md remains under 500 lines + +## What You Can Skip + +Skip items that: +- Require changes outside the skill directory +- Need access to external APIs or resources +- Involve architectural decisions beyond the skill scope +- Are ambiguous and need clarification from the reviewer + +## Output + +After making changes, return ONLY a JSON object (no markdown fences): + +{ + "addressed": [ + { + "id": "group-1", + "action": "Brief description of what you did", + "locations_fixed": 4 + } + ], + "skipped": [ + { + "id": "group-2", + "reason": "Clear explanation of why this couldn't be addressed" + } + ], + "files_modified": ["SKILL.md", "examples/tracing.ts", "examples/webhooks.ts"], + "files_created": ["examples/tracing.ts", "examples/webhooks.ts"], + "lines_added": 45, + "lines_removed": 120 +} + +## Important + +- Work ONLY within the skill directory +- When moving content, CREATE the destination files (examples/, reference/) +- Replace moved content with reference links +- Aim to reduce SKILL.md line count when moving content out +- Prefer editing existing content over adding new sections diff --git a/.claude/agents/skill-pr-addresser/subagents/substantive-checker/config.yml b/.claude/agents/skill-pr-addresser/subagents/substantive-checker/config.yml new file mode 100644 index 0000000..2f7f468 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/subagents/substantive-checker/config.yml @@ -0,0 +1,8 @@ +# Substantive checker sub-agent config +# Uses Haiku since this is a simple classification task + +model: claude-3-5-haiku-20241022 +timeout: 60 + +# No tools needed - pure text classification +allowed_tools: [] diff --git a/.claude/agents/skill-pr-addresser/subagents/substantive-checker/prompt.md b/.claude/agents/skill-pr-addresser/subagents/substantive-checker/prompt.md new file mode 100644 index 0000000..59a9666 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/subagents/substantive-checker/prompt.md @@ -0,0 +1,53 @@ +# Substantive Feedback Checker + +You evaluate whether review comments contain actionable feedback that requires code changes. 
+ +## Your Task + +Given a list of review comments/reviews, determine which ones contain **substantive actionable feedback** vs which are: +- Acknowledgements or thanks ("LGTM", "Thanks!", "Great work!") +- Status updates ("I'll review this later") +- Questions that don't require code changes +- General discussion or conversation +- Positive feedback without change requests + +## Output Format + +Return a JSON object: + +```json +{ + "substantive": [ + { + "id": "comment-123", + "reason": "Requests adding error handling for null case" + } + ], + "not_substantive": [ + { + "id": "review-456", + "reason": "Positive acknowledgement only" + } + ] +} +``` + +## Classification Rules + +### Mark as SUBSTANTIVE if: +- Requests a specific code change +- Points out a bug or issue +- Suggests an improvement with concrete action +- Identifies missing functionality +- Notes a style/convention violation requiring fix +- Asks a question that implies something needs changing + +### Mark as NOT SUBSTANTIVE if: +- Pure praise or acknowledgement +- Status updates about the reviewer +- Questions seeking clarification only (no implied change) +- General discussion/philosophy +- "Thinking out loud" comments +- Empty or trivial content + +When in doubt, mark as SUBSTANTIVE to avoid missing real feedback. diff --git a/.claude/agents/skill-pr-addresser/templates/iteration_comment.hbs b/.claude/agents/skill-pr-addresser/templates/iteration_comment.hbs new file mode 100644 index 0000000..d5cf3f4 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/templates/iteration_comment.hbs @@ -0,0 +1,45 @@ +## Addressing Iteration {{iteration}} + +### Summary + +| Metric | Value | +|--------|-------| +| **Feedback items** | {{feedback_count}} | +| **Addressed** | {{addressed_count}} | +| **Skipped** | {{skipped_count}} | +| **Success rate** | {{success_rate}} | + +{{#addressed}} +### Addressed Items + +{{#addressed}} +- **{{id}}**: {{action}} +{{/addressed}} +{{/addressed}} + +{{#skipped}} +### Skipped Items + +{{#skipped}} +- **{{id}}**: {{reason}} +{{/skipped}} +{{/skipped}} + +{{#commit_sha}} +### Changes + +**Commit:** [`{{commit_short}}`](../commit/{{commit_sha}}) + +**Files modified:** +{{#files_modified}} +- `{{.}}` +{{/files_modified}} + +**Lines:** +{{lines_added}} / -{{lines_removed}} +{{/commit_sha}} +{{^commit_sha}} +*No changes committed in this iteration* +{{/commit_sha}} + +--- +Iteration {{iteration}} completed at {{timestamp}} | Automated by [skill-pr-addresser](https://github.com/aRustyDev/ai) diff --git a/.claude/agents/skill-pr-addresser/templates/ready_comment.hbs b/.claude/agents/skill-pr-addresser/templates/ready_comment.hbs new file mode 100644 index 0000000..d0acfaa --- /dev/null +++ b/.claude/agents/skill-pr-addresser/templates/ready_comment.hbs @@ -0,0 +1,23 @@ +## Ready for Re-Review + +All feedback has been addressed for **{{skill_path}}**. + +{{#reviewers}} +{{#reviewers}}@{{.}} {{/reviewers}}Please re-review this PR when convenient. +{{/reviewers}} +{{^reviewers}} +This PR is ready for review. +{{/reviewers}} + +### What was done + +The automated feedback addresser analyzed the review comments and made changes to address the requested modifications. Please verify that: + +1. The changes correctly address your feedback +2. No new issues were introduced +3. The skill documentation is accurate and complete + +If you have additional feedback, please leave new comments and the agent can make another pass. 
+ +--- +Generated at {{timestamp}} | Automated by [skill-pr-addresser](https://github.com/aRustyDev/ai) diff --git a/.claude/agents/skill-pr-addresser/templates/skipped_feedback.hbs b/.claude/agents/skill-pr-addresser/templates/skipped_feedback.hbs new file mode 100644 index 0000000..89a630a --- /dev/null +++ b/.claude/agents/skill-pr-addresser/templates/skipped_feedback.hbs @@ -0,0 +1,23 @@ +## Feedback Not Addressed + +The following feedback items could not be addressed automatically: + +{{#skipped}} +### {{id}} + +**Reason:** {{reason}} + +{{/skipped}} + +--- + +{{#addressed_count}} +*The agent successfully addressed {{addressed_count}} of {{total_count}} feedback items.* +{{/addressed_count}} +{{^addressed_count}} +*No feedback items could be addressed automatically. Please review the items above.* +{{/addressed_count}} + +{{#needs_manual_review}} +**Manual review required** for the remaining items. +{{/needs_manual_review}} diff --git a/.claude/agents/skill-pr-addresser/tests/__init__.py b/.claude/agents/skill-pr-addresser/tests/__init__.py new file mode 100644 index 0000000..12916c5 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/__init__.py @@ -0,0 +1 @@ +"""Tests for skill-pr-addresser.""" diff --git a/.claude/agents/skill-pr-addresser/tests/conftest.py b/.claude/agents/skill-pr-addresser/tests/conftest.py new file mode 100644 index 0000000..8ede511 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/conftest.py @@ -0,0 +1,14 @@ +"""Pytest configuration for skill-pr-addresser tests.""" + +import sys +from pathlib import Path + +# Add the agents directory to path for skill_agents_common imports +_agents_dir = Path(__file__).parent.parent.parent +if str(_agents_dir) not in sys.path: + sys.path.insert(0, str(_agents_dir)) + +# Add the agent src directory to path +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) diff --git a/.claude/agents/skill-pr-addresser/tests/test_addresser.py b/.claude/agents/skill-pr-addresser/tests/test_addresser.py new file mode 100644 index 0000000..b1c8e4a --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_addresser.py @@ -0,0 +1,549 @@ +"""Tests for the Addresser orchestration module.""" + +import json +import subprocess +import pytest +from pathlib import Path +from unittest.mock import patch, MagicMock + +import sys + +# Add agent directory to path +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.addresser import ( + Addresser, + AddressingResult, + IterationResult, +) +from src.costs import CallCost +from src.discovery import DiscoveryContext +from src.github_pr import PRDetails, Review, ReviewThread +from src.feedback import AnalysisResult, FixResult, FeedbackItem +from skill_agents_common.models import AgentSession, Stage + + +# --- Fixtures --- + + +@pytest.fixture +def mock_session(): + """Create a mock session.""" + session = MagicMock(spec=AgentSession) + session.session_id = "test-session-123" + session.stage = Stage.INIT + session.results = {} + session.save = MagicMock() + session.update_stage = MagicMock() + session.add_error = MagicMock() + return session + + +@pytest.fixture +def mock_pr_details(): + """Sample PR details.""" + return PRDetails( + number=795, + title="feat(skills): improve lang-rust-dev", + url="https://github.com/aRustyDev/ai/pull/795", + state="OPEN", + branch="feat/lang-rust-dev", + review_decision="CHANGES_REQUESTED", + base_branch="main", + ) 
+ + +@pytest.fixture +def mock_worktree(tmp_path): + """Sample worktree info.""" + wt = MagicMock() + wt.path = str(tmp_path / "worktree") + Path(wt.path).mkdir(parents=True, exist_ok=True) + return wt + + +@pytest.fixture +def mock_blocking_review(): + """Sample blocking review.""" + return Review( + author="reviewer1", + state="CHANGES_REQUESTED", + body="Please add error handling examples", + submitted_at="2024-01-15T10:00:00Z", + ) + + +@pytest.fixture +def mock_discovery_context(mock_pr_details, mock_worktree, mock_blocking_review, mock_session): + """Sample discovery context.""" + ctx = DiscoveryContext( + pr=mock_pr_details, + pr_number=795, + skill_path="components/skills/lang-rust-dev", + blocking_reviews=[mock_blocking_review], + worktree=mock_worktree, + session=mock_session, + ) + return ctx + + +@pytest.fixture +def agent_dir(tmp_path): + """Create a mock agent directory.""" + # Create templates directory + templates_dir = tmp_path / "templates" + templates_dir.mkdir() + (templates_dir / "iteration_comment.hbs").write_text( + "## Iteration {{iteration}}\nAddressed: {{addressed_count}}" + ) + (templates_dir / "ready_comment.hbs").write_text( + "## Ready for Review\nReviewers: {{#each reviewers}}@{{.}} {{/each}}" + ) + + # Create subagents directories + analyzer_dir = tmp_path / "subagents" / "feedback-analyzer" + analyzer_dir.mkdir(parents=True) + (analyzer_dir / "prompt.md").write_text("Analyze feedback") + (analyzer_dir / "config.yml").write_text("model: claude-3-5-haiku-20241022") + + fixer_dir = tmp_path / "subagents" / "feedback-fixer" + fixer_dir.mkdir(parents=True) + (fixer_dir / "prompt.md").write_text("Fix feedback") + (fixer_dir / "config.yml").write_text("model: claude-sonnet-4-20250514") + + return tmp_path + + +@pytest.fixture +def sessions_dir(tmp_path): + """Create sessions directory.""" + sessions = tmp_path / "sessions" + sessions.mkdir() + return sessions + + +# --- Tests for AddressingResult --- + + +class TestAddressingResult: + """Tests for AddressingResult dataclass.""" + + def test_creates_result(self): + """Should create an addressing result.""" + result = AddressingResult( + success=True, + iterations_run=2, + total_addressed=5, + total_skipped=1, + files_modified=["SKILL.md"], + final_commit_sha="abc123", + ready_for_review=True, + ) + assert result.success is True + assert result.iterations_run == 2 + assert result.total_addressed == 5 + assert result.ready_for_review is True + + def test_defaults(self): + """Should have sensible defaults.""" + result = AddressingResult( + success=False, + iterations_run=0, + total_addressed=0, + total_skipped=0, + ) + assert result.files_modified == [] + assert result.final_commit_sha is None + assert result.ready_for_review is False + assert result.error is None + assert result.iteration_results == [] + + +# --- Tests for IterationResult --- + + +class TestIterationResult: + """Tests for IterationResult dataclass.""" + + def test_creates_result(self): + """Should create an iteration result.""" + analysis = AnalysisResult( + feedback_items=[], + blocking_reviews=[], + approved_by=[], + summary="test", + ) + fix_result = FixResult() + + result = IterationResult( + iteration=1, + analysis=analysis, + fix_result=fix_result, + commit_sha="abc123", + pushed=True, + ) + + assert result.iteration == 1 + assert result.commit_sha == "abc123" + assert result.pushed is True + + +# --- Tests for Addresser --- + + +class TestAddresser: + """Tests for Addresser class.""" + + def test_creates_addresser(self, agent_dir, 
sessions_dir): + """Should create an addresser instance.""" + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner="aRustyDev", + repo="ai", + ) + + assert addresser.owner == "aRustyDev" + assert addresser.repo == "ai" + assert addresser.rate_limit_delay == 1.0 + + def test_address_returns_empty_when_no_feedback( + self, agent_dir, sessions_dir, mock_discovery_context + ): + """Should return early when no feedback items found.""" + mock_analysis = AnalysisResult( + feedback_items=[], + blocking_reviews=[], + approved_by=[], + summary="No feedback", + ) + mock_cost = CallCost( + subagent="feedback-analyzer", + model="haiku", + input_tokens=100, + output_tokens=50, + input_cost=0.0001, + output_cost=0.00025, + total_cost=0.00035, + ) + + with patch("src.addresser.analyze_feedback", return_value=(mock_analysis, mock_cost)): + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner="aRustyDev", + repo="ai", + ) + + result = addresser.address(mock_discovery_context, max_iterations=3) + + assert result.success is False + assert result.total_addressed == 0 + assert result.iterations_run == 0 + + def test_address_fixes_feedback_items( + self, agent_dir, sessions_dir, mock_discovery_context + ): + """Should fix feedback items and commit changes.""" + mock_analysis = AnalysisResult( + feedback_items=[ + FeedbackItem( + id="thread-1", + type="change_request", + file="SKILL.md", + line=10, + description="Add example", + priority="high", + resolved=False, + ) + ], + blocking_reviews=["reviewer1"], + approved_by=[], + summary="Needs fixes", + ) + mock_analysis_cost = CallCost( + subagent="feedback-analyzer", + model="haiku", + input_tokens=100, + output_tokens=50, + input_cost=0.0001, + output_cost=0.00025, + total_cost=0.00035, + ) + + mock_fix_result = FixResult( + addressed=[{"id": "thread-1", "action": "Added example"}], + skipped=[], + files_modified=["SKILL.md"], + lines_added=10, + lines_removed=0, + ) + mock_fix_cost = CallCost( + subagent="feedback-fixer", + model="sonnet", + input_tokens=500, + output_tokens=200, + input_cost=0.0015, + output_cost=0.003, + total_cost=0.0045, + ) + + with patch("src.addresser.analyze_feedback", return_value=(mock_analysis, mock_analysis_cost)): + with patch("src.addresser.fix_with_escalation", return_value=(mock_fix_result, [mock_fix_cost])): + with patch.object(Addresser, "_commit_changes", return_value="abc123"): + with patch.object(Addresser, "_push_changes", return_value=True): + with patch.object(Addresser, "_add_iteration_comment", return_value="url"): + with patch.object(Addresser, "_request_rereview", return_value=True): + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner="aRustyDev", + repo="ai", + ) + + result = addresser.address( + mock_discovery_context, max_iterations=3 + ) + + assert result.success is True + assert result.total_addressed == 1 + assert result.iterations_run == 1 + assert result.final_commit_sha == "abc123" + assert result.total_cost > 0 + + def test_address_stops_after_high_success_rate( + self, agent_dir, sessions_dir, mock_discovery_context + ): + """Should stop when success rate is >= 90%.""" + mock_analysis = AnalysisResult( + feedback_items=[ + FeedbackItem( + id="thread-1", + type="nitpick", + file="SKILL.md", + line=10, + description="Fix typo", + priority="low", + resolved=False, + ) + ], + blocking_reviews=[], + approved_by=[], + summary="Minor fix", + ) + mock_analysis_cost = CallCost( + subagent="feedback-analyzer", + model="haiku", 
+ input_tokens=100, + output_tokens=50, + input_cost=0.0001, + output_cost=0.00025, + total_cost=0.00035, + ) + + mock_fix_result = FixResult( + addressed=[{"id": "thread-1", "action": "Fixed typo"}], + skipped=[], + files_modified=["SKILL.md"], + lines_added=1, + lines_removed=1, + ) + mock_fix_cost = CallCost( + subagent="feedback-fixer", + model="haiku", + input_tokens=200, + output_tokens=100, + input_cost=0.0002, + output_cost=0.0005, + total_cost=0.0007, + ) + + with patch("src.addresser.analyze_feedback", return_value=(mock_analysis, mock_analysis_cost)): + with patch("src.addresser.fix_with_escalation", return_value=(mock_fix_result, [mock_fix_cost])): + with patch.object(Addresser, "_commit_changes", return_value="abc123"): + with patch.object(Addresser, "_push_changes", return_value=True): + with patch.object(Addresser, "_add_iteration_comment", return_value=None): + with patch.object(Addresser, "_request_rereview", return_value=True): + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner="aRustyDev", + repo="ai", + ) + + result = addresser.address( + mock_discovery_context, max_iterations=3 + ) + + # Should stop after 1 iteration due to high success rate + assert result.iterations_run == 1 + assert result.success is True + assert result.ready_for_review is True + + +class TestAddresserCommit: + """Tests for Addresser commit functionality.""" + + def test_commit_changes_creates_commit( + self, agent_dir, sessions_dir, mock_discovery_context + ): + """Should create a commit with changes.""" + # Create a file in the worktree + worktree_path = Path(mock_discovery_context.worktree.path) + (worktree_path / "SKILL.md").write_text("# Test") + + mock_fix_result = FixResult( + addressed=[{"id": "test", "action": "Fixed"}], + files_modified=["SKILL.md"], + lines_added=5, + lines_removed=0, + ) + + # Mock git commands + def mock_run(cmd, **kwargs): + result = MagicMock() + if cmd[1] == "status": + result.returncode = 0 + result.stdout = "M SKILL.md\n" + elif cmd[1] == "add": + result.returncode = 0 + elif cmd[1] == "commit": + result.returncode = 0 + elif cmd[1] == "rev-parse": + result.returncode = 0 + result.stdout = "abc123def456" + return result + + with patch("subprocess.run", side_effect=mock_run): + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner="aRustyDev", + repo="ai", + ) + + sha = addresser._commit_changes(mock_discovery_context, 1, mock_fix_result) + + assert sha == "abc123def456" + + def test_commit_returns_none_when_no_changes( + self, agent_dir, sessions_dir, mock_discovery_context + ): + """Should return None when there are no changes.""" + mock_fix_result = FixResult() + + def mock_run(cmd, **kwargs): + result = MagicMock() + result.returncode = 0 + result.stdout = "" # No changes + return result + + with patch("subprocess.run", side_effect=mock_run): + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner="aRustyDev", + repo="ai", + ) + + sha = addresser._commit_changes(mock_discovery_context, 1, mock_fix_result) + + assert sha is None + + +class TestAddresserPush: + """Tests for Addresser push functionality.""" + + def test_push_changes_succeeds( + self, agent_dir, sessions_dir, mock_discovery_context + ): + """Should push changes successfully.""" + mock_result = MagicMock() + mock_result.returncode = 0 + + with patch("subprocess.run", return_value=mock_result): + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner="aRustyDev", + repo="ai", + ) + 
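+            # subprocess.run is patched to return exit code 0, so the push should be reported as successful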
+ success = addresser._push_changes(mock_discovery_context) + + assert success is True + + def test_push_changes_fails( + self, agent_dir, sessions_dir, mock_discovery_context + ): + """Should handle push failure.""" + mock_result = MagicMock() + mock_result.returncode = 1 + mock_result.stderr = "Push failed" + + with patch("subprocess.run", return_value=mock_result): + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner="aRustyDev", + repo="ai", + ) + + success = addresser._push_changes(mock_discovery_context) + + assert success is False + + +class TestAddresserComments: + """Tests for Addresser comment functionality.""" + + def test_add_iteration_comment( + self, agent_dir, sessions_dir, mock_discovery_context + ): + """Should add an iteration comment.""" + mock_analysis = AnalysisResult( + feedback_items=[], + blocking_reviews=[], + approved_by=[], + summary="test", + ) + mock_fix_result = FixResult( + addressed=[{"id": "1", "action": "Fixed"}], + skipped=[], + ) + + with patch("src.addresser.add_pr_comment", return_value="https://github.com/comment"): + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner="aRustyDev", + repo="ai", + ) + + url = addresser._add_iteration_comment( + mock_discovery_context, 1, mock_analysis, mock_fix_result, "abc123" + ) + + assert url == "https://github.com/comment" + + def test_request_rereview( + self, agent_dir, sessions_dir, mock_discovery_context + ): + """Should request re-review from blocking reviewers.""" + with patch("src.addresser.add_pr_comment", return_value="url"): + with patch("src.addresser.request_rereview", return_value=True) as mock_rereview: + addresser = Addresser( + agent_dir=agent_dir, + sessions_dir=sessions_dir, + owner="aRustyDev", + repo="ai", + ) + + success = addresser._request_rereview(mock_discovery_context) + + assert success is True + mock_rereview.assert_called_once_with( + "aRustyDev", "ai", 795, ["reviewer1"] + ) diff --git a/.claude/agents/skill-pr-addresser/tests/test_costs.py b/.claude/agents/skill-pr-addresser/tests/test_costs.py new file mode 100644 index 0000000..689c171 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_costs.py @@ -0,0 +1,179 @@ +"""Tests for the costs module.""" + +import pytest +from pathlib import Path + +import sys + +# Add agent directory to path +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.costs import ( + CallCost, + SessionCosts, + get_model_pricing, + estimate_call_cost, + estimate_pr_cost, + format_cost, + get_cost_summary, +) + + +class TestModelPricing: + """Tests for get_model_pricing function.""" + + def test_exact_match(self): + """Should return pricing for exact model match.""" + pricing = get_model_pricing("claude-3-5-haiku-20241022") + assert pricing["input"] == 0.001 + assert pricing["output"] == 0.005 + + def test_partial_match(self): + """Should return pricing for partial model match.""" + pricing = get_model_pricing("haiku-35") + assert pricing["input"] == 0.001 + + def test_unknown_model_defaults_to_sonnet(self): + """Should default to Sonnet pricing for unknown models.""" + pricing = get_model_pricing("unknown-model") + assert pricing["input"] == 0.003 + assert pricing["output"] == 0.015 + + +class TestEstimateCallCost: + """Tests for estimate_call_cost function.""" + + def test_estimates_with_actual_tokens(self): + """Should calculate cost from actual tokens.""" + cost = estimate_call_cost( + "feedback-analyzer", + 
"claude-3-5-haiku-20241022", + input_tokens=1000, + output_tokens=500, + ) + + # 1000/1000 * 0.001 = 0.001 input + # 500/1000 * 0.005 = 0.0025 output + assert cost.input_cost == 0.001 + assert cost.output_cost == 0.0025 + assert cost.total_cost == 0.0035 + + def test_estimates_without_tokens(self): + """Should use default estimates when tokens not provided.""" + cost = estimate_call_cost( + "feedback-analyzer", + "claude-3-5-haiku-20241022", + ) + + # Uses ESTIMATED_TOKENS defaults + assert cost.input_tokens == 2000 + assert cost.output_tokens == 500 + assert cost.total_cost > 0 + + +class TestEstimatePRCost: + """Tests for estimate_pr_cost function.""" + + def test_single_iteration(self): + """Should estimate cost for single iteration.""" + cost = estimate_pr_cost(num_iterations=1) + # Should be sum of analysis + fix + assert cost > 0 + + def test_multiple_iterations(self): + """Should scale cost with iterations.""" + cost_1 = estimate_pr_cost(num_iterations=1) + cost_3 = estimate_pr_cost(num_iterations=3) + assert cost_3 > cost_1 * 2 # At least more than 2x + + +class TestFormatCost: + """Tests for format_cost function.""" + + def test_formats_small_cost(self): + """Should format small costs with more precision.""" + assert format_cost(0.001) == "$0.0010" + assert format_cost(0.0035) == "$0.0035" + + def test_formats_normal_cost(self): + """Should format normal costs with 2 decimals.""" + assert format_cost(0.75) == "$0.75" + assert format_cost(1.50) == "$1.50" + + +class TestSessionCosts: + """Tests for SessionCosts class.""" + + def test_add_call(self): + """Should track cumulative costs.""" + costs = SessionCosts(session_id="test-123", pr_number=795) + + costs.add_call( + CallCost( + subagent="feedback-analyzer", + model="haiku", + input_tokens=1000, + output_tokens=500, + input_cost=0.001, + output_cost=0.0025, + total_cost=0.0035, + ) + ) + + assert len(costs.calls) == 1 + assert costs.total_input_tokens == 1000 + assert costs.total_output_tokens == 500 + assert costs.total_cost == 0.0035 + + def test_save_and_load(self, tmp_path): + """Should persist and reload costs.""" + costs = SessionCosts(session_id="test-123", pr_number=795) + costs.add_call( + CallCost( + subagent="feedback-analyzer", + model="haiku", + input_tokens=1000, + output_tokens=500, + input_cost=0.001, + output_cost=0.0025, + total_cost=0.0035, + ) + ) + + # Save + costs.save(tmp_path) + + # Load + loaded = SessionCosts.load(tmp_path, "test-123") + + assert loaded is not None + assert loaded.pr_number == 795 + assert len(loaded.calls) == 1 + assert loaded.total_cost == 0.0035 + + +class TestGetCostSummary: + """Tests for get_cost_summary function.""" + + def test_generates_summary(self): + """Should generate readable summary.""" + costs = SessionCosts(session_id="test-123", pr_number=795) + costs.add_call( + CallCost( + subagent="feedback-analyzer", + model="haiku", + input_tokens=1000, + output_tokens=500, + input_cost=0.001, + output_cost=0.0025, + total_cost=0.0035, + ) + ) + + summary = get_cost_summary(costs) + + assert "PR #795" in summary + assert "feedback-analyzer" in summary + assert "$" in summary diff --git a/.claude/agents/skill-pr-addresser/tests/test_cross_reference.py b/.claude/agents/skill-pr-addresser/tests/test_cross_reference.py new file mode 100644 index 0000000..06cfd08 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_cross_reference.py @@ -0,0 +1,297 @@ +# tests/test_cross_reference.py +"""Tests for cross-reference detection. + +Stage 9 tests for linking reviews to threads. 
+""" + +import pytest +from datetime import datetime, timezone +from unittest.mock import MagicMock +import sys +from pathlib import Path + +# Add agent directory to path for imports +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.cross_reference import ( + extract_line_references, + extract_file_references, + link_reviews_to_threads, + mark_linked_threads, + find_duplicate_feedback, + _are_similar, +) +from src.models import ReviewFeedback, ThreadFeedback, ThreadComment +from src.filter import FilteredFeedback, FilteredThread + + +# ============================================================================= +# extract_line_references Tests +# ============================================================================= + + +class TestExtractLineReferences: + def test_line_pattern(self): + """Should extract 'line N' pattern.""" + assert extract_line_references("see line 42") == [42] + + def test_line_pattern_case_insensitive(self): + """Should be case insensitive.""" + assert extract_line_references("see Line 42") == [42] + assert extract_line_references("see LINE 42") == [42] + + def test_L_pattern(self): + """Should extract 'LN' pattern.""" + assert extract_line_references("see L42") == [42] + + def test_lines_range(self): + """Should extract 'lines N-M' pattern.""" + assert extract_line_references("lines 10-20") == [10, 20] + + def test_github_link(self): + """Should extract '#LN' GitHub link pattern.""" + assert extract_line_references("SKILL.md#L42") == [42] + + def test_at_line_pattern(self): + """Should extract 'at line N' pattern.""" + assert extract_line_references("at line 42") == [42] + + def test_multiple_refs(self): + """Should extract multiple references.""" + text = "see line 42 and L100, also lines 10-20" + result = extract_line_references(text) + assert 10 in result + assert 20 in result + assert 42 in result + assert 100 in result + + def test_empty_text(self): + """Should handle empty text.""" + assert extract_line_references("") == [] + assert extract_line_references(None) == [] + + def test_no_matches(self): + """Should return empty list when no matches.""" + assert extract_line_references("no line references here") == [] + + +# ============================================================================= +# extract_file_references Tests +# ============================================================================= + + +class TestExtractFileReferences: + def test_in_pattern(self): + """Should extract 'in file.ext' pattern.""" + assert extract_file_references("in SKILL.md") == ["SKILL.md"] + + def test_see_pattern(self): + """Should extract 'see path/file.ext' pattern.""" + assert extract_file_references("see examples/foo.py") == ["examples/foo.py"] + + def test_backtick_pattern(self): + """Should extract backtick-quoted paths.""" + assert extract_file_references("check `src/app.py`") == ["src/app.py"] + + def test_multiple_refs(self): + """Should extract multiple file references.""" + text = "see `SKILL.md` and in examples/test.py" + result = extract_file_references(text) + assert "SKILL.md" in result + assert "examples/test.py" in result + + def test_empty_text(self): + """Should handle empty text.""" + assert extract_file_references("") == [] + assert extract_file_references(None) == [] + + def test_deduplication(self): + """Should deduplicate file references.""" + text = "see `SKILL.md` and in SKILL.md" + result = extract_file_references(text) + assert len(result) == 1 + assert 
result[0] == "SKILL.md" + + +# ============================================================================= +# link_reviews_to_threads Tests +# ============================================================================= + + +class TestLinkReviewsToThreads: + def _make_review(self, id: str, body: str) -> ReviewFeedback: + """Helper to create review.""" + return ReviewFeedback( + id=id, + state="CHANGES_REQUESTED", + body=body, + author="reviewer", + submitted_at=datetime.now(timezone.utc), + ) + + def _make_thread( + self, id: str, path: str = "SKILL.md", line: int = 42 + ) -> ThreadFeedback: + """Helper to create thread.""" + return ThreadFeedback( + id=id, + path=path, + line=line, + is_resolved=False, + is_outdated=False, + comments=[ + ThreadComment( + id=f"{id}_c1", + body="Comment", + author="reviewer", + created_at=datetime.now(timezone.utc), + ) + ], + ) + + def test_links_by_line(self): + """Should link review to thread by line number.""" + reviews = [self._make_review("R_1", "Fix the issue at line 42")] + threads = [ + self._make_thread("T_1", line=42), + self._make_thread("T_2", line=100), + ] + links = link_reviews_to_threads(reviews, threads) + + assert links == {"R_1": ["T_1"]} + + def test_no_links_when_no_refs(self): + """Should not link when no line references.""" + reviews = [self._make_review("R_1", "General feedback")] + threads = [self._make_thread("T_1", line=42)] + links = link_reviews_to_threads(reviews, threads) + + assert links == {} + + def test_links_multiple_threads(self): + """Should link to multiple threads at same line.""" + reviews = [self._make_review("R_1", "Fix issues at line 42")] + threads = [ + self._make_thread("T_1", line=42), + self._make_thread("T_2", line=42), + self._make_thread("T_3", line=100), + ] + links = link_reviews_to_threads(reviews, threads) + + assert "R_1" in links + assert "T_1" in links["R_1"] + assert "T_2" in links["R_1"] + assert "T_3" not in links["R_1"] + + def test_updates_review_references(self): + """Should update review with detected references.""" + reviews = [self._make_review("R_1", "Fix line 42 in SKILL.md")] + threads = [self._make_thread("T_1", line=42)] + link_reviews_to_threads(reviews, threads) + + assert 42 in reviews[0].references_lines + assert "SKILL.md" in reviews[0].references_files + + +# ============================================================================= +# mark_linked_threads Tests +# ============================================================================= + + +class TestMarkLinkedThreads: + def test_marks_linked_threads(self): + """Should mark threads with linked review ID.""" + thread = ThreadFeedback( + id="T_1", + path="SKILL.md", + line=42, + is_resolved=False, + is_outdated=False, + comments=[ + ThreadComment( + id="c1", + body="Fix", + author="reviewer", + created_at=datetime.now(timezone.utc), + ) + ], + ) + filtered = FilteredFeedback( + threads=[FilteredThread(thread=thread, new_comments=thread.comments)] + ) + links = {"R_1": ["T_1"]} + + mark_linked_threads(filtered, links) + + assert filtered.threads[0].thread.linked_to_review == "R_1" + + def test_does_not_mark_unlinked(self): + """Should not mark threads that aren't linked.""" + thread = ThreadFeedback( + id="T_1", + path="SKILL.md", + line=42, + is_resolved=False, + is_outdated=False, + comments=[], + ) + filtered = FilteredFeedback( + threads=[FilteredThread(thread=thread, new_comments=[])] + ) + links = {"R_1": ["T_2"]} # Different thread + + mark_linked_threads(filtered, links) + + assert 
filtered.threads[0].thread.linked_to_review is None + + +# ============================================================================= +# find_duplicate_feedback Tests +# ============================================================================= + + +class TestFindDuplicateFeedback: + def test_finds_similar_reviews(self): + """Should find reviews with similar body text.""" + r1 = ReviewFeedback( + id="R_1", + state="CHANGES_REQUESTED", + body="Please add more tests for this function", + author="reviewer", + submitted_at=datetime.now(timezone.utc), + ) + r2 = ReviewFeedback( + id="R_2", + state="CHANGES_REQUESTED", + body="Please add more tests for this function please", + author="reviewer", + submitted_at=datetime.now(timezone.utc), + ) + filtered = FilteredFeedback(reviews=[r1, r2]) + + duplicates = find_duplicate_feedback(filtered) + + assert "R_1" in duplicates + assert "R_2" in duplicates["R_1"] + + +class TestAreSimilar: + def test_similar_texts(self): + """Should detect similar texts (>80% word overlap).""" + # 9/11 ≈ 81.8% overlap (exceeds 80% threshold) + assert _are_similar( + "Please add more tests for this function in the code", + "Please add more tests for this function in the module" + ) is True + + def test_different_texts(self): + """Should detect different texts.""" + assert _are_similar("hello world", "completely different text here") is False + + def test_empty_texts(self): + """Should handle empty texts.""" + assert _are_similar("", "hello") is False + assert _are_similar("hello", "") is False + assert _are_similar(None, "hello") is False diff --git a/.claude/agents/skill-pr-addresser/tests/test_discovery.py b/.claude/agents/skill-pr-addresser/tests/test_discovery.py new file mode 100644 index 0000000..aa7c1a2 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_discovery.py @@ -0,0 +1,352 @@ +"""Tests for discovery module.""" + +import pytest +from pathlib import Path +from unittest.mock import patch, MagicMock + +import sys + +# Add agent directory to path +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.discovery import discover, DiscoveryContext +from src.github_pr import PRDetails, Review, Comment, ReviewThread, PendingFeedback +from src.exceptions import PRNotFoundError, PRClosedError, NoFeedbackError + + +# --- Fixtures --- + + +@pytest.fixture +def mock_pr_details(): + """Sample PR details for testing.""" + return PRDetails( + number=795, + title="feat(skills): add lang-rust-dev skill", + url="https://github.com/aRustyDev/ai/pull/795", + state="OPEN", + branch="feat/lang-rust-dev", + body="Closes #123\n\nAdds the lang-rust-dev skill.", + is_draft=False, + mergeable="MERGEABLE", + review_decision="CHANGES_REQUESTED", + base_branch="main", + head_sha="abc123", + changed_files=[ + "components/skills/lang-rust-dev/SKILL.md", + "components/skills/lang-rust-dev/examples/ownership.md", + ], + ) + + +@pytest.fixture +def mock_blocking_review(): + """Sample blocking review.""" + return Review( + author="reviewer1", + state="CHANGES_REQUESTED", + body="Please add more examples", + submitted_at="2024-01-15T10:00:00Z", + ) + + +@pytest.fixture +def mock_unresolved_thread(): + """Sample unresolved thread.""" + return ReviewThread( + id="thread_1", + path="components/skills/lang-rust-dev/SKILL.md", + line=42, + is_resolved=False, + is_outdated=False, + comments=[ + { + "author": {"login": "reviewer1"}, + "body": "Missing ownership section", + "createdAt": "2024-01-15T10:05:00Z", + 
} + ], + ) + + +@pytest.fixture +def tmp_sessions_dir(tmp_path): + """Temporary sessions directory.""" + sessions = tmp_path / "sessions" + sessions.mkdir() + return sessions + + +@pytest.fixture +def tmp_worktree_base(tmp_path): + """Temporary worktree base directory.""" + worktrees = tmp_path / "worktrees" + worktrees.mkdir() + return worktrees + + +# --- Tests --- + + +class TestDiscoverPRNotFound: + """Tests for PR not found scenario.""" + + def test_raises_when_pr_does_not_exist(self, tmp_sessions_dir, tmp_worktree_base): + """Should raise PRNotFoundError when PR doesn't exist.""" + with patch("src.discovery.get_pr_details", return_value=None): + with pytest.raises(PRNotFoundError) as exc_info: + discover( + owner="aRustyDev", + repo="ai", + pr_number=99999, + sessions_dir=tmp_sessions_dir, + worktree_base=tmp_worktree_base, + repo_path=Path("/fake/repo"), + ) + assert "99999" in str(exc_info.value) + + +class TestDiscoverPRClosed: + """Tests for closed/merged PR scenario.""" + + def test_raises_when_pr_is_merged( + self, mock_pr_details, tmp_sessions_dir, tmp_worktree_base + ): + """Should raise PRClosedError when PR is merged.""" + mock_pr_details.state = "MERGED" + + with patch("src.discovery.get_pr_details", return_value=mock_pr_details): + with pytest.raises(PRClosedError) as exc_info: + discover( + owner="aRustyDev", + repo="ai", + pr_number=795, + sessions_dir=tmp_sessions_dir, + worktree_base=tmp_worktree_base, + repo_path=Path("/fake/repo"), + ) + assert "merged" in str(exc_info.value).lower() + + def test_raises_when_pr_is_closed( + self, mock_pr_details, tmp_sessions_dir, tmp_worktree_base + ): + """Should raise PRClosedError when PR is closed.""" + mock_pr_details.state = "CLOSED" + + with patch("src.discovery.get_pr_details", return_value=mock_pr_details): + with pytest.raises(PRClosedError) as exc_info: + discover( + owner="aRustyDev", + repo="ai", + pr_number=795, + sessions_dir=tmp_sessions_dir, + worktree_base=tmp_worktree_base, + repo_path=Path("/fake/repo"), + ) + assert "closed" in str(exc_info.value).lower() + + +class TestDiscoverNoFeedback: + """Tests for no feedback scenario.""" + + def test_raises_when_no_pending_feedback( + self, mock_pr_details, tmp_sessions_dir, tmp_worktree_base + ): + """Should raise NoFeedbackError when no feedback to address.""" + empty_feedback = PendingFeedback() # No feedback + + with patch("src.discovery.get_pr_details", return_value=mock_pr_details): + with patch( + "src.discovery.get_pending_feedback", + return_value=empty_feedback, + ): + with pytest.raises(NoFeedbackError) as exc_info: + discover( + owner="aRustyDev", + repo="ai", + pr_number=795, + sessions_dir=tmp_sessions_dir, + worktree_base=tmp_worktree_base, + repo_path=Path("/fake/repo"), + ) + assert "no pending feedback" in str(exc_info.value).lower() + + def test_force_bypasses_no_feedback_check( + self, mock_pr_details, tmp_sessions_dir, tmp_worktree_base + ): + """Should not raise when force=True even with no feedback.""" + mock_session = MagicMock() + mock_session.session_id = "test-session" + mock_session.worktree_path = "" + mock_session.results = {} + + mock_worktree = MagicMock() + mock_worktree.path = tmp_worktree_base / "pr-795" + + empty_feedback = PendingFeedback() # No feedback + + with patch("src.discovery.get_pr_details", return_value=mock_pr_details): + with patch( + "src.discovery.get_pending_feedback", + return_value=empty_feedback, + ): + with patch( + "src.discovery.find_session_by_pr", + return_value=mock_session, + ): + with patch( + 
"src.discovery.get_or_create_worktree", + return_value=mock_worktree, + ): + # Should not raise + ctx = discover( + owner="aRustyDev", + repo="ai", + pr_number=795, + sessions_dir=tmp_sessions_dir, + worktree_base=tmp_worktree_base, + repo_path=Path("/fake/repo"), + force=True, + ) + assert ctx.feedback_count == 0 + + +class TestDiscoverContext: + """Tests for DiscoveryContext properties.""" + + def test_feedback_count( + self, mock_pr_details, mock_blocking_review, mock_unresolved_thread + ): + """Should correctly count total feedback items.""" + ctx = DiscoveryContext( + pr=mock_pr_details, + pr_number=795, + skill_path="components/skills/lang-rust-dev", + blocking_reviews=[mock_blocking_review], + unresolved_threads=[mock_unresolved_thread], + ) + assert ctx.feedback_count == 2 + + def test_blocking_reviewers(self, mock_pr_details, mock_blocking_review): + """Should list blocking reviewer usernames.""" + ctx = DiscoveryContext( + pr=mock_pr_details, + pr_number=795, + skill_path="components/skills/lang-rust-dev", + blocking_reviews=[mock_blocking_review], + ) + assert ctx.blocking_reviewers == ["reviewer1"] + + def test_needs_changes_with_blocking_reviews( + self, mock_pr_details, mock_blocking_review + ): + """Should return True when blocking reviews exist.""" + ctx = DiscoveryContext( + pr=mock_pr_details, + pr_number=795, + skill_path="components/skills/lang-rust-dev", + blocking_reviews=[mock_blocking_review], + ) + assert ctx.needs_changes is True + + def test_needs_changes_with_unresolved_threads( + self, mock_pr_details, mock_unresolved_thread + ): + """Should return True when unresolved threads exist.""" + ctx = DiscoveryContext( + pr=mock_pr_details, + pr_number=795, + skill_path="components/skills/lang-rust-dev", + unresolved_threads=[mock_unresolved_thread], + ) + assert ctx.needs_changes is True + + def test_needs_changes_false_when_no_feedback(self, mock_pr_details): + """Should return False when no blocking feedback.""" + ctx = DiscoveryContext( + pr=mock_pr_details, + pr_number=795, + skill_path="components/skills/lang-rust-dev", + ) + assert ctx.needs_changes is False + + +class TestInferSkillPath: + """Tests for skill path inference.""" + + def test_infers_skill_from_changed_files( + self, mock_pr_details, mock_blocking_review, tmp_sessions_dir, tmp_worktree_base + ): + """Should infer skill path from changed files.""" + mock_session = MagicMock() + mock_session.session_id = "test-session" + mock_session.worktree_path = "" + mock_session.results = {} + + mock_worktree = MagicMock() + mock_worktree.path = tmp_worktree_base / "pr-795" + + feedback_with_review = PendingFeedback(blocking_reviews=[mock_blocking_review]) + + with patch("src.discovery.get_pr_details", return_value=mock_pr_details): + with patch( + "src.discovery.get_pending_feedback", + return_value=feedback_with_review, + ): + with patch( + "src.discovery.find_session_by_pr", + return_value=mock_session, + ): + with patch( + "src.discovery.get_or_create_worktree", + return_value=mock_worktree, + ): + ctx = discover( + owner="aRustyDev", + repo="ai", + pr_number=795, + sessions_dir=tmp_sessions_dir, + worktree_base=tmp_worktree_base, + repo_path=Path("/fake/repo"), + ) + assert ctx.skill_path == "components/skills/lang-rust-dev" + + def test_explicit_skill_path_overrides_inference( + self, mock_pr_details, mock_blocking_review, tmp_sessions_dir, tmp_worktree_base + ): + """Should use explicit skill path when provided.""" + mock_session = MagicMock() + mock_session.session_id = "test-session" + 
mock_session.worktree_path = "" + mock_session.results = {} + + mock_worktree = MagicMock() + mock_worktree.path = tmp_worktree_base / "pr-795" + + feedback_with_review = PendingFeedback(blocking_reviews=[mock_blocking_review]) + + with patch("src.discovery.get_pr_details", return_value=mock_pr_details): + with patch( + "src.discovery.get_pending_feedback", + return_value=feedback_with_review, + ): + with patch( + "src.discovery.find_session_by_pr", + return_value=mock_session, + ): + with patch( + "src.discovery.get_or_create_worktree", + return_value=mock_worktree, + ): + ctx = discover( + owner="aRustyDev", + repo="ai", + pr_number=795, + sessions_dir=tmp_sessions_dir, + worktree_base=tmp_worktree_base, + repo_path=Path("/fake/repo"), + skill_path="components/skills/other-skill", + ) + assert ctx.skill_path == "components/skills/other-skill" diff --git a/.claude/agents/skill-pr-addresser/tests/test_dry_run.py b/.claude/agents/skill-pr-addresser/tests/test_dry_run.py new file mode 100644 index 0000000..56a48c3 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_dry_run.py @@ -0,0 +1,199 @@ +# tests/test_dry_run.py +"""Tests for dry-run mode. + +Stage 10 tests for preview without changes. +""" + +import pytest +import sys +from pathlib import Path + +# Add src to path for imports +sys.path.insert(0, str(Path(__file__).parent.parent / "src")) + +from dry_run import DryRunSummary, DryRunMode + + +# ============================================================================= +# DryRunSummary Tests +# ============================================================================= + + +class TestDryRunSummary: + def test_basic_text_output(self): + """Should format basic text output.""" + summary = DryRunSummary( + pr_number=795, + discovery_summary={ + "reviews": 2, + "comments": 3, + "threads": 5, + }, + filter_summary={ + "item_count": 4, + "skipped_unchanged": ["a", "b"], + "skipped_resolved": ["c"], + }, + ) + + text = summary.to_text() + + assert "PR #795" in text + assert "Reviews: 2" in text + assert "Comments: 3" in text + assert "Threads: 5" in text + assert "New items: 4" in text + assert "Unchanged: 2" in text + assert "Resolved: 1" in text + assert "No changes made (dry run)" in text + + def test_with_consolidation(self): + """Should include consolidation summary.""" + summary = DryRunSummary( + pr_number=795, + discovery_summary={"reviews": 1, "comments": 0, "threads": 2}, + filter_summary={"item_count": 2, "skipped_unchanged": [], "skipped_resolved": []}, + consolidation_summary={ + "action_groups": [ + {"id": "g1", "type": "add_section", "location_count": 2}, + {"id": "g2", "type": "fix_typo", "location_count": 1}, + ], + "guidance": ["item1", "item2"], + }, + ) + + text = summary.to_text() + + assert "Consolidation" in text + assert "Action Groups: 2" in text + assert "g1: add_section (2 locations)" in text + assert "Guidance: 2 items" in text + + def test_with_plan(self): + """Should include plan summary.""" + summary = DryRunSummary( + pr_number=795, + discovery_summary={"reviews": 1, "comments": 0, "threads": 0}, + filter_summary={"item_count": 1, "skipped_unchanged": [], "skipped_resolved": []}, + consolidation_summary={"action_groups": [], "guidance": []}, + plan_summary={ + "steps": [ + {"group_id": "g1", "priority": "high", "description": "Add missing section"}, + {"group_id": "g2", "priority": "medium", "description": "Fix formatting"}, + ], + "total_items": 3, + }, + ) + + text = summary.to_text() + + assert "Execution Plan" in text + assert "[high] g1: Add 
missing section" in text + assert "[medium] g2: Fix formatting" in text + assert "Would address 3 feedback items" in text + + def test_to_dict(self): + """Should serialize to dictionary.""" + summary = DryRunSummary( + pr_number=795, + discovery_summary={"reviews": 1}, + filter_summary={"item_count": 1}, + ) + + data = summary.to_dict() + + assert data["pr_number"] == 795 + assert "discovery" in data + assert "filter" in data + + +# ============================================================================= +# DryRunMode Tests +# ============================================================================= + + +class TestDryRunMode: + def test_not_enabled_by_default(self): + """Mode should not be enabled by default.""" + mode = DryRunMode() + assert mode.enabled is False + + def test_enabled(self): + """Should track enabled state.""" + mode = DryRunMode(enabled=True) + assert mode.enabled is True + + def test_record_action(self): + """Should record actions when enabled.""" + mode = DryRunMode(enabled=True) + mode.record_action("commit", message="test", files=["a.md"]) + + assert len(mode.recorded_actions) == 1 + assert mode.recorded_actions[0]["type"] == "commit" + assert mode.recorded_actions[0]["message"] == "test" + + def test_would_commit(self): + """Should record commit actions.""" + mode = DryRunMode(enabled=True) + mode.would_commit("Add section", ["SKILL.md", "examples/foo.py"]) + + actions = mode.recorded_actions + assert len(actions) == 1 + assert actions[0]["type"] == "commit" + assert "SKILL.md" in actions[0]["files"] + + def test_would_resolve(self): + """Should record resolve actions.""" + mode = DryRunMode(enabled=True) + mode.would_resolve("PRRT_123") + + actions = mode.recorded_actions + assert actions[0]["type"] == "resolve_thread" + assert actions[0]["thread_id"] == "PRRT_123" + + def test_would_comment(self): + """Should record comment actions with truncated body.""" + mode = DryRunMode(enabled=True) + long_body = "x" * 200 + mode.would_comment(795, long_body) + + actions = mode.recorded_actions + assert actions[0]["type"] == "comment" + assert actions[0]["pr_number"] == 795 + assert len(actions[0]["body"]) == 103 # 100 chars + "..." + + def test_would_push(self): + """Should record push actions.""" + mode = DryRunMode(enabled=True) + mode.would_push("feature/test") + + actions = mode.recorded_actions + assert actions[0]["type"] == "push" + assert actions[0]["branch"] == "feature/test" + + def test_get_summary_text_empty(self): + """Should handle no actions.""" + mode = DryRunMode(enabled=True) + + text = mode.get_summary_text() + + assert "No actions would be taken" in text + + def test_get_summary_text_with_actions(self): + """Should format actions summary.""" + mode = DryRunMode(enabled=True) + mode.would_commit("Fix bug", ["a.md"]) + mode.would_resolve("PRRT_123") + mode.would_push("main") + + text = mode.get_summary_text() + + assert "Actions that would be taken" in text + assert "1. commit" in text + assert "2. resolve_thread" in text + assert "3. 
push" in text + + def test_stop_after(self): + """Should track stop_after stage.""" + mode = DryRunMode(enabled=True, stop_after="consolidate") + assert mode.stop_after == "consolidate" diff --git a/.claude/agents/skill-pr-addresser/tests/test_feedback.py b/.claude/agents/skill-pr-addresser/tests/test_feedback.py new file mode 100644 index 0000000..87ead35 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_feedback.py @@ -0,0 +1,753 @@ +"""Tests for feedback analysis and fixing module.""" + +import json +import pytest +from pathlib import Path +from unittest.mock import patch, MagicMock + +import sys + +# Add agent directory to path +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.feedback import ( + FeedbackItem, + Location, + ActionGroup, + ExecutionStep, + AnalysisResult, + FixResult, + run_subagent, + analyze_feedback, + fix_feedback, + fix_action_group, + fix_batch, + fix_all_batches, + fix_with_escalation, + _extract_json, +) +from src.discovery import DiscoveryContext +from src.github_pr import PRDetails, Review, ReviewThread +from skill_agents_common.models import Model, SubagentResult + + +# --- Fixtures --- + + +@pytest.fixture +def mock_pr_details(): + """Sample PR details.""" + return PRDetails( + number=795, + title="feat(skills): improve lang-rust-dev", + url="https://github.com/aRustyDev/ai/pull/795", + state="OPEN", + branch="feat/lang-rust-dev", + review_decision="CHANGES_REQUESTED", + base_branch="main", + ) + + +@pytest.fixture +def mock_blocking_review(): + """Sample blocking review.""" + return Review( + author="reviewer1", + state="CHANGES_REQUESTED", + body="Please add error handling examples", + submitted_at="2024-01-15T10:00:00Z", + ) + + +@pytest.fixture +def mock_thread(): + """Sample unresolved thread.""" + return ReviewThread( + id="thread-123", + path="SKILL.md", + line=42, + is_resolved=False, + is_outdated=False, + comments=[ + { + "author": {"login": "reviewer1"}, + "body": "Add an example for Result", + "createdAt": "2024-01-15T10:05:00Z", + } + ], + ) + + +@pytest.fixture +def mock_worktree(): + """Sample worktree info.""" + wt = MagicMock() + wt.path = "/tmp/worktrees/pr-795" + return wt + + +@pytest.fixture +def mock_discovery_context(mock_pr_details, mock_blocking_review, mock_thread, mock_worktree): + """Sample discovery context.""" + return DiscoveryContext( + pr=mock_pr_details, + pr_number=795, + skill_path="components/skills/lang-rust-dev", + blocking_reviews=[mock_blocking_review], + unresolved_threads=[mock_thread], + worktree=mock_worktree, + ) + + +@pytest.fixture +def mock_analysis_result(): + """Sample analysis result.""" + return AnalysisResult( + feedback_items=[ + FeedbackItem( + id="thread-123", + type="change_request", + file="SKILL.md", + line=42, + description="Add an example for Result", + priority="high", + resolved=False, + ), + FeedbackItem( + id="comment-456", + type="nitpick", + file="examples/error.md", + line=10, + description="Fix typo: 'recieve' -> 'receive'", + priority="low", + resolved=False, + ), + ], + blocking_reviews=["reviewer1"], + approved_by=[], + summary="Needs error handling examples and minor fixes", + ) + + +@pytest.fixture +def agent_dir(tmp_path): + """Create a mock agent directory with sub-agent configs.""" + # Create feedback-analyzer + analyzer_dir = tmp_path / "subagents" / "feedback-analyzer" + analyzer_dir.mkdir(parents=True) + (analyzer_dir / "prompt.md").write_text("# Feedback Analyzer\nAnalyze feedback.") 
+ (analyzer_dir / "config.yml").write_text( + "model: claude-3-5-haiku-20241022\nallowed_tools: []\ntimeout: 120" + ) + + # Create feedback-fixer + fixer_dir = tmp_path / "subagents" / "feedback-fixer" + fixer_dir.mkdir(parents=True) + (fixer_dir / "prompt.md").write_text("# Feedback Fixer\nFix feedback.") + (fixer_dir / "config.yml").write_text( + "model: claude-sonnet-4-20250514\nallowed_tools: [Read, Write, Edit]\ntimeout: 600" + ) + + return tmp_path + + +# --- Tests for _extract_json --- + + +class TestExtractJson: + """Tests for JSON extraction from text.""" + + def test_extracts_direct_json(self): + """Should parse direct JSON.""" + text = '{"key": "value"}' + result = _extract_json(text) + assert result == {"key": "value"} + + def test_extracts_from_markdown_fence(self): + """Should extract JSON from markdown code fence.""" + text = 'Some text\n```json\n{"key": "value"}\n```\nMore text' + result = _extract_json(text) + assert result == {"key": "value"} + + def test_extracts_from_plain_fence(self): + """Should extract JSON from plain code fence.""" + text = 'Text\n```\n{"items": []}\n```' + result = _extract_json(text) + assert result == {"items": []} + + def test_returns_none_for_invalid(self): + """Should return None for non-JSON text.""" + text = "This is just plain text" + result = _extract_json(text) + assert result is None + + +# --- Tests for FeedbackItem --- + + +class TestFeedbackItem: + """Tests for FeedbackItem dataclass.""" + + def test_creates_item(self): + """Should create a feedback item.""" + item = FeedbackItem( + id="thread-123", + type="change_request", + file="SKILL.md", + line=42, + description="Add example", + priority="high", + resolved=False, + ) + assert item.id == "thread-123" + assert item.type == "change_request" + assert item.priority == "high" + + +# --- Tests for AnalysisResult --- + + +class TestAnalysisResult: + """Tests for AnalysisResult dataclass.""" + + def test_actionable_count(self, mock_analysis_result): + """Should count actionable items.""" + # 1 change_request + 1 nitpick = 2 actionable items + assert mock_analysis_result.actionable_count == 2 + + def test_has_blocking_feedback(self, mock_analysis_result): + """Should detect blocking feedback.""" + assert mock_analysis_result.has_blocking_feedback is True + + def test_no_blocking_when_approved(self): + """Should not be blocking when all resolved.""" + result = AnalysisResult( + feedback_items=[], + blocking_reviews=[], + approved_by=["approver1"], + summary="All good", + ) + assert result.has_blocking_feedback is False + + +# --- Tests for FixResult --- + + +class TestFixResult: + """Tests for FixResult dataclass.""" + + def test_success_rate_all_addressed(self): + """Should calculate 100% success rate.""" + result = FixResult( + addressed=[{"id": "1", "action": "Fixed"}, {"id": "2", "action": "Fixed"}], + skipped=[], + ) + assert result.success_rate == 1.0 + + def test_success_rate_partial(self): + """Should calculate partial success rate.""" + result = FixResult( + addressed=[{"id": "1", "action": "Fixed"}], + skipped=[{"id": "2", "reason": "Too complex"}], + ) + assert result.success_rate == 0.5 + + def test_success_rate_empty(self): + """Should return 1.0 for empty result.""" + result = FixResult() + assert result.success_rate == 1.0 + + +# --- Tests for run_subagent --- + + +class TestRunSubagent: + """Tests for run_subagent function.""" + + def test_returns_error_for_missing_prompt(self, tmp_path): + """Should return error if prompt file doesn't exist.""" + result, cost = 
run_subagent(tmp_path, "nonexistent", "task", tmp_path) + assert result.exit_code == 1 + assert "not found" in result.error + assert cost is None + + def test_runs_claude_command(self, agent_dir): + """Should run claude command with correct args.""" + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps({"result": '{"status": "ok"}'}) + mock_result.stderr = "" + + with patch("subprocess.run", return_value=mock_result) as mock_run: + result, cost = run_subagent(agent_dir, "feedback-analyzer", "test task", agent_dir) + + # Verify claude was called + mock_run.assert_called_once() + args = mock_run.call_args[0][0] + assert args[0] == "claude" + assert "--model" in args + assert "--print" in args + assert "--output-format" in args + + assert result.success is True + assert cost is not None + + +# --- Tests for analyze_feedback --- + + +class TestAnalyzeFeedback: + """Tests for analyze_feedback function.""" + + def test_extracts_feedback_items(self, agent_dir, mock_discovery_context): + """Should extract feedback items from PR.""" + mock_response = { + "result": json.dumps( + { + "feedback_items": [ + { + "id": "thread-123", + "type": "change_request", + "file": "SKILL.md", + "line": 42, + "description": "Add Result example", + "priority": "high", + "resolved": False, + } + ], + "blocking_reviews": ["reviewer1"], + "approved_by": [], + "summary": "Needs error handling", + } + ) + } + + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(mock_response) + mock_result.stderr = "" + + with patch("subprocess.run", return_value=mock_result): + result, cost = analyze_feedback(agent_dir, mock_discovery_context) + + assert len(result.feedback_items) == 1 + assert result.feedback_items[0].type == "change_request" + assert result.blocking_reviews == ["reviewer1"] + assert cost is not None + + def test_handles_failed_analysis(self, agent_dir, mock_discovery_context): + """Should handle failed sub-agent gracefully.""" + mock_result = MagicMock() + mock_result.returncode = 1 + mock_result.stdout = "" + mock_result.stderr = "Error" + + with patch("subprocess.run", return_value=mock_result): + result, cost = analyze_feedback(agent_dir, mock_discovery_context) + + assert len(result.feedback_items) == 0 + assert "failed" in result.summary.lower() + + +# --- Tests for fix_feedback --- + + +class TestFixFeedback: + """Tests for fix_feedback function.""" + + def test_fixes_actionable_items( + self, agent_dir, mock_discovery_context, mock_analysis_result + ): + """Should fix actionable feedback items.""" + mock_response = { + "result": json.dumps( + { + "addressed": [{"id": "thread-123", "action": "Added Result example"}], + "skipped": [], + "files_modified": ["SKILL.md"], + "lines_added": 15, + "lines_removed": 0, + } + ) + } + + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(mock_response) + mock_result.stderr = "" + + with patch("subprocess.run", return_value=mock_result): + result, cost = fix_feedback( + agent_dir, mock_discovery_context, mock_analysis_result, Model.SONNET_4 + ) + + assert len(result.addressed) == 1 + assert result.files_modified == ["SKILL.md"] + assert result.lines_added == 15 + assert cost is not None + + def test_returns_empty_for_no_actionable( + self, agent_dir, mock_discovery_context + ): + """Should return empty result if no actionable items.""" + empty_analysis = AnalysisResult( + feedback_items=[], + blocking_reviews=[], + approved_by=[], + summary="Nothing to do", + ) + + 
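+        # With no actionable feedback items, fix_feedback should return an empty result and no cost without calling the fixer sub-agent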
result, cost = fix_feedback( + agent_dir, mock_discovery_context, empty_analysis, Model.SONNET_4 + ) + + assert len(result.addressed) == 0 + assert len(result.skipped) == 0 + assert cost is None + + +# --- Tests for fix_with_escalation --- + + +class TestFixWithEscalation: + """Tests for fix_with_escalation function.""" + + def test_uses_haiku_for_simple_nitpicks( + self, agent_dir, mock_discovery_context + ): + """Should use Haiku for simple nitpick-only fixes.""" + simple_analysis = AnalysisResult( + feedback_items=[ + FeedbackItem( + id="nitpick-1", + type="nitpick", + file="SKILL.md", + line=10, + description="Fix typo", + priority="low", + resolved=False, + ) + ], + blocking_reviews=[], + approved_by=[], + summary="Simple typo fix", + ) + + mock_response = { + "result": json.dumps( + { + "addressed": [{"id": "nitpick-1", "action": "Fixed typo"}], + "skipped": [], + "files_modified": ["SKILL.md"], + "lines_added": 1, + "lines_removed": 1, + } + ) + } + + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(mock_response) + mock_result.stderr = "" + + with patch("subprocess.run", return_value=mock_result) as mock_run: + result, costs = fix_with_escalation(agent_dir, mock_discovery_context, simple_analysis) + + # Verify Haiku was used + args = mock_run.call_args[0][0] + assert Model.HAIKU_35.value in args + + assert len(result.addressed) == 1 + assert len(costs) == 1 # One call + + def test_uses_sonnet_for_change_requests( + self, agent_dir, mock_discovery_context, mock_analysis_result + ): + """Should use Sonnet for change requests.""" + mock_response = { + "result": json.dumps( + { + "addressed": [{"id": "thread-123", "action": "Added example"}], + "skipped": [], + "files_modified": ["SKILL.md"], + "lines_added": 20, + "lines_removed": 0, + } + ) + } + + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(mock_response) + mock_result.stderr = "" + + with patch("subprocess.run", return_value=mock_result) as mock_run: + result, costs = fix_with_escalation( + agent_dir, mock_discovery_context, mock_analysis_result + ) + + # Verify Sonnet was used + args = mock_run.call_args[0][0] + assert Model.SONNET_4.value in args + + assert len(result.addressed) == 1 + assert len(costs) == 1 # One call + + +# --- Tests for new ActionGroup format --- + + +class TestLocation: + """Tests for Location dataclass.""" + + def test_creates_location(self): + """Should create a location.""" + loc = Location(file="SKILL.md", line=42, thread_id="thread-123") + assert loc.file == "SKILL.md" + assert loc.line == 42 + assert loc.thread_id == "thread-123" + + def test_optional_fields(self): + """Should allow optional fields.""" + loc = Location(file="SKILL.md") + assert loc.file == "SKILL.md" + assert loc.line is None + assert loc.thread_id is None + + +class TestActionGroup: + """Tests for ActionGroup dataclass.""" + + def test_creates_action_group(self): + """Should create an action group.""" + group = ActionGroup( + id="group-1", + action="move_to_examples", + description="Move code blocks to examples/", + locations=[ + Location(file="SKILL.md", line=239, thread_id="thread-1"), + Location(file="SKILL.md", line=398, thread_id="thread-2"), + ], + priority="high", + type="change_request", + ) + assert group.id == "group-1" + assert group.action == "move_to_examples" + assert group.location_count == 2 + + def test_thread_ids(self): + """Should extract thread IDs.""" + group = ActionGroup( + id="group-1", + action="move_to_examples", + 
description="Move", + locations=[ + Location(file="SKILL.md", line=239, thread_id="thread-1"), + Location(file="SKILL.md", line=398, thread_id="thread-2"), + Location(file="SKILL.md", line=500), # No thread_id + ], + priority="high", + type="change_request", + ) + assert group.thread_ids == ["thread-1", "thread-2"] + + +class TestAnalysisResultWithActionGroups: + """Tests for AnalysisResult with new action_groups format.""" + + @pytest.fixture + def analysis_with_groups(self): + """Sample analysis with action groups.""" + return AnalysisResult( + guidance=["Follow progressive disclosure patterns"], + action_groups=[ + ActionGroup( + id="group-1", + action="move_to_examples", + description="Move code blocks to examples/", + locations=[ + Location(file="SKILL.md", line=239, thread_id="t1"), + Location(file="SKILL.md", line=398, thread_id="t2"), + ], + priority="high", + type="change_request", + ), + ActionGroup( + id="group-2", + action="move_to_references", + description="Move explanations to reference/", + locations=[Location(file="SKILL.md", line=692, thread_id="t3")], + priority="high", + type="change_request", + ), + ], + execution_plan=[ + ExecutionStep(order=1, group_id="group-1", rationale="Most locations"), + ExecutionStep(order=2, group_id="group-2", rationale="Related change"), + ], + blocking_reviews=["reviewer1"], + approved_by=[], + summary="Move content to examples and references", + ) + + def test_actionable_count_with_groups(self, analysis_with_groups): + """Should count action groups.""" + assert analysis_with_groups.actionable_count == 2 + + def test_has_blocking_with_groups(self, analysis_with_groups): + """Should detect blocking from action groups.""" + assert analysis_with_groups.has_blocking_feedback is True + + def test_ordered_groups(self, analysis_with_groups): + """Should order groups by execution plan.""" + ordered = analysis_with_groups.ordered_groups + assert ordered[0].id == "group-1" + assert ordered[1].id == "group-2" + + def test_get_batch(self, analysis_with_groups): + """Should get batch of groups.""" + batch = analysis_with_groups.get_batch(0, batch_size=1) + assert len(batch) == 1 + assert batch[0].id == "group-1" + + def test_batch_count(self, analysis_with_groups): + """Should calculate batch count.""" + # 2 groups / 3 per batch = 1 batch (ceiling) + assert analysis_with_groups.batch_count == 1 + + +class TestAnalyzeFeedbackNewFormat: + """Tests for analyze_feedback with new format.""" + + def test_parses_action_groups(self, agent_dir, mock_discovery_context): + """Should parse action groups from analyzer output.""" + mock_response = { + "result": json.dumps({ + "guidance": ["Use progressive disclosure"], + "action_groups": [ + { + "id": "group-1", + "action": "move_to_examples", + "description": "Move code to examples/", + "locations": [ + {"file": "SKILL.md", "line": 239, "thread_id": "t1"}, + ], + "priority": "high", + "type": "change_request", + } + ], + "execution_plan": [ + {"order": 1, "group_id": "group-1", "rationale": "First"}, + ], + "blocking_reviews": ["reviewer1"], + "approved_by": [], + "summary": "Consolidated feedback", + }) + } + + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(mock_response) + mock_result.stderr = "" + + with patch("subprocess.run", return_value=mock_result): + result, cost = analyze_feedback(agent_dir, mock_discovery_context) + + assert len(result.guidance) == 1 + assert len(result.action_groups) == 1 + assert result.action_groups[0].action == "move_to_examples" + assert 
len(result.execution_plan) == 1 + + +class TestFixActionGroup: + """Tests for fix_action_group function.""" + + def test_fixes_single_group(self, agent_dir, mock_discovery_context): + """Should fix a single action group.""" + group = ActionGroup( + id="group-1", + action="move_to_examples", + description="Move code blocks", + locations=[ + Location(file="SKILL.md", line=239, thread_id="t1"), + Location(file="SKILL.md", line=398, thread_id="t2"), + ], + priority="high", + type="change_request", + ) + guidance = ["Follow progressive disclosure"] + + mock_response = { + "result": json.dumps({ + "addressed": [{"id": "group-1", "action": "Moved 2 code blocks", "locations_fixed": 2}], + "skipped": [], + "files_modified": ["SKILL.md", "examples/tracing.ts"], + "lines_added": 50, + "lines_removed": 80, + }) + } + + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(mock_response) + mock_result.stderr = "" + + with patch("subprocess.run", return_value=mock_result): + result, cost = fix_action_group( + agent_dir, mock_discovery_context, group, guidance, Model.SONNET_4 + ) + + assert len(result.addressed) == 1 + assert result.lines_removed == 80 + assert "examples/tracing.ts" in result.files_modified + + +class TestFixAllBatches: + """Tests for fix_all_batches function.""" + + def test_processes_all_batches(self, agent_dir, mock_discovery_context): + """Should process all action groups in batches.""" + analysis = AnalysisResult( + guidance=["Be concise"], + action_groups=[ + ActionGroup( + id=f"group-{i}", + action="move_to_examples", + description=f"Move block {i}", + locations=[Location(file="SKILL.md", line=100 + i * 50)], + priority="high", + type="change_request", + ) + for i in range(4) # 4 groups -> 2 batches of 3 + ], + execution_plan=[], + blocking_reviews=[], + approved_by=[], + summary="4 items", + ) + + mock_response = { + "result": json.dumps({ + "addressed": [{"id": "group-x", "action": "Fixed"}], + "skipped": [], + "files_modified": ["SKILL.md"], + "lines_added": 10, + "lines_removed": 20, + }) + } + + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(mock_response) + mock_result.stderr = "" + + with patch("subprocess.run", return_value=mock_result): + result, costs = fix_all_batches( + agent_dir, mock_discovery_context, analysis, batch_size=3, model=Model.SONNET_4 + ) + + # Should have called fixer 4 times (one per group) + assert len(result.addressed) == 4 + assert len(costs) == 4 diff --git a/.claude/agents/skill-pr-addresser/tests/test_filter.py b/.claude/agents/skill-pr-addresser/tests/test_filter.py new file mode 100644 index 0000000..d3f1747 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_filter.py @@ -0,0 +1,405 @@ +# tests/test_filter.py +"""Tests for filter stage. + +Stage 9 tests for delta detection. 
+""" + +import pytest +from datetime import datetime, timezone +from unittest.mock import MagicMock +import sys +from pathlib import Path + +# Add agent directory to path for imports +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.filter import ( + filter_feedback, + FilteredFeedback, + FilteredThread, + _is_new_or_changed, + _get_new_thread_comments, +) +from src.session_schema import FeedbackState, AddressedItem, ThreadState +from src.models import ( + ReviewFeedback, + CommentFeedback, + ThreadFeedback, + ThreadComment, + RawFeedback, +) + + +class MockSession: + """Mock session for testing.""" + + def __init__(self): + self.results = {} + + +# ============================================================================= +# _is_new_or_changed Tests +# ============================================================================= + + +class TestIsNewOrChanged: + def test_new_item_returns_true(self): + """Items not in state are new.""" + item = ReviewFeedback( + id="R_123", + state="CHANGES_REQUESTED", + body="Fix this", + author="reviewer", + submitted_at=datetime.now(timezone.utc), + ) + state = FeedbackState() + assert _is_new_or_changed(item, state) is True + + def test_unchanged_item_returns_false(self): + """Items with matching hash are unchanged.""" + item = ReviewFeedback( + id="R_123", + state="CHANGES_REQUESTED", + body="Fix this", + author="reviewer", + submitted_at=datetime.now(timezone.utc), + ) + state = FeedbackState() + state.mark_addressed("R_123", item.content_hash, "abc123", 1) + assert _is_new_or_changed(item, state) is False + + def test_updated_item_returns_true(self): + """Items with different hash are updated (#796).""" + item = ReviewFeedback( + id="R_123", + state="CHANGES_REQUESTED", + body="Fix this - UPDATED", # Changed content + author="reviewer", + submitted_at=datetime.now(timezone.utc), + ) + state = FeedbackState() + state.addressed["R_123"] = AddressedItem( + id="R_123", + content_hash="sha256:oldhash", # Old hash + addressed_at=datetime.now(timezone.utc), + addressed_in_commit="abc123", + iteration=1, + ) + assert _is_new_or_changed(item, state) is True + + def test_comment_feedback(self): + """Should work with CommentFeedback too.""" + item = CommentFeedback( + id="IC_123", + body="Please add tests", + author="reviewer", + created_at=datetime.now(timezone.utc), + ) + state = FeedbackState() + assert _is_new_or_changed(item, state) is True + + # Mark as addressed + state.mark_addressed("IC_123", item.content_hash, "abc123", 1) + assert _is_new_or_changed(item, state) is False + + +# ============================================================================= +# _get_new_thread_comments Tests +# ============================================================================= + + +class TestGetNewThreadComments: + def _make_thread(self, comments: list[tuple[str, str, str]]) -> ThreadFeedback: + """Helper to create threads with comments.""" + return ThreadFeedback( + id="PRRT_123", + path="SKILL.md", + line=42, + is_resolved=False, + is_outdated=False, + comments=[ + ThreadComment( + id=cid, + body=body, + author=author, + created_at=datetime.now(timezone.utc), + ) + for cid, body, author in comments + ], + ) + + def test_all_new_when_no_state(self): + """All comments are new when no thread state exists.""" + thread = self._make_thread([ + ("c1", "Fix this", "reviewer"), + ("c2", "Done", "author"), + ]) + result = _get_new_thread_comments(thread, None, "author") + assert 
len(result) == 2 + + def test_filters_processed_comments(self): + """Should exclude already processed comments.""" + thread = self._make_thread([ + ("c1", "Fix this", "reviewer"), + ("c2", "Working on it", "author"), + ("c3", "Done!", "author"), + ]) + state = ThreadState( + thread_id="PRRT_123", + last_seen_comment_id="c1", + comments_processed=["c1"], + last_processed_at=datetime.now(timezone.utc), + ) + result = _get_new_thread_comments(thread, state, "author") + assert len(result) == 2 + assert result[0].id == "c2" + assert result[1].id == "c3" + + def test_empty_when_all_processed(self): + """Should return empty when all comments processed.""" + thread = self._make_thread([ + ("c1", "Fix this", "reviewer"), + ]) + state = ThreadState( + thread_id="PRRT_123", + last_seen_comment_id="c1", + comments_processed=["c1"], + last_processed_at=datetime.now(timezone.utc), + ) + result = _get_new_thread_comments(thread, state, "author") + assert len(result) == 0 + + +# ============================================================================= +# filter_feedback Tests +# ============================================================================= + + +class TestFilterFeedback: + def test_filters_resolved_threads(self): + """Resolved threads are skipped.""" + raw = RawFeedback( + threads=[ + ThreadFeedback( + id="PRRT_123", + path="SKILL.md", + line=42, + is_resolved=True, + is_outdated=False, + comments=[ + ThreadComment( + id="c1", + body="Fix this", + author="reviewer", + created_at=datetime.now(timezone.utc), + ) + ], + ) + ] + ) + session = MockSession() + result = filter_feedback(raw, session, "pr_author") + + assert len(result.threads) == 0 + assert "PRRT_123" in result.skipped_resolved + + def test_filters_outdated_threads(self): + """Outdated threads are skipped.""" + raw = RawFeedback( + threads=[ + ThreadFeedback( + id="PRRT_123", + path="SKILL.md", + line=42, + is_resolved=False, + is_outdated=True, + comments=[ + ThreadComment( + id="c1", + body="Fix this", + author="reviewer", + created_at=datetime.now(timezone.utc), + ) + ], + ) + ] + ) + session = MockSession() + result = filter_feedback(raw, session, "pr_author") + + assert len(result.threads) == 0 + assert "PRRT_123" in result.skipped_outdated + + def test_includes_new_thread_comments(self): + """New thread comments are included.""" + raw = RawFeedback( + threads=[ + ThreadFeedback( + id="PRRT_123", + path="SKILL.md", + line=42, + is_resolved=False, + is_outdated=False, + comments=[ + ThreadComment( + id="c1", + body="Fix this", + author="reviewer", + created_at=datetime.now(timezone.utc), + ), + ThreadComment( + id="c2", + body="New reply", + author="someone", + created_at=datetime.now(timezone.utc), + ), + ], + ) + ] + ) + session = MockSession() + # Mark c1 as already processed + session.results["feedback_state"] = { + "addressed": {}, + "threads": { + "PRRT_123": { + "thread_id": "PRRT_123", + "comments_processed": ["c1"], + "last_seen_comment_id": "c1", + "last_processed_at": datetime.now(timezone.utc).isoformat(), + } + }, + "last_run": None, + } + result = filter_feedback(raw, session, "pr_author") + + assert len(result.threads) == 1 + assert len(result.threads[0].new_comments) == 1 + assert result.threads[0].new_comments[0].id == "c2" + + def test_detects_author_response(self): + """Should detect when PR author responds with resolution signal.""" + raw = RawFeedback( + threads=[ + ThreadFeedback( + id="PRRT_123", + path="SKILL.md", + line=42, + is_resolved=False, + is_outdated=False, + comments=[ + ThreadComment( + 
id="c1", + body="Fix this", + author="reviewer", + created_at=datetime.now(timezone.utc), + ), + ThreadComment( + id="c2", + body="Done!", + author="pr_author", + created_at=datetime.now(timezone.utc), + ), + ], + ) + ] + ) + session = MockSession() + result = filter_feedback(raw, session, "pr_author") + + assert len(result.threads) == 1 + assert result.threads[0].has_author_response is True + + def test_skips_reviewer_withdrawal(self): + """Should skip threads where reviewer withdrew feedback.""" + raw = RawFeedback( + threads=[ + ThreadFeedback( + id="PRRT_123", + path="SKILL.md", + line=42, + is_resolved=False, + is_outdated=False, + comments=[ + ThreadComment( + id="c1", + body="Fix this", + author="reviewer", + created_at=datetime.now(timezone.utc), + ), + ThreadComment( + id="c2", + body="Actually, never mind", + author="reviewer", + created_at=datetime.now(timezone.utc), + ), + ], + ) + ] + ) + session = MockSession() + result = filter_feedback(raw, session, "pr_author") + + assert len(result.threads) == 0 + assert "PRRT_123" in result.skipped_resolved + + def test_is_empty_property(self): + """is_empty should be True when no feedback.""" + result = FilteredFeedback() + assert result.is_empty is True + + result.reviews.append(MagicMock()) + assert result.is_empty is False + + def test_summary(self): + """summary should return counts.""" + result = FilteredFeedback( + reviews=[MagicMock()], + comments=[MagicMock(), MagicMock()], + skipped_unchanged=["a", "b"], + skipped_resolved=["c"], + ) + summary = result.summary() + assert summary["new_reviews"] == 1 + assert summary["new_comments"] == 2 + assert summary["skipped_unchanged"] == 2 + assert summary["skipped_resolved"] == 1 + + +# ============================================================================= +# FilteredThread Tests +# ============================================================================= + + +class TestFilteredThread: + def test_to_consolidation_dict(self): + """Should format thread for LLM consolidation.""" + thread = ThreadFeedback( + id="PRRT_123", + path="SKILL.md", + line=42, + is_resolved=False, + is_outdated=False, + comments=[ + ThreadComment( + id="c1", + body="Fix this", + author="reviewer", + created_at=datetime.now(timezone.utc), + ), + ], + ) + filtered = FilteredThread( + thread=thread, + new_comments=thread.comments, + has_author_response=False, + ) + + d = filtered.to_consolidation_dict() + assert d["id"] == "PRRT_123" + assert d["path"] == "SKILL.md" + assert d["line"] == 42 + assert len(d["new_comments"]) == 1 + assert d["has_author_response"] is False diff --git a/.claude/agents/skill-pr-addresser/tests/test_github_pr.py b/.claude/agents/skill-pr-addresser/tests/test_github_pr.py new file mode 100644 index 0000000..c077b0d --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_github_pr.py @@ -0,0 +1,419 @@ +"""Tests for github_pr module.""" + +import json +import pytest +from pathlib import Path +from unittest.mock import patch, MagicMock + +import sys + +# Add agent directory to path +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.github_pr import ( + PRDetails, + Review, + Comment, + ReviewThread, + PendingFeedback, + get_pr_details, + get_pr_reviews, + get_pr_comments, + get_review_threads, + get_pending_feedback, + infer_skill_from_files, + find_prs_with_feedback, +) + + +# --- Test Data --- + + +SAMPLE_PR_JSON = { + "number": 795, + "title": "feat(skills): add lang-rust-dev skill", + "url": 
"https://github.com/aRustyDev/ai/pull/795", + "state": "OPEN", + "headRefName": "feat/lang-rust-dev", + "body": "Closes #123", + "isDraft": False, + "mergeable": "MERGEABLE", + "reviewDecision": "CHANGES_REQUESTED", + "baseRefName": "main", + "headRefOid": "abc123def456", + "files": [ + {"path": "components/skills/lang-rust-dev/SKILL.md"}, + {"path": "components/skills/lang-rust-dev/examples/ownership.md"}, + ], +} + + +SAMPLE_REVIEWS_JSON = { + "reviews": [ + { + "author": {"login": "reviewer1"}, + "state": "CHANGES_REQUESTED", + "body": "Please add more examples", + "submittedAt": "2024-01-15T10:00:00Z", + }, + { + "author": {"login": "reviewer2"}, + "state": "APPROVED", + "body": "LGTM", + "submittedAt": "2024-01-15T11:00:00Z", + }, + ] +} + + +SAMPLE_COMMENTS_JSON = { + "comments": [ + { + "author": {"login": "commenter1"}, + "body": "Great work!", + "createdAt": "2024-01-15T09:00:00Z", + "url": "https://github.com/aRustyDev/ai/pull/795#issuecomment-1", + } + ] +} + + +SAMPLE_THREADS_GRAPHQL = { + "data": { + "repository": { + "pullRequest": { + "reviewThreads": { + "nodes": [ + { + "id": "thread_1", + "path": "components/skills/lang-rust-dev/SKILL.md", + "line": 42, + "isResolved": False, + "isOutdated": False, + "comments": { + "nodes": [ + { + "author": {"login": "reviewer1"}, + "body": "Missing ownership section", + "createdAt": "2024-01-15T10:05:00Z", + } + ] + }, + }, + { + "id": "thread_2", + "path": "components/skills/lang-rust-dev/SKILL.md", + "line": 100, + "isResolved": True, + "isOutdated": False, + "comments": { + "nodes": [ + { + "author": {"login": "reviewer1"}, + "body": "Fixed typo", + "createdAt": "2024-01-15T10:10:00Z", + } + ] + }, + }, + ] + } + } + } + } +} + + +# --- Tests --- + + +class TestGetPRDetails: + """Tests for get_pr_details function.""" + + def test_returns_pr_details_on_success(self): + """Should return PRDetails when gh command succeeds.""" + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(SAMPLE_PR_JSON) + + with patch("subprocess.run", return_value=mock_result): + pr = get_pr_details("aRustyDev", "ai", 795) + + assert pr is not None + assert pr.number == 795 + assert pr.title == "feat(skills): add lang-rust-dev skill" + assert pr.state == "OPEN" + assert pr.branch == "feat/lang-rust-dev" + assert pr.review_decision == "CHANGES_REQUESTED" + assert len(pr.changed_files) == 2 + + def test_returns_none_on_failure(self): + """Should return None when gh command fails.""" + mock_result = MagicMock() + mock_result.returncode = 1 + mock_result.stdout = "" + + with patch("subprocess.run", return_value=mock_result): + pr = get_pr_details("aRustyDev", "ai", 99999) + + assert pr is None + + +class TestGetPRReviews: + """Tests for get_pr_reviews function.""" + + def test_returns_reviews(self): + """Should return list of Review objects.""" + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(SAMPLE_REVIEWS_JSON) + + with patch("subprocess.run", return_value=mock_result): + reviews = get_pr_reviews("aRustyDev", "ai", 795) + + assert len(reviews) == 2 + assert reviews[0].author == "reviewer1" + assert reviews[0].state == "CHANGES_REQUESTED" + assert reviews[1].author == "reviewer2" + assert reviews[1].state == "APPROVED" + + def test_returns_empty_on_failure(self): + """Should return empty list when gh command fails.""" + mock_result = MagicMock() + mock_result.returncode = 1 + mock_result.stdout = "" + + with patch("subprocess.run", return_value=mock_result): + reviews = 
get_pr_reviews("aRustyDev", "ai", 795) + + assert reviews == [] + + +class TestGetReviewThreads: + """Tests for get_review_threads function.""" + + def test_returns_threads(self): + """Should return list of ReviewThread objects.""" + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(SAMPLE_THREADS_GRAPHQL) + + with patch("subprocess.run", return_value=mock_result): + threads = get_review_threads("aRustyDev", "ai", 795) + + assert len(threads) == 2 + assert threads[0].id == "thread_1" + assert threads[0].is_resolved is False + assert threads[0].author == "reviewer1" + assert threads[1].id == "thread_2" + assert threads[1].is_resolved is True + + +class TestGetPendingFeedback: + """Tests for get_pending_feedback function.""" + + def test_filters_to_pending_only(self): + """Should return PendingFeedback with categorized feedback.""" + mock_reviews_result = MagicMock() + mock_reviews_result.returncode = 0 + mock_reviews_result.stdout = json.dumps(SAMPLE_REVIEWS_JSON) + + mock_comments_result = MagicMock() + mock_comments_result.returncode = 0 + mock_comments_result.stdout = json.dumps(SAMPLE_COMMENTS_JSON) + + mock_threads_result = MagicMock() + mock_threads_result.returncode = 0 + mock_threads_result.stdout = json.dumps(SAMPLE_THREADS_GRAPHQL) + + def side_effect(args, **kwargs): + if "--json" in args: + json_idx = args.index("--json") + json_fields = args[json_idx + 1] + if "reviews" in json_fields and "files" not in json_fields: + return mock_reviews_result + elif "comments" in json_fields: + return mock_comments_result + elif "graphql" in args: + return mock_threads_result + return mock_reviews_result + + with patch("subprocess.run", side_effect=side_effect): + feedback = get_pending_feedback("aRustyDev", "ai", 795) + + # Should return a PendingFeedback object + assert isinstance(feedback, PendingFeedback) + + # Only CHANGES_REQUESTED reviews in blocking + assert len(feedback.blocking_reviews) == 1 + assert feedback.blocking_reviews[0].state == "CHANGES_REQUESTED" + + # Only unresolved, non-outdated threads + assert len(feedback.unresolved_threads) == 1 + assert feedback.unresolved_threads[0].id == "thread_1" + + +class TestInferSkillFromFiles: + """Tests for infer_skill_from_files function.""" + + def test_infers_from_skill_path(self): + """Should infer skill path from changed files.""" + files = [ + "components/skills/lang-rust-dev/SKILL.md", + "components/skills/lang-rust-dev/examples/ownership.md", + ] + assert infer_skill_from_files(files) == "components/skills/lang-rust-dev" + + def test_returns_none_for_non_skill_files(self): + """Should return None when no skill files changed.""" + files = [ + "README.md", + ".github/workflows/ci.yml", + ] + assert infer_skill_from_files(files) is None + + def test_handles_mixed_files(self): + """Should find skill even with mixed file types.""" + files = [ + "README.md", + "components/skills/lang-rust-dev/SKILL.md", + ".github/workflows/ci.yml", + ] + assert infer_skill_from_files(files) == "components/skills/lang-rust-dev" + + +class TestReviewThread: + """Tests for ReviewThread dataclass.""" + + def test_first_comment_property(self): + """Should return first comment when available.""" + thread = ReviewThread( + id="thread_1", + path="test.md", + line=10, + is_resolved=False, + is_outdated=False, + comments=[{"author": {"login": "user1"}, "body": "Comment 1"}], + ) + assert thread.first_comment is not None + assert thread.first_comment["body"] == "Comment 1" + + def test_first_comment_none_when_empty(self): + 
"""Should return None when no comments.""" + thread = ReviewThread( + id="thread_1", + path="test.md", + line=10, + is_resolved=False, + is_outdated=False, + comments=[], + ) + assert thread.first_comment is None + + def test_author_property(self): + """Should extract author from first comment.""" + thread = ReviewThread( + id="thread_1", + path="test.md", + line=10, + is_resolved=False, + is_outdated=False, + comments=[{"author": {"login": "reviewer1"}, "body": "Comment"}], + ) + assert thread.author == "reviewer1" + + +# --- Test Data for Batch --- + + +SAMPLE_PR_LIST_JSON = [ + { + "number": 795, + "title": "feat(skills): add lang-rust-dev skill", + "reviewDecision": "CHANGES_REQUESTED", + "reviewRequests": [], + "reviews": [ + {"author": {"login": "reviewer1"}, "state": "CHANGES_REQUESTED"}, + ], + }, + { + "number": 800, + "title": "feat(skills): add lang-go-dev skill", + "reviewDecision": "APPROVED", + "reviewRequests": [], + "reviews": [ + {"author": {"login": "reviewer2"}, "state": "APPROVED"}, + ], + }, + { + "number": 805, + "title": "fix(skills): update examples", + "reviewDecision": "CHANGES_REQUESTED", + "reviewRequests": [], + "reviews": [ + {"author": {"login": "reviewer1"}, "state": "CHANGES_REQUESTED"}, + {"author": {"login": "reviewer2"}, "state": "CHANGES_REQUESTED"}, + ], + }, +] + + +class TestFindPRsWithFeedback: + """Tests for find_prs_with_feedback function.""" + + def test_finds_prs_needing_changes(self): + """Should return PRs with CHANGES_REQUESTED reviews.""" + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(SAMPLE_PR_LIST_JSON) + + with patch("subprocess.run", return_value=mock_result): + prs = find_prs_with_feedback("aRustyDev", "ai") + + # Should only return PRs with blocking reviews + assert len(prs) == 2 + assert prs[0]["pr_number"] == 795 + assert prs[1]["pr_number"] == 805 + + def test_includes_blocking_reviewers(self): + """Should include list of blocking reviewers.""" + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps(SAMPLE_PR_LIST_JSON) + + with patch("subprocess.run", return_value=mock_result): + prs = find_prs_with_feedback("aRustyDev", "ai") + + # PR 805 has two blocking reviewers + pr_805 = next(p for p in prs if p["pr_number"] == 805) + assert len(pr_805["blocking_reviewers"]) == 2 + assert "reviewer1" in pr_805["blocking_reviewers"] + assert "reviewer2" in pr_805["blocking_reviewers"] + + def test_returns_empty_on_failure(self): + """Should return empty list when gh command fails.""" + mock_result = MagicMock() + mock_result.returncode = 1 + mock_result.stdout = "" + + with patch("subprocess.run", return_value=mock_result): + prs = find_prs_with_feedback("aRustyDev", "ai") + + assert prs == [] + + def test_filters_by_labels(self): + """Should pass labels to gh pr list command.""" + mock_result = MagicMock() + mock_result.returncode = 0 + mock_result.stdout = json.dumps([]) + + with patch("subprocess.run", return_value=mock_result) as mock_run: + find_prs_with_feedback("aRustyDev", "ai", labels=["skills", "review"]) + + # Check that --label flags were passed + call_args = mock_run.call_args[0][0] + assert "--label" in call_args + assert "skills" in call_args + assert "review" in call_args diff --git a/.claude/agents/skill-pr-addresser/tests/test_hooks.py b/.claude/agents/skill-pr-addresser/tests/test_hooks.py new file mode 100644 index 0000000..7b8460a --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_hooks.py @@ -0,0 +1,555 @@ +# tests/test_hooks.py 
+"""Tests for the Cement hooks framework. + +Stage 11 tests for pipeline hook system. +""" + +import pytest +from datetime import datetime, timezone +from pathlib import Path +from unittest.mock import MagicMock, patch +import sys + +# Add src to path for imports +sys.path.insert(0, str(Path(__file__).parent.parent / "src")) + +from hooks import ( + PIPELINE_HOOKS, + HookContext, + HookResult, + HookRegistry, + define_pipeline_hooks, + run_hook, + log_stage_start, + log_stage_end, + check_cancelled, + rate_limit_handler, + error_handler, + register_logging_hooks, + register_cancellation_hooks, + register_error_handlers, + get_hook_definitions, + get_default_hooks, +) + + +# ============================================================================= +# HookContext Tests +# ============================================================================= + + +class TestHookContext: + def test_basic_creation(self): + """Should create context with required fields.""" + ctx = HookContext(pr_number=795) + + assert ctx.pr_number == 795 + assert ctx.iteration == 1 + assert ctx.stage == "" + assert ctx.data == {} + assert ctx.dry_run is False + assert ctx.cancelled is False + + def test_with_all_fields(self): + """Should create context with all fields.""" + ctx = HookContext( + pr_number=795, + iteration=3, + stage="discovery", + data={"key": "value"}, + metadata={"custom": "data"}, + dry_run=True, + cancelled=False, + ) + + assert ctx.iteration == 3 + assert ctx.stage == "discovery" + assert ctx.data["key"] == "value" + assert ctx.metadata["custom"] == "data" + assert ctx.dry_run is True + + def test_record_timestamp(self): + """Should record timestamps.""" + ctx = HookContext(pr_number=795) + ctx.record_timestamp("discovery_start") + + ts = ctx.get_timestamp("discovery_start") + assert ts is not None + assert "T" in ts # ISO format + + def test_get_missing_timestamp(self): + """Should return None for missing timestamp.""" + ctx = HookContext(pr_number=795) + + assert ctx.get_timestamp("nonexistent") is None + + def test_serialization_roundtrip(self): + """Should serialize and deserialize correctly.""" + ctx = HookContext( + pr_number=795, + iteration=2, + stage="fix", + data={"action_group": "g1"}, + dry_run=True, + ) + ctx.record_timestamp("test_event") + + data = ctx.to_dict() + restored = HookContext.from_dict(data) + + assert restored.pr_number == 795 + assert restored.iteration == 2 + assert restored.stage == "fix" + assert restored.data["action_group"] == "g1" + assert restored.dry_run is True + assert restored.get_timestamp("test_event") is not None + + +# ============================================================================= +# HookResult Tests +# ============================================================================= + + +class TestHookResult: + def test_basic_creation(self): + """Should create result with required fields.""" + result = HookResult( + hook_name="pre_discovery", + function_name="my_hook", + ) + + assert result.hook_name == "pre_discovery" + assert result.function_name == "my_hook" + assert result.success is True + assert result.duration_ms == 0.0 + assert result.error is None + assert result.output is None + + def test_with_error(self): + """Should track errors.""" + result = HookResult( + hook_name="pre_fix", + function_name="broken_hook", + success=False, + error="Something went wrong", + ) + + assert result.success is False + assert result.error == "Something went wrong" + + def test_to_dict(self): + """Should serialize to dictionary.""" + result = 
HookResult( + hook_name="post_commit", + function_name="log_commit", + success=True, + duration_ms=5.5, + output={"committed": True}, + ) + + data = result.to_dict() + assert data["hook_name"] == "post_commit" + assert data["function_name"] == "log_commit" + assert data["success"] is True + assert data["duration_ms"] == 5.5 + assert "committed" in data["output"] + + +# ============================================================================= +# HookRegistry Tests +# ============================================================================= + + +class TestHookRegistry: + def test_define_hook(self): + """Should define a hook point.""" + registry = HookRegistry() + registry.define("my_hook") + + assert registry.defined("my_hook") + assert not registry.defined("other_hook") + + def test_register_function(self): + """Should register function to hook.""" + registry = HookRegistry() + + def my_func(ctx): + return "called" + + registry.register("my_hook", my_func) + + funcs = registry.list("my_hook") + assert "my_func" in funcs + + def test_register_auto_defines(self): + """Should auto-define hook when registering.""" + registry = HookRegistry() + + def my_func(ctx): + pass + + registry.register("auto_defined", my_func) + + assert registry.defined("auto_defined") + + def test_run_hooks(self): + """Should run all registered functions.""" + registry = HookRegistry() + called = [] + + def hook1(ctx): + called.append("hook1") + return 1 + + def hook2(ctx): + called.append("hook2") + return 2 + + registry.register("test", hook1) + registry.register("test", hook2) + + results = list(registry.run("test", None)) + + assert len(results) == 2 + assert called == ["hook1", "hook2"] + assert results[0].output == 1 + assert results[1].output == 2 + + def test_run_with_weight_ordering(self): + """Should run hooks in weight order (higher first).""" + registry = HookRegistry() + order = [] + + def low_priority(ctx): + order.append("low") + + def high_priority(ctx): + order.append("high") + + def medium_priority(ctx): + order.append("medium") + + registry.register("test", low_priority, weight=0) + registry.register("test", high_priority, weight=100) + registry.register("test", medium_priority, weight=50) + + list(registry.run("test", None)) + + assert order == ["high", "medium", "low"] + + def test_run_handles_errors(self): + """Should catch and report errors in hooks.""" + registry = HookRegistry() + + def broken_hook(ctx): + raise ValueError("Hook exploded") + + registry.register("test", broken_hook) + + results = list(registry.run("test", None)) + + assert len(results) == 1 + assert results[0].success is False + assert "Hook exploded" in results[0].error + + def test_run_nonexistent_hook(self): + """Should return empty for undefined hook.""" + registry = HookRegistry() + + results = list(registry.run("nonexistent", None)) + + assert results == [] + + def test_list_all_hooks(self): + """Should list all defined hooks.""" + registry = HookRegistry() + registry.define("hook1") + registry.define("hook2") + + hooks = registry.list() + + assert "hook1" in hooks + assert "hook2" in hooks + + def test_tracks_duration(self): + """Should track execution duration.""" + import time + + registry = HookRegistry() + + def slow_hook(ctx): + time.sleep(0.01) + return "done" + + registry.register("test", slow_hook) + + results = list(registry.run("test", None)) + + assert results[0].duration_ms >= 10 # At least 10ms + + +# ============================================================================= +# Cement Integration 
Tests +# ============================================================================= + + +class TestDefinePipelineHooks: + def test_defines_all_hooks(self): + """Should define all pipeline hooks.""" + app = MagicMock() + app.hook.defined.return_value = False + + define_pipeline_hooks(app) + + # Should have defined each hook + define_calls = [call[0][0] for call in app.hook.define.call_args_list] + for hook_name in PIPELINE_HOOKS: + assert hook_name in define_calls + + def test_skips_already_defined(self): + """Should skip hooks that are already defined.""" + app = MagicMock() + app.hook.defined.side_effect = lambda name: name == "pre_discovery" + + define_pipeline_hooks(app) + + define_calls = [call[0][0] for call in app.hook.define.call_args_list] + assert "pre_discovery" not in define_calls + + +class TestRunHook: + def test_collects_results(self): + """Should collect all hook results.""" + app = MagicMock() + app.hook.defined.return_value = True + app.hook.run.return_value = iter(["result1", "result2"]) + + ctx = HookContext(pr_number=795) + results = run_hook(app, "test_hook", ctx) + + assert len(results) == 2 + assert results[0].output == "result1" + + def test_returns_empty_for_undefined(self): + """Should return empty list for undefined hook.""" + app = MagicMock() + app.hook.defined.return_value = False + + ctx = HookContext(pr_number=795) + results = run_hook(app, "undefined", ctx) + + assert results == [] + + def test_wraps_hook_results(self): + """Should wrap raw results in HookResult.""" + app = MagicMock() + app.hook.defined.return_value = True + + # If hook returns HookResult directly, use it + hr = HookResult(hook_name="test", function_name="fn", output="direct") + app.hook.run.return_value = iter([hr]) + + ctx = HookContext(pr_number=795) + results = run_hook(app, "test", ctx) + + assert results[0] is hr + + +# ============================================================================= +# Built-in Hook Function Tests +# ============================================================================= + + +class TestLogStageHooks: + def test_log_stage_start(self): + """Should log stage start and record timestamp.""" + app = MagicMock() + ctx = HookContext(pr_number=795, iteration=2, stage="discovery") + + log_stage_start(app, ctx) + + app.log.debug.assert_called_once() + assert ctx.get_timestamp("discovery_start") is not None + + def test_log_stage_end(self): + """Should log stage end with duration.""" + app = MagicMock() + ctx = HookContext(pr_number=795, stage="discovery") + ctx.record_timestamp("discovery_start") + + log_stage_end(app, ctx) + + app.log.debug.assert_called() + assert ctx.get_timestamp("discovery_end") is not None + + +class TestCheckCancelled: + def test_allows_uncancelled(self): + """Should not raise for uncancelled context.""" + app = MagicMock() + ctx = HookContext(pr_number=795, cancelled=False) + + # Should not raise + check_cancelled(app, ctx) + + def test_raises_on_cancelled(self): + """Should raise RuntimeError when cancelled.""" + app = MagicMock() + ctx = HookContext(pr_number=795, stage="fix", cancelled=True) + + with pytest.raises(RuntimeError) as exc: + check_cancelled(app, ctx) + + assert "cancelled" in str(exc.value) + assert "fix" in str(exc.value) + + +class TestRateLimitHandler: + def test_waits_and_logs(self): + """Should log warning and sleep.""" + app = MagicMock() + ctx = HookContext(pr_number=795) + ctx.data["retry_after"] = 1 # 1 second for test speed + + with patch("time.sleep") as mock_sleep: + rate_limit_handler(app, ctx) + + 
app.log.warning.assert_called_once() + mock_sleep.assert_called_once_with(1) + + def test_uses_default_wait(self): + """Should use 60s default if retry_after not set.""" + app = MagicMock() + ctx = HookContext(pr_number=795) + + with patch("time.sleep") as mock_sleep: + rate_limit_handler(app, ctx) + + mock_sleep.assert_called_once_with(60) + + +class TestErrorHandler: + def test_logs_error(self): + """Should log error details.""" + app = MagicMock() + ctx = HookContext(pr_number=795) + ctx.data["error"] = "Connection failed" + ctx.data["stage"] = "commit" + + error_handler(app, ctx) + + app.log.error.assert_called_once() + call_arg = app.log.error.call_args[0][0] + assert "commit" in call_arg + assert "Connection failed" in call_arg + + +# ============================================================================= +# Registration Helper Tests +# ============================================================================= + + +class TestRegisterLoggingHooks: + def test_registers_pre_and_post(self): + """Should register logging hooks for all stages.""" + app = MagicMock() + + register_logging_hooks(app) + + # Should have registered for pre_ and post_ hooks + register_calls = [call[0][0] for call in app.hook.register.call_args_list] + assert "pre_discovery" in register_calls + assert "post_discovery" in register_calls + + +class TestRegisterCancellationHooks: + def test_registers_with_high_weight(self): + """Should register cancellation hooks with high priority.""" + app = MagicMock() + + register_cancellation_hooks(app) + + # Check weight was passed + for call in app.hook.register.call_args_list: + if call[0][0].startswith("pre_"): + assert call[1].get("weight", 0) == 100 or call[0][2] == 100 + + +class TestRegisterErrorHandlers: + def test_registers_handlers(self): + """Should register error handlers.""" + app = MagicMock() + + register_error_handlers(app) + + register_calls = [call[0][0] for call in app.hook.register.call_args_list] + assert "on_error" in register_calls + assert "on_rate_limit" in register_calls + + +# ============================================================================= +# Meta Configuration Tests +# ============================================================================= + + +class TestGetHookDefinitions: + def test_returns_all_hooks(self): + """Should return list of all hook names.""" + hooks = get_hook_definitions() + + assert "pre_discovery" in hooks + assert "post_fix" in hooks + assert "on_error" in hooks + assert len(hooks) == len(PIPELINE_HOOKS) + + +class TestGetDefaultHooks: + def test_includes_cancellation_hooks(self): + """Should include cancellation checks for pre_ hooks.""" + hooks = get_default_hooks() + + pre_hooks = [h for h in hooks if h[0].startswith("pre_")] + assert len(pre_hooks) > 0 + + # All should have check_cancelled + for hook_name, func, weight in pre_hooks: + assert func == check_cancelled + assert weight == 100 + + def test_includes_error_handlers(self): + """Should include error handlers.""" + hooks = get_default_hooks() + + hook_names = [h[0] for h in hooks] + assert "on_error" in hook_names + assert "on_rate_limit" in hook_names + + +# ============================================================================= +# Pipeline Hook Names Tests +# ============================================================================= + + +class TestPipelineHooks: + def test_has_all_stages(self): + """Should have hooks for all pipeline stages.""" + stages = ["discovery", "filter", "consolidate", "plan", "fix", "commit", "notify"] + + for 
stage in stages: + assert f"pre_{stage}" in PIPELINE_HOOKS + assert f"post_{stage}" in PIPELINE_HOOKS + + def test_has_iteration_hooks(self): + """Should have iteration lifecycle hooks.""" + assert "pre_iteration" in PIPELINE_HOOKS + assert "post_iteration" in PIPELINE_HOOKS + + def test_has_error_hooks(self): + """Should have error handling hooks.""" + assert "on_error" in PIPELINE_HOOKS + assert "on_rate_limit" in PIPELINE_HOOKS + + def test_has_group_hooks(self): + """Should have per-group hooks for fix stage.""" + assert "pre_fix_group" in PIPELINE_HOOKS + assert "post_fix_group" in PIPELINE_HOOKS diff --git a/.claude/agents/skill-pr-addresser/tests/test_location_progress.py b/.claude/agents/skill-pr-addresser/tests/test_location_progress.py new file mode 100644 index 0000000..db40714 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_location_progress.py @@ -0,0 +1,303 @@ +# tests/test_location_progress.py +"""Tests for location-level progress tracking. + +Stage 10 tests for partial addressing support. +""" + +import pytest +from datetime import datetime, timezone +import sys +from pathlib import Path + +# Add src to path for imports +sys.path.insert(0, str(Path(__file__).parent.parent / "src")) + +from location_progress import ( + AddressedLocation, + ActionGroupProgress, + IterationProgress, + PRLocationProgress, +) + + +# ============================================================================= +# AddressedLocation Tests +# ============================================================================= + + +class TestAddressedLocation: + def test_to_dict(self): + """Should serialize to dictionary.""" + loc = AddressedLocation( + file="SKILL.md", + line=42, + thread_id="PRRT_123", + addressed_at=datetime(2025, 1, 1, 12, 0, 0, tzinfo=timezone.utc), + commit_sha="abc123", + ) + + data = loc.to_dict() + assert data["file"] == "SKILL.md" + assert data["line"] == 42 + assert data["thread_id"] == "PRRT_123" + assert data["commit_sha"] == "abc123" + assert "2025-01-01" in data["addressed_at"] + + def test_from_dict(self): + """Should deserialize from dictionary.""" + data = { + "file": "SKILL.md", + "line": 42, + "thread_id": "PRRT_123", + "addressed_at": "2025-01-01T12:00:00+00:00", + "commit_sha": "abc123", + } + + loc = AddressedLocation.from_dict(data) + assert loc.file == "SKILL.md" + assert loc.line == 42 + assert loc.thread_id == "PRRT_123" + assert loc.commit_sha == "abc123" + + def test_from_dict_with_missing_fields(self): + """Should handle missing optional fields.""" + data = { + "file": "SKILL.md", + "commit_sha": "abc123", + } + + loc = AddressedLocation.from_dict(data) + assert loc.file == "SKILL.md" + assert loc.line is None + assert loc.thread_id is None + + +# ============================================================================= +# ActionGroupProgress Tests +# ============================================================================= + + +class TestActionGroupProgress: + def test_is_complete(self): + """Should track completion status.""" + progress = ActionGroupProgress(group_id="g1", total_locations=2) + assert not progress.is_complete + + progress.add_location("SKILL.md", 42, None, "abc123") + assert not progress.is_complete + + progress.add_location("SKILL.md", 100, None, "abc123") + assert progress.is_complete + + def test_progress_pct(self): + """Should calculate progress percentage.""" + progress = ActionGroupProgress(group_id="g1", total_locations=4) + assert progress.progress_pct == 0.0 + + progress.add_location("SKILL.md", 42, None, 
"abc123") + assert progress.progress_pct == 25.0 + + progress.add_location("SKILL.md", 100, None, "abc123") + assert progress.progress_pct == 50.0 + + def test_progress_pct_empty(self): + """Should handle zero total locations.""" + progress = ActionGroupProgress(group_id="g1", total_locations=0) + assert progress.progress_pct == 100.0 + + def test_has_location(self): + """Should check if location exists.""" + progress = ActionGroupProgress(group_id="g1", total_locations=2) + progress.add_location("SKILL.md", 42, None, "abc123") + + assert progress.has_location("SKILL.md", 42) + assert not progress.has_location("SKILL.md", 100) + assert not progress.has_location("OTHER.md", 42) + + def test_pending_count(self): + """Should track pending count.""" + progress = ActionGroupProgress(group_id="g1", total_locations=3) + assert progress.pending_count == 3 + + progress.add_location("SKILL.md", 42, None, "abc123") + assert progress.pending_count == 2 + + def test_get_pending_files(self): + """Should return only unaddressed locations.""" + progress = ActionGroupProgress(group_id="g1", total_locations=3) + progress.add_location("SKILL.md", 42, None, "abc123") + + all_locs = [ + {"file": "SKILL.md", "line": 42}, + {"file": "SKILL.md", "line": 100}, + {"file": "OTHER.md", "line": 10}, + ] + + pending = progress.get_pending_files(all_locs) + assert len(pending) == 2 + assert pending[0]["line"] == 100 + assert pending[1]["file"] == "OTHER.md" + + def test_serialization_roundtrip(self): + """Should serialize and deserialize correctly.""" + progress = ActionGroupProgress(group_id="g1", total_locations=2) + progress.add_location("SKILL.md", 42, "PRRT_123", "abc123") + + data = progress.to_dict() + restored = ActionGroupProgress.from_dict(data) + + assert restored.group_id == "g1" + assert restored.total_locations == 2 + assert len(restored.addressed_locations) == 1 + assert restored.addressed_locations[0].file == "SKILL.md" + + +# ============================================================================= +# IterationProgress Tests +# ============================================================================= + + +class TestIterationProgress: + def test_get_or_create_group(self): + """Should get existing or create new group.""" + progress = IterationProgress( + iteration=1, + started_at=datetime.now(timezone.utc), + ) + + group1 = progress.get_or_create_group("g1", 3) + assert group1.group_id == "g1" + assert group1.total_locations == 3 + + # Same group + group1b = progress.get_or_create_group("g1", 5) + assert group1b is group1 + assert group1b.total_locations == 3 # Unchanged + + def test_all_complete(self): + """Should check if all groups complete.""" + progress = IterationProgress( + iteration=1, + started_at=datetime.now(timezone.utc), + ) + + group1 = progress.get_or_create_group("g1", 1) + group2 = progress.get_or_create_group("g2", 1) + + assert not progress.all_complete + + group1.add_location("a.md", 1, None, "abc") + assert not progress.all_complete + + group2.add_location("b.md", 2, None, "abc") + assert progress.all_complete + + def test_total_counts(self): + """Should aggregate counts across groups.""" + progress = IterationProgress( + iteration=1, + started_at=datetime.now(timezone.utc), + ) + + group1 = progress.get_or_create_group("g1", 3) + group2 = progress.get_or_create_group("g2", 2) + + group1.add_location("a.md", 1, None, "abc") + group2.add_location("b.md", 2, None, "abc") + + assert progress.total_addressed == 2 + assert progress.total_pending == 3 + + def test_complete(self): + 
"""Should mark as completed.""" + progress = IterationProgress( + iteration=1, + started_at=datetime.now(timezone.utc), + ) + assert progress.completed_at is None + + progress.complete() + assert progress.completed_at is not None + + def test_serialization_roundtrip(self): + """Should serialize and deserialize correctly.""" + progress = IterationProgress( + iteration=1, + started_at=datetime.now(timezone.utc), + ) + group = progress.get_or_create_group("g1", 2) + group.add_location("a.md", 1, None, "abc") + + data = progress.to_dict() + restored = IterationProgress.from_dict(data) + + assert restored.iteration == 1 + assert len(restored.groups) == 1 + assert "g1" in restored.groups + assert restored.groups["g1"].addressed_count == 1 + + +# ============================================================================= +# PRLocationProgress Tests +# ============================================================================= + + +class TestPRLocationProgress: + def test_start_iteration(self): + """Should start new iteration.""" + progress = PRLocationProgress(pr_number=795) + assert progress.last_iteration_number == 0 + + it1 = progress.start_iteration() + assert it1.iteration == 1 + assert progress.last_iteration_number == 1 + + it2 = progress.start_iteration() + assert it2.iteration == 2 + + def test_current_iteration(self): + """Should return current (incomplete) iteration.""" + progress = PRLocationProgress(pr_number=795) + assert progress.current_iteration is None + + it1 = progress.start_iteration() + assert progress.current_iteration is it1 + + it1.complete() + assert progress.current_iteration is None + + it2 = progress.start_iteration() + assert progress.current_iteration is it2 + + def test_get_or_start_iteration(self): + """Should get current or start new.""" + progress = PRLocationProgress(pr_number=795) + + it1 = progress.get_or_start_iteration() + assert it1.iteration == 1 + + it1b = progress.get_or_start_iteration() + assert it1b is it1 + + it1.complete() + it2 = progress.get_or_start_iteration() + assert it2.iteration == 2 + + def test_serialization_roundtrip(self): + """Should serialize and deserialize correctly.""" + progress = PRLocationProgress(pr_number=795) + it1 = progress.start_iteration() + group = it1.get_or_create_group("g1", 2) + group.add_location("a.md", 1, None, "abc") + it1.complete() + + it2 = progress.start_iteration() + it2.get_or_create_group("g2", 1) + + data = progress.to_dict() + restored = PRLocationProgress.from_dict(data) + + assert restored.pr_number == 795 + assert len(restored.iterations) == 2 + assert restored.iterations[0].completed_at is not None + assert restored.iterations[1].completed_at is None diff --git a/.claude/agents/skill-pr-addresser/tests/test_locking.py b/.claude/agents/skill-pr-addresser/tests/test_locking.py new file mode 100644 index 0000000..f85b693 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_locking.py @@ -0,0 +1,147 @@ +# tests/test_locking.py +"""Tests for session locking. + +Stage 10 tests for concurrent run prevention. 
+""" + +import os +import pytest +from pathlib import Path +from tempfile import TemporaryDirectory +import sys + +# Add src to path for imports +sys.path.insert(0, str(Path(__file__).parent.parent / "src")) + +from locking import SessionLock, session_lock, LockError, force_unlock + + +class TestSessionLock: + def test_acquire_and_release(self): + """Should acquire and release lock successfully.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) + lock = SessionLock.acquire(sessions_dir, 795) + + assert lock.pr_number == 795 + assert lock.holder_pid == os.getpid() + assert (sessions_dir / ".lock-pr-795").exists() + + lock.release() + assert not (sessions_dir / ".lock-pr-795").exists() + + def test_cannot_acquire_twice(self): + """Should not be able to acquire same lock twice.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) + lock1 = SessionLock.acquire(sessions_dir, 795) + + with pytest.raises(LockError) as exc: + SessionLock.acquire(sessions_dir, 795) + + assert "PID" in str(exc.value) + lock1.release() + + def test_different_prs_can_lock(self): + """Different PRs should be able to lock independently.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) + lock1 = SessionLock.acquire(sessions_dir, 795) + lock2 = SessionLock.acquire(sessions_dir, 796) + + assert lock1.pr_number == 795 + assert lock2.pr_number == 796 + + lock1.release() + lock2.release() + + def test_context_manager(self): + """Context manager should auto-release lock.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) + + with session_lock(sessions_dir, 795) as lock: + assert lock.pr_number == 795 + assert (sessions_dir / ".lock-pr-795").exists() + + # Released after context + assert not (sessions_dir / ".lock-pr-795").exists() + + def test_context_manager_releases_on_exception(self): + """Context manager should release lock on exception.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) + + with pytest.raises(ValueError): + with session_lock(sessions_dir, 795) as lock: + assert (sessions_dir / ".lock-pr-795").exists() + raise ValueError("test error") + + # Released despite exception + assert not (sessions_dir / ".lock-pr-795").exists() + + def test_lock_to_dict(self): + """Should serialize lock info.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) + lock = SessionLock.acquire(sessions_dir, 795) + + data = lock.to_dict() + assert data["pr_number"] == 795 + assert data["holder_pid"] == os.getpid() + assert "acquired_at" in data + + lock.release() + + def test_creates_sessions_dir(self): + """Should create sessions directory if it doesn't exist.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) / "nonexistent" / "sessions" + lock = SessionLock.acquire(sessions_dir, 795) + + assert sessions_dir.exists() + lock.release() + + +class TestForceUnlock: + def test_unlock_nonexistent_lock(self): + """Should handle non-existent lock gracefully.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) + success, message = force_unlock(sessions_dir, 795) + + assert success is True + assert "No lock exists" in message + + def test_unlock_existing_lock(self): + """Should unlock existing lock when process not running.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) + lock = SessionLock.acquire(sessions_dir, 795) + + # Manually close the fd to simulate crashed process + # but keep the file + lock._fd.close() + lock._fd = None + + # Now force unlock + 
success, message = force_unlock(sessions_dir, 795, force=True) + + assert success is True + assert "Released" in message + assert not (sessions_dir / ".lock-pr-795").exists() + + def test_unlock_with_force_flag(self): + """Should unlock with force flag even if PID running.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) + + # Create a fake lock file with our PID + lock_file = sessions_dir / ".lock-pr-795" + sessions_dir.mkdir(parents=True, exist_ok=True) + lock_file.write_text(f'{{"pr_number": 795, "holder_pid": {os.getpid()}}}') + + success, message = force_unlock(sessions_dir, 795, force=True) + + assert success is True + assert "Released" in message diff --git a/.claude/agents/skill-pr-addresser/tests/test_models.py b/.claude/agents/skill-pr-addresser/tests/test_models.py new file mode 100644 index 0000000..2af6a64 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_models.py @@ -0,0 +1,558 @@ +# tests/test_models.py +"""Tests for feedback models and hashing utilities. + +Stage 8 tests for #796: detect updated comments after addressing. +""" + +import pytest +from datetime import datetime, timezone +from unittest.mock import MagicMock + +import sys +from pathlib import Path + +# Add agent directory to path for imports +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.hashing import hash_content, hashes_match, hash_lines +from src.models import ( + ReviewFeedback, + CommentFeedback, + ThreadComment, + ThreadFeedback, + TokenUsage, + Location, + ActionGroup, + AddressedLocation, + FixResult, + RawFeedback, +) +from src.session_schema import AddressedItem, ThreadState, FeedbackState + + +# ============================================================================= +# Hashing Tests +# ============================================================================= + + +class TestHashing: + def test_hash_content_deterministic(self): + """Same content should produce same hash.""" + assert hash_content("hello") == hash_content("hello") + + def test_hash_content_normalizes_whitespace(self): + """Whitespace normalization should produce consistent hashes.""" + assert hash_content("hello world") == hash_content("hello world") + assert hash_content("hello\n\nworld") == hash_content("hello world") + assert hash_content(" hello ") == hash_content("hello") + + def test_hash_content_empty(self): + """Empty content should return special hash.""" + assert hash_content("") == "sha256:empty" + assert hash_content(None) == "sha256:empty" + + def test_hash_content_format(self): + """Hash should have correct format.""" + h = hash_content("test content") + assert h.startswith("sha256:") + assert len(h) == len("sha256:") + 16 # Truncated to 16 chars + + def test_hashes_match(self): + """hashes_match should compare correctly.""" + h1 = hash_content("test") + h2 = hash_content("test") + h3 = hash_content("different") + assert hashes_match(h1, h2) + assert not hashes_match(h1, h3) + + def test_hash_lines(self): + """hash_lines should hash specific lines.""" + content = "line1\nline2\nline3\nline4" + h = hash_lines(content, 2, 3) + assert h.startswith("sha256:") + # Should match hashing lines 2-3 directly + assert h == hash_content("line2\nline3") + + +# ============================================================================= +# ReviewFeedback Tests +# ============================================================================= + + +class TestReviewFeedback: + def test_creates_review(self): + 
"""Should create review with computed hash.""" + review = ReviewFeedback( + id="R_123", + state="CHANGES_REQUESTED", + body="Please fix this", + author="reviewer", + submitted_at=datetime.now(timezone.utc), + ) + assert review.content_hash.startswith("sha256:") + assert review.content == "Please fix this" + + def test_content_property(self): + """content property should return body.""" + review = ReviewFeedback( + id="R_123", + state="COMMENTED", + body="Test body", + author="reviewer", + submitted_at=datetime.now(timezone.utc), + ) + assert review.content == review.body + + def test_created_at_property(self): + """created_at property should return submitted_at.""" + now = datetime.now(timezone.utc) + review = ReviewFeedback( + id="R_123", + state="COMMENTED", + body="Test", + author="reviewer", + submitted_at=now, + ) + assert review.created_at == now + + def test_is_resolved_by_always_false(self): + """Reviews can't be resolved by responses.""" + review = ReviewFeedback( + id="R_123", + state="CHANGES_REQUESTED", + body="Fix this", + author="reviewer", + submitted_at=datetime.now(timezone.utc), + ) + other = ReviewFeedback( + id="R_456", + state="APPROVED", + body="Looks good", + author="reviewer", + submitted_at=datetime.now(timezone.utc), + ) + assert not review.is_resolved_by(other) + + def test_from_github(self): + """Should parse GitHub API response.""" + data = { + "id": "R_abc", + "state": "CHANGES_REQUESTED", + "body": "Please fix", + "author": {"login": "reviewer"}, + "submittedAt": "2025-01-01T12:00:00Z", + } + review = ReviewFeedback.from_github(data) + assert review.id == "R_abc" + assert review.author == "reviewer" + assert review.state == "CHANGES_REQUESTED" + + +# ============================================================================= +# CommentFeedback Tests +# ============================================================================= + + +class TestCommentFeedback: + def test_is_resolved_by_reviewer_withdrawal(self): + """Should detect reviewer withdrawal.""" + comment = CommentFeedback( + id="IC_123", + body="Please add tests", + author="reviewer", + created_at=datetime.now(timezone.utc), + ) + response = CommentFeedback( + id="IC_456", + body="Never mind, I see you already have them", + author="reviewer", + created_at=datetime.now(timezone.utc), + ) + assert comment.is_resolved_by(response) + + def test_is_resolved_by_other_phrases(self): + """Should detect various resolution phrases.""" + comment = CommentFeedback( + id="IC_123", + body="Add tests", + author="reviewer", + created_at=datetime.now(timezone.utc), + ) + + phrases = ["ignore", "looks good now", "resolved", "my mistake", "disregard"] + for phrase in phrases: + response = CommentFeedback( + id="IC_456", + body=f"Oh, {phrase}!", + author="reviewer", + created_at=datetime.now(timezone.utc), + ) + assert comment.is_resolved_by(response), f"Should resolve with '{phrase}'" + + def test_not_resolved_by_different_author(self): + """Different author can't resolve.""" + comment = CommentFeedback( + id="IC_123", + body="Add tests", + author="reviewer", + created_at=datetime.now(timezone.utc), + ) + response = CommentFeedback( + id="IC_456", + body="Never mind", + author="other_user", + created_at=datetime.now(timezone.utc), + ) + assert not comment.is_resolved_by(response) + + def test_has_acknowledgment_reaction(self): + """Should detect thumbs up reaction.""" + comment = CommentFeedback( + id="IC_123", + body="Test", + author="reviewer", + created_at=datetime.now(timezone.utc), + reactions={"thumbsUp": 1, 
"thumbsDown": 0}, + ) + assert comment.has_acknowledgment_reaction("anyone") + + comment_no_reaction = CommentFeedback( + id="IC_456", + body="Test", + author="reviewer", + created_at=datetime.now(timezone.utc), + reactions={}, + ) + assert not comment_no_reaction.has_acknowledgment_reaction("anyone") + + +# ============================================================================= +# ThreadFeedback Tests +# ============================================================================= + + +class TestThreadFeedback: + def _make_thread(self, comments: list[tuple[str, str, str]]) -> ThreadFeedback: + """Helper to create threads with comments.""" + return ThreadFeedback( + id="PRRT_123", + path="SKILL.md", + line=42, + is_resolved=False, + is_outdated=False, + comments=[ + ThreadComment( + id=cid, + body=body, + author=author, + created_at=datetime.now(timezone.utc), + ) + for cid, body, author in comments + ], + ) + + def test_has_author_resolution(self): + """Should detect PR author resolution signal.""" + thread = self._make_thread([ + ("c1", "Fix this typo", "reviewer"), + ("c2", "Done!", "pr_author"), + ]) + assert thread.has_author_resolution("pr_author") + + def test_has_author_resolution_with_variations(self): + """Should detect various resolution phrases.""" + for phrase in ["done", "fixed", "addressed", "resolved", "will do"]: + thread = self._make_thread([ + ("c1", "Fix this", "reviewer"), + ("c2", phrase, "pr_author"), + ]) + assert thread.has_author_resolution("pr_author"), f"Should detect '{phrase}'" + + def test_has_reviewer_withdrawal(self): + """Should detect reviewer withdrawal.""" + thread = self._make_thread([ + ("c1", "Fix this", "reviewer"), + ("c2", "Actually, never mind", "reviewer"), + ]) + assert thread.has_reviewer_withdrawal() + + def test_get_new_comments_since(self): + """Should return comments after specified ID.""" + thread = self._make_thread([ + ("c1", "First", "reviewer"), + ("c2", "Second", "author"), + ("c3", "Third", "reviewer"), + ]) + new = thread.get_new_comments_since("c1") + assert len(new) == 2 + assert new[0].id == "c2" + assert new[1].id == "c3" + + def test_get_new_comments_since_none(self): + """Should return all comments when last_seen is None.""" + thread = self._make_thread([ + ("c1", "First", "reviewer"), + ("c2", "Second", "author"), + ]) + new = thread.get_new_comments_since(None) + assert len(new) == 2 + + def test_from_github(self): + """Should parse GitHub GraphQL response.""" + data = { + "id": "PRRT_abc", + "path": "SKILL.md", + "line": 42, + "isResolved": False, + "isOutdated": False, + "comments": { + "nodes": [ + { + "id": "PRRTC_1", + "body": "Fix this", + "author": {"login": "reviewer"}, + "createdAt": "2025-01-01T12:00:00Z", + } + ] + }, + } + thread = ThreadFeedback.from_github(data) + assert thread.id == "PRRT_abc" + assert thread.path == "SKILL.md" + assert len(thread.comments) == 1 + assert thread.comments[0].author == "reviewer" + + +# ============================================================================= +# TokenUsage Tests +# ============================================================================= + + +class TestTokenUsage: + def test_total_calculation(self): + """Should calculate total tokens correctly.""" + usage = TokenUsage(input_tokens=100, output_tokens=50) + assert usage.total == 150 + + def test_addition(self): + """Should add token usages correctly.""" + usage1 = TokenUsage(input_tokens=100, output_tokens=50, total_cost=0.01) + usage2 = TokenUsage(input_tokens=200, output_tokens=100, 
total_cost=0.02) + combined = usage1 + usage2 + assert combined.input_tokens == 300 + assert combined.output_tokens == 150 + assert combined.total_cost == 0.03 + + def test_to_dict(self): + """Should serialize to dict.""" + usage = TokenUsage(input_tokens=100, output_tokens=50) + d = usage.to_dict() + assert d["input_tokens"] == 100 + assert d["output_tokens"] == 50 + assert d["total"] == 150 + + +# ============================================================================= +# ActionGroup Tests +# ============================================================================= + + +class TestActionGroup: + def test_location_count(self): + """Should count locations correctly.""" + group = ActionGroup( + id="g1", + type="fix_code", + description="Fix type errors", + locations=[ + Location(file="SKILL.md", line=42), + Location(file="SKILL.md", line=100), + ], + ) + assert group.location_count == 2 + + def test_thread_ids(self): + """Should extract thread IDs from locations.""" + group = ActionGroup( + id="g1", + type="fix_code", + description="Fix", + locations=[ + Location(file="SKILL.md", line=42, thread_id="T_1"), + Location(file="SKILL.md", line=100), + Location(file="SKILL.md", line=200, thread_id="T_2"), + ], + ) + assert group.thread_ids == ["T_1", "T_2"] + + def test_serialization(self): + """Should serialize and deserialize correctly.""" + group = ActionGroup( + id="g1", + type="fix_code", + description="Fix type errors", + locations=[Location(file="SKILL.md", line=42, thread_id="T_1")], + priority="high", + ) + data = group.to_dict() + restored = ActionGroup.from_dict(data) + assert restored.id == "g1" + assert restored.type == "fix_code" + assert restored.locations[0].line == 42 + assert restored.locations[0].thread_id == "T_1" + assert restored.priority == "high" + + +# ============================================================================= +# FixResult Tests +# ============================================================================= + + +class TestFixResult: + def test_success_factory(self): + """Should create success result.""" + result = FixResult.success( + group_id="g1", + addressed_locations=[ + AddressedLocation( + file="SKILL.md", + line=42, + thread_id="T_1", + addressed_at=datetime.now(timezone.utc), + commit_sha="abc123", + ) + ], + addressed_thread_ids=["T_1"], + ) + assert result.has_changes + assert not result.failed + assert not result.skipped + + def test_skipped_factory(self): + """Should create skipped result.""" + result = FixResult.skipped_result("g1", "already_complete") + assert result.skipped + assert result.reason == "already_complete" + assert not result.has_changes + + def test_failed_factory(self): + """Should create failed result.""" + error = Exception("Something went wrong") + result = FixResult.failed_result("g1", error) + assert result.failed + assert result.error == error + + +# ============================================================================= +# RawFeedback Tests +# ============================================================================= + + +class TestRawFeedback: + def test_total_count(self): + """Should count all feedback types.""" + raw = RawFeedback( + reviews=[MagicMock()], + comments=[MagicMock(), MagicMock()], + threads=[MagicMock()], + ) + assert raw.total_count == 4 + + def test_summary(self): + """Should generate summary dict.""" + raw = RawFeedback( + reviews=[MagicMock()], + comments=[MagicMock(), MagicMock()], + threads=[MagicMock(), MagicMock(), MagicMock()], + ) + summary = raw.summary() + assert 
summary["reviews"] == 1 + assert summary["comments"] == 2 + assert summary["threads"] == 3 + assert summary["total"] == 6 + + +# ============================================================================= +# Session Schema Tests +# ============================================================================= + + +class TestAddressedItem: + def test_serialization(self): + """Should serialize and deserialize.""" + item = AddressedItem( + id="item_1", + content_hash="sha256:abc123", + addressed_at=datetime(2025, 1, 1, 12, 0, 0, tzinfo=timezone.utc), + addressed_in_commit="commit123", + iteration=1, + ) + data = item.to_dict() + restored = AddressedItem.from_dict(data) + assert restored.id == "item_1" + assert restored.content_hash == "sha256:abc123" + assert restored.addressed_in_commit == "commit123" + assert restored.iteration == 1 + + +class TestThreadState: + def test_serialization(self): + """Should serialize and deserialize.""" + state = ThreadState( + thread_id="T_1", + last_seen_comment_id="c_3", + comments_processed=["c_1", "c_2", "c_3"], + last_processed_at=datetime(2025, 1, 1, 12, 0, 0, tzinfo=timezone.utc), + ) + data = state.to_dict() + restored = ThreadState.from_dict(data) + assert restored.thread_id == "T_1" + assert restored.last_seen_comment_id == "c_3" + assert len(restored.comments_processed) == 3 + + +class TestFeedbackState: + def test_mark_addressed(self): + """Should mark items as addressed.""" + state = FeedbackState() + state.mark_addressed("item_1", "sha256:abc", "commit123", 1) + assert state.was_addressed("item_1") + assert state.was_addressed_with_hash("item_1", "sha256:abc") + assert not state.was_addressed_with_hash("item_1", "sha256:different") + + def test_update_thread(self): + """Should update thread state.""" + state = FeedbackState() + state.update_thread("T_1", ["c_1", "c_2"]) + assert "T_1" in state.threads + assert state.threads["T_1"].last_seen_comment_id == "c_2" + + # Update with more comments + state.update_thread("T_1", ["c_3"]) + assert state.threads["T_1"].last_seen_comment_id == "c_3" + assert "c_1" in state.threads["T_1"].comments_processed + assert "c_3" in state.threads["T_1"].comments_processed + + def test_get_unprocessed_comments(self): + """Should return unprocessed comments.""" + state = FeedbackState() + state.update_thread("T_1", ["c_1", "c_2"]) + + unprocessed = state.get_unprocessed_comments("T_1", ["c_1", "c_2", "c_3", "c_4"]) + assert unprocessed == ["c_3", "c_4"] + + def test_serialization(self): + """Should serialize and deserialize complete state.""" + state = FeedbackState() + state.mark_addressed("item_1", "sha256:abc", "commit123", 1) + state.update_thread("T_1", ["c_1", "c_2"]) + state.record_run() + + data = state.to_dict() + restored = FeedbackState.from_dict(data) + + assert restored.was_addressed("item_1") + assert "T_1" in restored.threads + assert restored.last_run is not None diff --git a/.claude/agents/skill-pr-addresser/tests/test_pipeline.py b/.claude/agents/skill-pr-addresser/tests/test_pipeline.py new file mode 100644 index 0000000..107a7d7 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_pipeline.py @@ -0,0 +1,472 @@ +# tests/test_pipeline.py +"""Tests for the pipeline executor. + +Stage 12 tests for the main orchestration pipeline. +Note: The Pipeline class uses relative imports so we test through the app or mocks. 
+""" + +import pytest +from datetime import datetime, timezone +from pathlib import Path +from tempfile import TemporaryDirectory +from unittest.mock import MagicMock, patch, PropertyMock +import sys + +# Add agent directory to path for imports +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.hooks import HookContext, HookRegistry, PIPELINE_HOOKS +from src.locking import LockError + + +# ============================================================================= +# Pipeline Data Classes (tested through direct construction) +# ============================================================================= + + +class TestStageResultLike: + """Test stage result behavior through dict-based simulation.""" + + def test_stage_result_structure(self): + """Stage results should have expected structure.""" + result = { + "stage": "discovery", + "success": True, + "duration_ms": 100.5, + "error": None, + "data": {}, + } + + assert result["stage"] == "discovery" + assert result["success"] is True + assert result["error"] is None + + def test_stage_result_with_error(self): + """Stage results should track errors.""" + result = { + "stage": "fix", + "success": False, + "duration_ms": 50.0, + "error": "Fix failed", + "data": {}, + } + + assert result["success"] is False + assert result["error"] == "Fix failed" + + +class TestIterationResultLike: + """Test iteration result behavior through dict-based simulation.""" + + def test_iteration_result_structure(self): + """Iteration results should have expected structure.""" + result = { + "iteration": 1, + "addressed_count": 5, + "skipped_count": 1, + "failed_count": 0, + "commit_sha": "abc123", + "pushed": True, + "cost": 0.75, + "stages": [], + } + + assert result["iteration"] == 1 + assert result["addressed_count"] == 5 + assert result["commit_sha"] == "abc123" + + def test_iteration_success_computed(self): + """Iteration success should be computed from stages.""" + stages = [ + {"stage": "filter", "success": True}, + {"stage": "fix", "success": True}, + ] + + success = all(s["success"] for s in stages) + assert success is True + + stages.append({"stage": "commit", "success": False}) + success = all(s["success"] for s in stages) + assert success is False + + +class TestPipelineResultLike: + """Test pipeline result behavior through dict-based simulation.""" + + def test_pipeline_result_structure(self): + """Pipeline results should have expected structure.""" + result = { + "success": True, + "pr_number": 795, + "iterations_run": 2, + "total_addressed": 10, + "total_skipped": 2, + "total_failed": 0, + "final_commit_sha": "abc123", + "ready_for_review": True, + "error": None, + "total_cost": 1.50, + "dry_run": False, + "iterations": [], + } + + assert result["success"] is True + assert result["pr_number"] == 795 + assert result["total_addressed"] == 10 + assert result["ready_for_review"] is True + + +# ============================================================================= +# Pipeline Hook Integration Tests +# ============================================================================= + + +class TestPipelineHooksIntegration: + def test_pipeline_hooks_defined(self): + """All pipeline hooks should be defined.""" + stages = ["discovery", "filter", "consolidate", "plan", "fix", "commit", "notify"] + + for stage in stages: + assert f"pre_{stage}" in PIPELINE_HOOKS + assert f"post_{stage}" in PIPELINE_HOOKS + + def test_iteration_hooks(self): + """Iteration hooks should exist.""" + 
assert "pre_iteration" in PIPELINE_HOOKS + assert "post_iteration" in PIPELINE_HOOKS + + def test_group_hooks(self): + """Per-group hooks should exist for fix stage.""" + assert "pre_fix_group" in PIPELINE_HOOKS + assert "post_fix_group" in PIPELINE_HOOKS + + def test_error_hooks(self): + """Error hooks should exist.""" + assert "on_error" in PIPELINE_HOOKS + assert "on_rate_limit" in PIPELINE_HOOKS + + +class TestHookRegistryForPipeline: + def test_register_pipeline_hooks(self): + """Should be able to register hooks for pipeline stages.""" + registry = HookRegistry() + for hook_name in PIPELINE_HOOKS: + registry.define(hook_name) + + called = [] + + def track_discovery(pipeline, ctx): + called.append("discovery") + + def track_fix(pipeline, ctx): + called.append("fix") + + registry.register("pre_discovery", track_discovery) + registry.register("pre_fix", track_fix) + + # Simulate running hooks + ctx = HookContext(pr_number=795, stage="discovery") + list(registry.run("pre_discovery", None, ctx)) + + ctx.stage = "fix" + list(registry.run("pre_fix", None, ctx)) + + assert called == ["discovery", "fix"] + + def test_hook_context_flows_through_stages(self): + """HookContext should preserve data between hooks.""" + registry = HookRegistry() + registry.define("pre_discovery") + registry.define("post_discovery") + + def add_data(pipeline, ctx): + ctx.data["from_discovery"] = True + + def check_data(pipeline, ctx): + return ctx.data.get("from_discovery") + + registry.register("pre_discovery", add_data) + registry.register("post_discovery", check_data) + + ctx = HookContext(pr_number=795) + list(registry.run("pre_discovery", None, ctx)) + results = list(registry.run("post_discovery", None, ctx)) + + assert results[0].output is True + + +# ============================================================================= +# Pipeline Flow Tests (Mocked) +# ============================================================================= + + +class TestPipelineFlowMocked: + def test_discovery_to_filter_flow(self): + """Should flow from discovery to filter.""" + stages_run = [] + + registry = HookRegistry() + for hook_name in PIPELINE_HOOKS: + registry.define(hook_name) + + def track_pre(pipeline, ctx): + stages_run.append(f"pre_{ctx.stage}") + + def track_post(pipeline, ctx): + stages_run.append(f"post_{ctx.stage}") + + for stage in ["discovery", "filter", "consolidate"]: + registry.register(f"pre_{stage}", track_pre) + registry.register(f"post_{stage}", track_post) + + # Simulate pipeline running stages + for stage in ["discovery", "filter", "consolidate"]: + ctx = HookContext(pr_number=795, stage=stage) + list(registry.run(f"pre_{stage}", None, ctx)) + # ... stage would run here ... 
+ list(registry.run(f"post_{stage}", None, ctx)) + + expected = [ + "pre_discovery", "post_discovery", + "pre_filter", "post_filter", + "pre_consolidate", "post_consolidate", + ] + assert stages_run == expected + + def test_iteration_wraps_stages(self): + """Iteration hooks should wrap stage hooks.""" + order = [] + + registry = HookRegistry() + for hook_name in PIPELINE_HOOKS: + registry.define(hook_name) + + def track(name): + def _track(pipeline, ctx): + order.append(name) + return _track + + registry.register("pre_iteration", track("iter_start")) + registry.register("pre_fix", track("fix_start")) + registry.register("post_fix", track("fix_end")) + registry.register("post_iteration", track("iter_end")) + + ctx = HookContext(pr_number=795, iteration=1) + + # Simulate iteration + ctx.stage = "iteration" + list(registry.run("pre_iteration", None, ctx)) + + ctx.stage = "fix" + list(registry.run("pre_fix", None, ctx)) + list(registry.run("post_fix", None, ctx)) + + ctx.stage = "iteration" + list(registry.run("post_iteration", None, ctx)) + + assert order == ["iter_start", "fix_start", "fix_end", "iter_end"] + + +# ============================================================================= +# Pipeline Error Handling Tests +# ============================================================================= + + +class TestPipelineErrorHandling: + def test_on_error_hook_triggered(self): + """on_error hook should be triggered on errors.""" + registry = HookRegistry() + registry.define("on_error") + + errors = [] + + def capture_error(pipeline, ctx): + errors.append(ctx.data.get("error")) + + registry.register("on_error", capture_error) + + # Simulate error + ctx = HookContext(pr_number=795) + ctx.data["error"] = "Something failed" + ctx.data["stage"] = "fix" + + list(registry.run("on_error", None, ctx)) + + assert len(errors) == 1 + assert errors[0] == "Something failed" + + def test_on_rate_limit_hook_triggered(self): + """on_rate_limit hook should be triggered on rate limits.""" + registry = HookRegistry() + registry.define("on_rate_limit") + + waits = [] + + def capture_rate_limit(pipeline, ctx): + waits.append(ctx.data.get("retry_after")) + + registry.register("on_rate_limit", capture_rate_limit) + + ctx = HookContext(pr_number=795) + ctx.data["retry_after"] = 60 + + list(registry.run("on_rate_limit", None, ctx)) + + assert waits == [60] + + +# ============================================================================= +# Pipeline Dry Run Tests +# ============================================================================= + + +class TestPipelineDryRunBehavior: + def test_dry_run_context_tracked(self): + """Dry run should be tracked in context.""" + ctx = HookContext(pr_number=795, dry_run=True) + assert ctx.dry_run is True + + def test_dry_run_serializes(self): + """Dry run should serialize correctly.""" + ctx = HookContext(pr_number=795, dry_run=True) + data = ctx.to_dict() + assert data["dry_run"] is True + + restored = HookContext.from_dict(data) + assert restored.dry_run is True + + +# ============================================================================= +# Pipeline Cancellation Tests +# ============================================================================= + + +class TestPipelineCancellation: + def test_cancelled_flag_in_context(self): + """Cancelled flag should be trackable.""" + ctx = HookContext(pr_number=795) + assert ctx.cancelled is False + + ctx.cancelled = True + assert ctx.cancelled is True + + def test_cancelled_stops_hooks(self): + """Hooks should check 
cancelled flag.""" + registry = HookRegistry() + registry.define("pre_fix") + + executed = [] + + def check_cancelled(pipeline, ctx): + if ctx.cancelled: + raise RuntimeError("Cancelled") + executed.append("ran") + + registry.register("pre_fix", check_cancelled) + + ctx = HookContext(pr_number=795, cancelled=True) + results = list(registry.run("pre_fix", None, ctx)) + + assert results[0].success is False + assert "Cancelled" in results[0].error + + +# ============================================================================= +# Pipeline State Management Tests +# ============================================================================= + + +class TestPipelineStateManagement: + def test_previous_state_loading(self): + """Should handle previous state loading.""" + import json + + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) / "sessions" + pr_dir = sessions_dir / "pr-795" + pr_dir.mkdir(parents=True) + + state = {"version": 1, "items": [{"id": "item-1", "hash": "abc"}]} + (pr_dir / "state.json").write_text(json.dumps(state)) + + # Read back + loaded = json.loads((pr_dir / "state.json").read_text()) + assert loaded["version"] == 1 + assert len(loaded["items"]) == 1 + + def test_missing_state_returns_none(self): + """Should handle missing state gracefully.""" + with TemporaryDirectory() as tmpdir: + state_file = Path(tmpdir) / "pr-795" / "state.json" + assert not state_file.exists() + + def test_invalid_state_handled(self): + """Should handle invalid JSON gracefully.""" + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) / "sessions" + pr_dir = sessions_dir / "pr-795" + pr_dir.mkdir(parents=True) + + (pr_dir / "state.json").write_text("not valid json") + + try: + import json + json.loads((pr_dir / "state.json").read_text()) + assert False, "Should have raised" + except json.JSONDecodeError: + pass # Expected + + +# ============================================================================= +# Pipeline Lock Tests +# ============================================================================= + + +class TestPipelineLocking: + def test_lock_error_import(self): + """LockError should be importable.""" + from src.locking import LockError + + err = LockError("Already locked") + assert "Already locked" in str(err) + + def test_session_lock_import(self): + """Session lock should be importable.""" + from src.locking import session_lock, SessionLock + + with TemporaryDirectory() as tmpdir: + sessions_dir = Path(tmpdir) + lock = SessionLock.acquire(sessions_dir, 795) + + assert lock.pr_number == 795 + lock.release() + + +# ============================================================================= +# Pipeline Progress Tracking Tests +# ============================================================================= + + +class TestPipelineProgressTracking: + def test_location_progress_import(self): + """Location progress should be importable.""" + from src.location_progress import PRLocationProgress + + progress = PRLocationProgress(pr_number=795) + assert progress.pr_number == 795 + + def test_iteration_progress(self): + """Should track iteration progress.""" + from src.location_progress import PRLocationProgress + + progress = PRLocationProgress(pr_number=795) + iter1 = progress.start_iteration() + assert iter1.iteration == 1 + + group = iter1.get_or_create_group("g1", 3) + group.add_location("SKILL.md", 42, "PRRT_123", "abc123") + + assert group.addressed_count == 1 + assert group.pending_count == 2 diff --git 
a/.claude/agents/skill-pr-addresser/tests/test_progress.py b/.claude/agents/skill-pr-addresser/tests/test_progress.py new file mode 100644 index 0000000..156ae7a --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_progress.py @@ -0,0 +1,193 @@ +"""Tests for the progress module.""" + +import pytest +from pathlib import Path + +import sys + +# Add agent directory to path +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.progress import ( + PRStatus, + PRProgress, + BatchProgress, + ProgressTracker, +) + + +class TestPRProgress: + """Tests for PRProgress class.""" + + def test_creates_progress(self): + """Should create PR progress with defaults.""" + pr = PRProgress(pr_number=795) + + assert pr.pr_number == 795 + assert pr.status == PRStatus.PENDING + assert pr.iterations_completed == 0 + + def test_to_dict(self): + """Should convert to dictionary.""" + pr = PRProgress( + pr_number=795, + status=PRStatus.IN_PROGRESS, + title="Test PR", + skill_path="components/skills/lang-rust-dev", + ) + + data = pr.to_dict() + + assert data["pr_number"] == 795 + assert data["status"] == "in_progress" + assert data["skill_path"] == "components/skills/lang-rust-dev" + + +class TestBatchProgress: + """Tests for BatchProgress class.""" + + def test_computes_counts(self): + """Should compute progress counts correctly.""" + batch = BatchProgress( + prs=[ + PRProgress(pr_number=1, status=PRStatus.SUCCESS), + PRProgress(pr_number=2, status=PRStatus.FAILED), + PRProgress(pr_number=3, status=PRStatus.SKIPPED), + PRProgress(pr_number=4, status=PRStatus.IN_PROGRESS), + PRProgress(pr_number=5, status=PRStatus.PENDING), + ] + ) + + assert batch.total_prs == 5 + assert batch.completed_prs == 3 + assert batch.success_count == 1 + assert batch.failed_count == 1 + assert batch.skipped_count == 1 + assert batch.in_progress_count == 1 + assert batch.pending_count == 1 + + def test_progress_percent(self): + """Should calculate progress percentage.""" + batch = BatchProgress( + prs=[ + PRProgress(pr_number=1, status=PRStatus.SUCCESS), + PRProgress(pr_number=2, status=PRStatus.PENDING), + PRProgress(pr_number=3, status=PRStatus.PENDING), + PRProgress(pr_number=4, status=PRStatus.PENDING), + ] + ) + + assert batch.progress_percent == 25.0 + + def test_empty_batch(self): + """Should handle empty batch.""" + batch = BatchProgress(prs=[]) + + assert batch.total_prs == 0 + assert batch.progress_percent == 100.0 + + +class TestProgressTracker: + """Tests for ProgressTracker class.""" + + def test_start_batch(self, tmp_path): + """Should start tracking a batch.""" + tracker = ProgressTracker(tmp_path) + tracker.start_batch([795, 800, 805]) + + assert tracker.batch is not None + assert tracker.batch.total_prs == 3 + assert (tmp_path / "progress.json").exists() + + def test_start_pr(self, tmp_path): + """Should mark PR as in progress.""" + tracker = ProgressTracker(tmp_path) + tracker.start_batch([795]) + tracker.start_pr(795, title="Test PR", skill_path="skills/test") + + pr = tracker._get_pr(795) + assert pr is not None + assert pr.status == PRStatus.IN_PROGRESS + assert pr.title == "Test PR" + + def test_update_iteration(self, tmp_path): + """Should update iteration progress.""" + tracker = ProgressTracker(tmp_path) + tracker.start_batch([795]) + tracker.start_pr(795) + tracker.update_iteration( + pr_number=795, + iteration=1, + feedback_count=5, + addressed_count=3, + skipped_count=2, + cost=0.50, + ) + + pr = tracker._get_pr(795) + assert 
pr.iterations_completed == 1 + assert pr.feedback_count == 5 + assert pr.addressed_count == 3 + assert pr.cost == 0.50 + + def test_complete_pr_success(self, tmp_path): + """Should mark PR as successful.""" + tracker = ProgressTracker(tmp_path) + tracker.start_batch([795]) + tracker.start_pr(795) + tracker.complete_pr(795, success=True) + + pr = tracker._get_pr(795) + assert pr.status == PRStatus.SUCCESS + assert pr.completed_at is not None + + def test_complete_pr_failure(self, tmp_path): + """Should mark PR as failed with error.""" + tracker = ProgressTracker(tmp_path) + tracker.start_batch([795]) + tracker.start_pr(795) + tracker.complete_pr(795, success=False, error="Analysis failed") + + pr = tracker._get_pr(795) + assert pr.status == PRStatus.FAILED + assert pr.error == "Analysis failed" + + def test_skip_pr(self, tmp_path): + """Should mark PR as skipped.""" + tracker = ProgressTracker(tmp_path) + tracker.start_batch([795]) + tracker.skip_pr(795, reason="No pending feedback") + + pr = tracker._get_pr(795) + assert pr.status == PRStatus.SKIPPED + assert pr.error == "No pending feedback" + + def test_get_summary(self, tmp_path): + """Should generate readable summary.""" + tracker = ProgressTracker(tmp_path) + tracker.start_batch([795, 800]) + tracker.start_pr(795) + tracker.complete_pr(795, success=True) + + summary = tracker.get_summary() + + assert "Batch Progress" in summary + assert "Total PRs: 2" in summary + assert "Success: 1" in summary + + def test_load_existing(self, tmp_path): + """Should load existing progress.""" + # Create initial tracker + tracker1 = ProgressTracker(tmp_path) + tracker1.start_batch([795, 800]) + tracker1.complete_pr(795, success=True) + + # Load with new tracker + tracker2 = ProgressTracker(tmp_path) + loaded = tracker2.load() + + assert loaded is not None + assert loaded.total_prs == 2 + assert loaded.success_count == 1 diff --git a/.claude/agents/skill-pr-addresser/tests/test_templates.py b/.claude/agents/skill-pr-addresser/tests/test_templates.py new file mode 100644 index 0000000..e3ee145 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_templates.py @@ -0,0 +1,376 @@ +"""Tests for the templates module.""" + +import pytest +from pathlib import Path + +import sys + +# Add agent directory to path +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.templates import ( + render_template, + _fallback_template, + format_summary_comment, + format_iteration_limit_comment, + format_error_comment, + format_no_feedback_comment, + format_partial_progress_comment, +) + + +# --- Fixtures --- + + +@pytest.fixture +def templates_dir(tmp_path): + """Create a templates directory with test templates.""" + templates = tmp_path / "templates" + templates.mkdir() + return templates + + +# --- Tests for render_template --- + + +class TestRenderTemplate: + """Tests for render_template function.""" + + def test_renders_template(self, templates_dir): + """Should render a Handlebars template.""" + (templates_dir / "test.hbs").write_text("Hello {{name}}!") + + result = render_template(templates_dir, "test", {"name": "World"}) + + assert result == "Hello World!" 
+ + def test_renders_template_with_list(self, templates_dir): + """Should render templates with list iteration (Mustache syntax).""" + # Mustache uses {{#items}}...{{/items}}, not {{#each items}} + (templates_dir / "list.hbs").write_text( + "{{#items}}- {{.}}\n{{/items}}" + ) + + result = render_template( + templates_dir, "list", {"items": ["one", "two", "three"]} + ) + + assert "- one" in result + assert "- two" in result + assert "- three" in result + + def test_renders_template_with_conditionals(self, templates_dir): + """Should handle conditionals (Mustache syntax).""" + # Mustache uses {{#show}}...{{/show}}, not {{#if show}} + (templates_dir / "cond.hbs").write_text( + "{{#show}}visible{{/show}}{{^show}}hidden{{/show}}" + ) + + result_true = render_template(templates_dir, "cond", {"show": True}) + result_false = render_template(templates_dir, "cond", {"show": False}) + + assert result_true == "visible" + assert result_false == "hidden" + + def test_uses_fallback_for_missing_template(self, templates_dir): + """Should use fallback when template doesn't exist.""" + result = render_template(templates_dir, "nonexistent", {"key": "value"}) + + # Should return something, not crash + assert result is not None + assert len(result) > 0 + + def test_handles_empty_data(self, templates_dir): + """Should handle empty data dict.""" + (templates_dir / "empty.hbs").write_text("Static content") + + result = render_template(templates_dir, "empty", {}) + + assert result == "Static content" + + +# --- Tests for _fallback_template --- + + +class TestFallbackTemplate: + """Tests for fallback template generation.""" + + def test_iteration_comment_fallback(self): + """Should generate iteration comment fallback.""" + data = { + "iteration": 2, + "feedback_count": 5, + "addressed_count": 4, + "skipped_count": 1, + "commit_sha": "abc123def", + "commit_short": "abc123d", + "files_modified": ["SKILL.md"], + "lines_added": 10, + "lines_removed": 2, + } + + result = _fallback_template("iteration_comment", data) + + assert "Iteration 2" in result + assert "5" in result # feedback count + assert "4" in result # addressed count + assert "abc123d" in result # commit short + + def test_ready_comment_fallback(self): + """Should generate ready comment fallback.""" + data = { + "pr_number": 795, + "skill_path": "components/skills/lang-rust-dev", + "reviewers": ["reviewer1", "reviewer2"], + } + + result = _fallback_template("ready_comment", data) + + assert "Re-Review" in result + assert "@reviewer1" in result + assert "@reviewer2" in result + + def test_skipped_feedback_fallback(self): + """Should generate skipped feedback fallback.""" + data = { + "skipped": [ + {"id": "thread-1", "reason": "Too complex"}, + {"id": "thread-2", "reason": "Needs clarification"}, + ] + } + + result = _fallback_template("skipped_feedback", data) + + assert "Not Addressed" in result + assert "thread-1" in result + assert "Too complex" in result + + def test_unknown_template_fallback(self): + """Should handle unknown template names.""" + result = _fallback_template("unknown_template", {"a": 1, "b": 2}) + + assert "unknown_template" in result + assert "2" in result # data item count + + +# --- Integration tests with actual templates --- + + +class TestActualTemplates: + """Tests with the actual template files.""" + + def test_iteration_comment_template(self): + """Should render the actual iteration_comment template.""" + templates_dir = Path(__file__).parent.parent / "templates" + + if not (templates_dir / "iteration_comment.hbs").exists(): + 
pytest.skip("Template file not found") + + data = { + "iteration": 1, + "feedback_count": 3, + "addressed_count": 2, + "skipped_count": 1, + "success_rate": "67%", + "addressed": [ + {"id": "thread-1", "action": "Added example"}, + {"id": "thread-2", "action": "Fixed typo"}, + ], + "skipped": [ + {"id": "thread-3", "reason": "Needs clarification"}, + ], + "commit_sha": "abc123def456", + "commit_short": "abc123d", + "files_modified": ["SKILL.md", "examples/error.md"], + "lines_added": 25, + "lines_removed": 5, + "timestamp": "2024-01-15T12:00:00Z", + } + + result = render_template(templates_dir, "iteration_comment", data) + + assert "Iteration 1" in result + assert "thread-1" in result + assert "abc123d" in result + assert "SKILL.md" in result + + def test_ready_comment_template(self): + """Should render the actual ready_comment template.""" + templates_dir = Path(__file__).parent.parent / "templates" + + if not (templates_dir / "ready_comment.hbs").exists(): + pytest.skip("Template file not found") + + data = { + "pr_number": 795, + "skill_path": "components/skills/lang-rust-dev", + "reviewers": ["reviewer1"], + "timestamp": "2024-01-15T12:00:00Z", + } + + result = render_template(templates_dir, "ready_comment", data) + + assert "Re-Review" in result or "review" in result.lower() + assert "reviewer1" in result + + +# ============================================================================= +# Stage 10: PR Comment Template Tests +# ============================================================================= + + +class TestFormatSummaryComment: + """Tests for format_summary_comment.""" + + def test_basic_format(self): + """Should format basic summary comment.""" + fix_results = [ + { + "group_id": "g1", + "skipped": False, + "failed": False, + "addressed_locations": [{"file": "a.md"}], + "addressed_thread_ids": ["PRRT_1"], + }, + ] + + comment = format_summary_comment(795, 1, fix_results, "abc123456789") + + assert "✅ Feedback Addressed" in comment + assert "Iteration 1" in comment + assert "abc1234" in comment + assert "g1" in comment + assert "✅ Complete" in comment + assert "1 locations addressed" in comment + + def test_with_skipped_result(self): + """Should show skipped status.""" + fix_results = [ + { + "group_id": "g1", + "skipped": True, + "failed": False, + "reason": "already_complete", + "addressed_locations": [], + "addressed_thread_ids": [], + }, + ] + + comment = format_summary_comment(795, 2, fix_results, "abc123") + + assert "⏭️ Skipped (already_complete)" in comment + + def test_with_failed_result(self): + """Should show failed status.""" + fix_results = [ + { + "group_id": "g1", + "skipped": False, + "failed": True, + "addressed_locations": [], + "addressed_thread_ids": [], + }, + ] + + comment = format_summary_comment(795, 1, fix_results, "abc123") + + assert "❌ Failed" in comment + + def test_includes_attribution(self): + """Should include attribution footer.""" + comment = format_summary_comment(795, 1, [], "abc123") + + assert "skill-pr-addresser" in comment + + +class TestFormatIterationLimitComment: + """Tests for format_iteration_limit_comment.""" + + def test_basic_format(self): + """Should format iteration limit warning.""" + comment = format_iteration_limit_comment(3, 5) + + assert "⚠️ Iteration Limit Reached" in comment + assert "maximum iterations (3)" in comment + assert "**Resolved:** 5 threads" in comment + + def test_includes_suggestions(self): + """Should include suggestions for next steps.""" + comment = format_iteration_limit_comment(3, 0) + + assert 
"--max-iterations" in comment + assert "Manually addressing" in comment + + +class TestFormatErrorComment: + """Tests for format_error_comment.""" + + def test_basic_format(self): + """Should format error comment.""" + comment = format_error_comment("consolidation", "LLM call failed: timeout") + + assert "❌ Processing Failed" in comment + assert "consolidation" in comment + assert "LLM call failed: timeout" in comment + + def test_code_block_formatting(self): + """Should wrap error in code block.""" + comment = format_error_comment("fix", "Error details here") + + assert "```" in comment + + +class TestFormatNoFeedbackComment: + """Tests for format_no_feedback_comment.""" + + def test_basic_format(self): + """Should format no feedback message.""" + comment = format_no_feedback_comment() + + assert "✅ No New Feedback" in comment + + +class TestFormatPartialProgressComment: + """Tests for format_partial_progress_comment.""" + + def test_basic_format(self): + """Should format partial progress.""" + comment = format_partial_progress_comment( + iteration=2, + addressed_count=5, + total_count=10, + pending_groups=["g1", "g2"], + ) + + assert "⏳ Partial Progress" in comment + assert "Iteration 2" in comment + assert "5/10 (50%)" in comment + + def test_with_pending_groups(self): + """Should list pending groups.""" + comment = format_partial_progress_comment( + iteration=1, + addressed_count=2, + total_count=8, + pending_groups=["add_section", "fix_typo"], + ) + + assert "Pending Groups" in comment + assert "add_section" in comment + assert "fix_typo" in comment + + def test_truncates_many_groups(self): + """Should truncate long group list.""" + groups = [f"group_{i}" for i in range(10)] + comment = format_partial_progress_comment( + iteration=1, + addressed_count=0, + total_count=10, + pending_groups=groups, + ) + + assert "group_0" in comment + assert "5 more" in comment + assert "group_9" not in comment diff --git a/.claude/agents/skill-pr-addresser/tests/test_tracing.py b/.claude/agents/skill-pr-addresser/tests/test_tracing.py new file mode 100644 index 0000000..488f716 --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_tracing.py @@ -0,0 +1,172 @@ +"""Tests for the tracing module.""" + +import pytest +from pathlib import Path + +import sys + +# Add agent directory to path +_agent_dir = Path(__file__).parent.parent +if str(_agent_dir) not in sys.path: + sys.path.insert(0, str(_agent_dir)) + +from src.tracing import ( + TracingConfig, + init_tracing, + get_tracer, + span, + traced, + add_event, + set_attribute, + record_subagent_call, + record_iteration, + OTEL_AVAILABLE, +) + + +class TestTracingConfig: + """Tests for TracingConfig class.""" + + def test_default_config(self): + """Should create config with defaults.""" + config = TracingConfig() + + assert config.enabled is False + assert config.endpoint == "localhost:4317" # gRPC (no http://) + assert config.service_name == "skill-pr-addresser" + + def test_custom_config(self): + """Should accept custom values.""" + config = TracingConfig( + enabled=True, + endpoint="custom:4317", + service_name="custom-service", + ) + + assert config.enabled is True + assert config.endpoint == "custom:4317" + + +class TestInitTracing: + """Tests for init_tracing function.""" + + def test_returns_false_when_disabled(self): + """Should return False when tracing is disabled.""" + config = TracingConfig(enabled=False) + result = init_tracing(config) + + assert result is False + + def test_returns_false_when_otel_not_available(self): + """Should 
return False when OTEL not installed.""" + if OTEL_AVAILABLE: + pytest.skip("OTEL is available, can't test unavailable case") + + config = TracingConfig(enabled=True) + result = init_tracing(config) + + assert result is False + + +class TestSpanContextManager: + """Tests for span context manager.""" + + def test_yields_none_when_no_tracer(self): + """Should yield None when tracer not initialized.""" + # Reset tracer state + import src.tracing as tracing_module + + original_tracer = tracing_module._tracer + tracing_module._tracer = None + + try: + with span("test-span") as s: + assert s is None + finally: + tracing_module._tracer = original_tracer + + def test_handles_attributes(self): + """Should not crash with attributes when tracer disabled.""" + with span("test-span", attributes={"key": "value"}) as s: + # Should work without error + pass + + +class TestTracedDecorator: + """Tests for traced decorator.""" + + def test_decorates_function(self): + """Should wrap function without changing behavior.""" + + @traced("test-function") + def sample_function(x): + return x * 2 + + result = sample_function(5) + assert result == 10 + + def test_uses_function_name_by_default(self): + """Should use function name when no name provided.""" + + @traced() + def my_function(): + return "result" + + result = my_function() + assert result == "result" + + def test_propagates_exceptions(self): + """Should propagate exceptions from decorated function.""" + + @traced("failing-function") + def failing_function(): + raise ValueError("Test error") + + with pytest.raises(ValueError, match="Test error"): + failing_function() + + +class TestEventFunctions: + """Tests for event and attribute functions.""" + + def test_add_event_no_crash_when_disabled(self): + """Should not crash when tracing disabled.""" + add_event("test-event", {"key": "value"}) + # Should not raise + + def test_set_attribute_no_crash_when_disabled(self): + """Should not crash when tracing disabled.""" + set_attribute("key", "value") + # Should not raise + + def test_record_subagent_call_no_crash(self): + """Should not crash when tracing disabled.""" + record_subagent_call( + name="feedback-analyzer", + model="haiku", + duration_seconds=1.5, + success=True, + ) + # Should not raise + + def test_record_subagent_call_with_error(self): + """Should handle error parameter.""" + record_subagent_call( + name="feedback-fixer", + model="sonnet", + duration_seconds=2.0, + success=False, + error="Timeout", + ) + # Should not raise + + def test_record_iteration_no_crash(self): + """Should not crash when tracing disabled.""" + record_iteration( + iteration=1, + feedback_count=5, + addressed_count=4, + skipped_count=1, + success_rate=0.8, + ) + # Should not raise diff --git a/.claude/agents/skill-pr-addresser/tests/test_worktree.py b/.claude/agents/skill-pr-addresser/tests/test_worktree.py new file mode 100644 index 0000000..d7f578d --- /dev/null +++ b/.claude/agents/skill-pr-addresser/tests/test_worktree.py @@ -0,0 +1,253 @@ +# tests/test_worktree.py +"""Tests for worktree management and rate limit detection. + +Stage 10 tests for git worktree and GitHub API operations. 
+""" + +import pytest +from pathlib import Path +from tempfile import TemporaryDirectory +from unittest.mock import patch, MagicMock +import sys + +# Add src to path for imports +sys.path.insert(0, str(Path(__file__).parent.parent / "src")) + +from worktree import ( + create_worktree, + verify_worktree_clean, + get_worktree_info, + remove_worktree, + worktree_context, + WorktreeError, + WorktreeInfo, +) +from github_pr import parse_rate_limit_error, RateLimitError + + +# ============================================================================= +# Worktree Tests +# ============================================================================= + + +class TestCreateWorktree: + def test_create_worktree_success(self): + """Should create worktree successfully.""" + with TemporaryDirectory() as tmpdir: + repo_path = Path(tmpdir) / "repo" + worktree_dir = Path(tmpdir) / "worktrees" + + with patch("subprocess.run") as mock_run: + mock_run.return_value = MagicMock(returncode=0) + + result = create_worktree(repo_path, "feature/test", worktree_dir) + + assert result == worktree_dir / "feature-test" + mock_run.assert_called_once() + # Check sanitized path + call_args = mock_run.call_args[0][0] + assert "feature-test" in str(call_args[-2]) + + def test_create_worktree_failure(self): + """Should raise WorktreeError on failure.""" + with TemporaryDirectory() as tmpdir: + repo_path = Path(tmpdir) / "repo" + worktree_dir = Path(tmpdir) / "worktrees" + + with patch("subprocess.run") as mock_run: + mock_run.return_value = MagicMock( + returncode=1, + stderr="fatal: 'feature/test' is already checked out", + ) + + with pytest.raises(WorktreeError) as exc: + create_worktree(repo_path, "feature/test", worktree_dir) + + assert "already checked out" in str(exc.value) + + def test_create_worktree_existing(self): + """Should return existing worktree path.""" + with TemporaryDirectory() as tmpdir: + repo_path = Path(tmpdir) / "repo" + worktree_dir = Path(tmpdir) / "worktrees" + + # Pre-create the worktree directory + (worktree_dir / "feature-test").mkdir(parents=True) + + with patch("subprocess.run") as mock_run: + result = create_worktree(repo_path, "feature/test", worktree_dir) + + assert result == worktree_dir / "feature-test" + mock_run.assert_not_called() + + +class TestVerifyWorktreeClean: + def test_clean_worktree(self): + """Should return True for clean worktree.""" + with patch("subprocess.run") as mock_run: + mock_run.return_value = MagicMock(returncode=0, stdout="") + + result = verify_worktree_clean(Path("/tmp/worktree")) + + assert result is True + + def test_dirty_worktree(self): + """Should return False for dirty worktree.""" + with patch("subprocess.run") as mock_run: + mock_run.return_value = MagicMock( + returncode=0, + stdout=" M SKILL.md\n?? 
new-file.txt\n", + ) + + result = verify_worktree_clean(Path("/tmp/worktree")) + + assert result is False + + def test_error_returns_false(self): + """Should return False on git error.""" + with patch("subprocess.run") as mock_run: + mock_run.return_value = MagicMock(returncode=1) + + result = verify_worktree_clean(Path("/tmp/worktree")) + + assert result is False + + +class TestGetWorktreeInfo: + def test_get_info(self): + """Should return WorktreeInfo with current state.""" + with patch("subprocess.run") as mock_run: + mock_run.side_effect = [ + MagicMock(returncode=0, stdout="feature/test\n"), # branch + MagicMock(returncode=0, stdout="abc123456789\n"), # commit + MagicMock(returncode=0, stdout=""), # clean check + ] + + info = get_worktree_info(Path("/tmp/worktree")) + + assert info.branch == "feature/test" + assert info.commit == "abc12345" # Truncated + assert info.is_clean is True + + +class TestRemoveWorktree: + def test_remove_success(self): + """Should remove worktree successfully.""" + with patch("subprocess.run") as mock_run: + mock_run.return_value = MagicMock(returncode=0) + + remove_worktree(Path("/repo"), Path("/tmp/worktree")) + + mock_run.assert_called_once() + args = mock_run.call_args[0][0] + assert "worktree" in args + assert "remove" in args + + def test_remove_force(self): + """Should pass --force flag when specified.""" + with patch("subprocess.run") as mock_run: + mock_run.return_value = MagicMock(returncode=0) + + remove_worktree(Path("/repo"), Path("/tmp/worktree"), force=True) + + args = mock_run.call_args[0][0] + assert "--force" in args + + +class TestWorktreeContext: + def test_context_manager(self): + """Should manage worktree lifecycle.""" + with patch("worktree.create_worktree") as mock_create: + with patch("worktree.remove_worktree") as mock_remove: + mock_create.return_value = Path("/tmp/worktree") + + with worktree_context( + Path("/repo"), + "feature/test", + Path("/tmp/worktrees"), + cleanup=True, + ) as wt_path: + assert wt_path == Path("/tmp/worktree") + mock_remove.assert_not_called() + + mock_remove.assert_called_once() + + def test_context_manager_no_cleanup(self): + """Should not remove worktree when cleanup=False.""" + with patch("worktree.create_worktree") as mock_create: + with patch("worktree.remove_worktree") as mock_remove: + mock_create.return_value = Path("/tmp/worktree") + + with worktree_context( + Path("/repo"), + "feature/test", + Path("/tmp/worktrees"), + cleanup=False, + ) as wt_path: + pass + + mock_remove.assert_not_called() + + +class TestWorktreeInfo: + def test_to_dict(self): + """Should serialize to dictionary.""" + info = WorktreeInfo( + path=Path("/tmp/worktree"), + branch="feature/test", + commit="abc123", + is_clean=True, + ) + + data = info.to_dict() + assert data["path"] == "/tmp/worktree" + assert data["branch"] == "feature/test" + assert data["commit"] == "abc123" + assert data["is_clean"] is True + + +# ============================================================================= +# Rate Limit Detection Tests +# ============================================================================= + + +class TestRateLimitError: + def test_error_message(self): + """Should format error message.""" + err = RateLimitError(60, "Too many requests") + + assert err.retry_after == 60 + assert "60s" in str(err) + assert "Too many requests" in str(err) + + +class TestParseRateLimitError: + def test_parse_seconds_pattern(self): + """Should parse 'N seconds' pattern.""" + result = parse_rate_limit_error("rate limit exceeded, retry after 
60 seconds") + assert result == 60 + + def test_parse_retry_after_pattern(self): + """Should parse 'retry-after: N' pattern.""" + result = parse_rate_limit_error("retry-after: 30") + assert result == 30 + + def test_parse_wait_pattern(self): + """Should parse 'wait N seconds' pattern.""" + result = parse_rate_limit_error("please wait 120 seconds before trying again") + assert result == 120 + + def test_generic_rate_limit(self): + """Should return default for generic rate limit message.""" + result = parse_rate_limit_error("API rate limit exceeded for user") + assert result == 60 # Default + + def test_not_rate_limit(self): + """Should return None for non-rate-limit error.""" + result = parse_rate_limit_error("Not found: resource does not exist") + assert result is None + + def test_case_insensitive(self): + """Should be case insensitive.""" + result = parse_rate_limit_error("RATE LIMIT exceeded, retry after 45 SECONDS") + assert result == 45 diff --git a/.claude/agents/skill-pr-addresser/uv.lock b/.claude/agents/skill-pr-addresser/uv.lock new file mode 100644 index 0000000..168f75b --- /dev/null +++ b/.claude/agents/skill-pr-addresser/uv.lock @@ -0,0 +1,693 @@ +version = 1 +revision = 3 +requires-python = ">=3.11" +resolution-markers = [ + "python_full_version >= '3.13'", + "python_full_version < '3.13'", +] + +[[package]] +name = "cement" +version = "3.0.14" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/3a/9a/9f444fcb8d372e5fd0791029fdc44162c95c4944b56114e27e366874a984/cement-3.0.14.tar.gz", hash = "sha256:0a8efc10646bd9a68d5cc5d2b69cfa0d9b3c186ce5d268497e3bbfc823dcb525", size = 98676, upload-time = "2025-05-05T16:54:20.704Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/42/34/019ede76aeed47e6056700d9538ad28087a0f1aaec5ee483cf22ece716c4/cement-3.0.14-py3-none-any.whl", hash = "sha256:d20376f8447717998aa7b0bc0384fcad730be580a13df336b326c575e340c43e", size = 138386, upload-time = "2025-05-05T16:54:19.448Z" }, +] + +[[package]] +name = "chevron" +version = "0.14.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/15/1f/ca74b65b19798895d63a6e92874162f44233467c9e7c1ed8afd19016ebe9/chevron-0.14.0.tar.gz", hash = "sha256:87613aafdf6d77b6a90ff073165a61ae5086e21ad49057aa0e53681601800ebf", size = 11440, upload-time = "2021-01-02T22:47:59.233Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/52/93/342cc62a70ab727e093ed98e02a725d85b746345f05d2b5e5034649f4ec8/chevron-0.14.0-py3-none-any.whl", hash = "sha256:fbf996a709f8da2e745ef763f482ce2d311aa817d287593a5b990d6d6e4f0443", size = 11595, upload-time = "2021-01-02T22:47:57.847Z" }, +] + +[[package]] +name = "colorama" +version = "0.4.6" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" }, +] + +[[package]] +name = "colorlog" +version = "6.10.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + 
{ name = "colorama", marker = "sys_platform == 'win32'" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/a2/61/f083b5ac52e505dfc1c624eafbf8c7589a0d7f32daa398d2e7590efa5fda/colorlog-6.10.1.tar.gz", hash = "sha256:eb4ae5cb65fe7fec7773c2306061a8e63e02efc2c72eba9d27b0fa23c94f1321", size = 17162, upload-time = "2025-10-16T16:14:11.978Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/6d/c1/e419ef3723a074172b68aaa89c9f3de486ed4c2399e2dbd8113a4fdcaf9e/colorlog-6.10.1-py3-none-any.whl", hash = "sha256:2d7e8348291948af66122cff006c9f8da6255d224e7cf8e37d8de2df3bad8c9c", size = 11743, upload-time = "2025-10-16T16:14:10.512Z" }, +] + +[[package]] +name = "coverage" +version = "7.13.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/23/f9/e92df5e07f3fc8d4c7f9a0f146ef75446bf870351cd37b788cf5897f8079/coverage-7.13.1.tar.gz", hash = "sha256:b7593fe7eb5feaa3fbb461ac79aac9f9fc0387a5ca8080b0c6fe2ca27b091afd", size = 825862, upload-time = "2025-12-28T15:42:56.969Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b4/9b/77baf488516e9ced25fc215a6f75d803493fc3f6a1a1227ac35697910c2a/coverage-7.13.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1a55d509a1dc5a5b708b5dad3b5334e07a16ad4c2185e27b40e4dba796ab7f88", size = 218755, upload-time = "2025-12-28T15:40:30.812Z" }, + { url = "https://files.pythonhosted.org/packages/d7/cd/7ab01154e6eb79ee2fab76bf4d89e94c6648116557307ee4ebbb85e5c1bf/coverage-7.13.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4d010d080c4888371033baab27e47c9df7d6fb28d0b7b7adf85a4a49be9298b3", size = 219257, upload-time = "2025-12-28T15:40:32.333Z" }, + { url = "https://files.pythonhosted.org/packages/01/d5/b11ef7863ffbbdb509da0023fad1e9eda1c0eaea61a6d2ea5b17d4ac706e/coverage-7.13.1-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:d938b4a840fb1523b9dfbbb454f652967f18e197569c32266d4d13f37244c3d9", size = 249657, upload-time = "2025-12-28T15:40:34.1Z" }, + { url = "https://files.pythonhosted.org/packages/f7/7c/347280982982383621d29b8c544cf497ae07ac41e44b1ca4903024131f55/coverage-7.13.1-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:bf100a3288f9bb7f919b87eb84f87101e197535b9bd0e2c2b5b3179633324fee", size = 251581, upload-time = "2025-12-28T15:40:36.131Z" }, + { url = "https://files.pythonhosted.org/packages/82/f6/ebcfed11036ade4c0d75fa4453a6282bdd225bc073862766eec184a4c643/coverage-7.13.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ef6688db9bf91ba111ae734ba6ef1a063304a881749726e0d3575f5c10a9facf", size = 253691, upload-time = "2025-12-28T15:40:37.626Z" }, + { url = "https://files.pythonhosted.org/packages/02/92/af8f5582787f5d1a8b130b2dcba785fa5e9a7a8e121a0bb2220a6fdbdb8a/coverage-7.13.1-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:0b609fc9cdbd1f02e51f67f51e5aee60a841ef58a68d00d5ee2c0faf357481a3", size = 249799, upload-time = "2025-12-28T15:40:39.47Z" }, + { url = "https://files.pythonhosted.org/packages/24/aa/0e39a2a3b16eebf7f193863323edbff38b6daba711abaaf807d4290cf61a/coverage-7.13.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c43257717611ff5e9a1d79dce8e47566235ebda63328718d9b65dd640bc832ef", size = 251389, upload-time = "2025-12-28T15:40:40.954Z" }, + { url = 
"https://files.pythonhosted.org/packages/73/46/7f0c13111154dc5b978900c0ccee2e2ca239b910890e674a77f1363d483e/coverage-7.13.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:e09fbecc007f7b6afdfb3b07ce5bd9f8494b6856dd4f577d26c66c391b829851", size = 249450, upload-time = "2025-12-28T15:40:42.489Z" }, + { url = "https://files.pythonhosted.org/packages/ac/ca/e80da6769e8b669ec3695598c58eef7ad98b0e26e66333996aee6316db23/coverage-7.13.1-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:a03a4f3a19a189919c7055098790285cc5c5b0b3976f8d227aea39dbf9f8bfdb", size = 249170, upload-time = "2025-12-28T15:40:44.279Z" }, + { url = "https://files.pythonhosted.org/packages/af/18/9e29baabdec1a8644157f572541079b4658199cfd372a578f84228e860de/coverage-7.13.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:3820778ea1387c2b6a818caec01c63adc5b3750211af6447e8dcfb9b6f08dbba", size = 250081, upload-time = "2025-12-28T15:40:45.748Z" }, + { url = "https://files.pythonhosted.org/packages/00/f8/c3021625a71c3b2f516464d322e41636aea381018319050a8114105872ee/coverage-7.13.1-cp311-cp311-win32.whl", hash = "sha256:ff10896fa55167371960c5908150b434b71c876dfab97b69478f22c8b445ea19", size = 221281, upload-time = "2025-12-28T15:40:47.232Z" }, + { url = "https://files.pythonhosted.org/packages/27/56/c216625f453df6e0559ed666d246fcbaaa93f3aa99eaa5080cea1229aa3d/coverage-7.13.1-cp311-cp311-win_amd64.whl", hash = "sha256:a998cc0aeeea4c6d5622a3754da5a493055d2d95186bad877b0a34ea6e6dbe0a", size = 222215, upload-time = "2025-12-28T15:40:49.19Z" }, + { url = "https://files.pythonhosted.org/packages/5c/9a/be342e76f6e531cae6406dc46af0d350586f24d9b67fdfa6daee02df71af/coverage-7.13.1-cp311-cp311-win_arm64.whl", hash = "sha256:fea07c1a39a22614acb762e3fbbb4011f65eedafcb2948feeef641ac78b4ee5c", size = 220886, upload-time = "2025-12-28T15:40:51.067Z" }, + { url = "https://files.pythonhosted.org/packages/ce/8a/87af46cccdfa78f53db747b09f5f9a21d5fc38d796834adac09b30a8ce74/coverage-7.13.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6f34591000f06e62085b1865c9bc5f7858df748834662a51edadfd2c3bfe0dd3", size = 218927, upload-time = "2025-12-28T15:40:52.814Z" }, + { url = "https://files.pythonhosted.org/packages/82/a8/6e22fdc67242a4a5a153f9438d05944553121c8f4ba70cb072af4c41362e/coverage-7.13.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b67e47c5595b9224599016e333f5ec25392597a89d5744658f837d204e16c63e", size = 219288, upload-time = "2025-12-28T15:40:54.262Z" }, + { url = "https://files.pythonhosted.org/packages/d0/0a/853a76e03b0f7c4375e2ca025df45c918beb367f3e20a0a8e91967f6e96c/coverage-7.13.1-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3e7b8bd70c48ffb28461ebe092c2345536fb18bbbf19d287c8913699735f505c", size = 250786, upload-time = "2025-12-28T15:40:56.059Z" }, + { url = "https://files.pythonhosted.org/packages/ea/b4/694159c15c52b9f7ec7adf49d50e5f8ee71d3e9ef38adb4445d13dd56c20/coverage-7.13.1-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:c223d078112e90dc0e5c4e35b98b9584164bea9fbbd221c0b21c5241f6d51b62", size = 253543, upload-time = "2025-12-28T15:40:57.585Z" }, + { url = "https://files.pythonhosted.org/packages/96/b2/7f1f0437a5c855f87e17cf5d0dc35920b6440ff2b58b1ba9788c059c26c8/coverage-7.13.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:794f7c05af0763b1bbd1b9e6eff0e52ad068be3b12cd96c87de037b01390c968", size = 254635, upload-time = "2025-12-28T15:40:59.443Z" }, + { url = 
"https://files.pythonhosted.org/packages/e9/d1/73c3fdb8d7d3bddd9473c9c6a2e0682f09fc3dfbcb9c3f36412a7368bcab/coverage-7.13.1-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:0642eae483cc8c2902e4af7298bf886d605e80f26382124cddc3967c2a3df09e", size = 251202, upload-time = "2025-12-28T15:41:01.328Z" }, + { url = "https://files.pythonhosted.org/packages/66/3c/f0edf75dcc152f145d5598329e864bbbe04ab78660fe3e8e395f9fff010f/coverage-7.13.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:9f5e772ed5fef25b3de9f2008fe67b92d46831bd2bc5bdc5dd6bfd06b83b316f", size = 252566, upload-time = "2025-12-28T15:41:03.319Z" }, + { url = "https://files.pythonhosted.org/packages/17/b3/e64206d3c5f7dcbceafd14941345a754d3dbc78a823a6ed526e23b9cdaab/coverage-7.13.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:45980ea19277dc0a579e432aef6a504fe098ef3a9032ead15e446eb0f1191aee", size = 250711, upload-time = "2025-12-28T15:41:06.411Z" }, + { url = "https://files.pythonhosted.org/packages/dc/ad/28a3eb970a8ef5b479ee7f0c484a19c34e277479a5b70269dc652b730733/coverage-7.13.1-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:e4f18eca6028ffa62adbd185a8f1e1dd242f2e68164dba5c2b74a5204850b4cf", size = 250278, upload-time = "2025-12-28T15:41:08.285Z" }, + { url = "https://files.pythonhosted.org/packages/54/e3/c8f0f1a93133e3e1291ca76cbb63565bd4b5c5df63b141f539d747fff348/coverage-7.13.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f8dca5590fec7a89ed6826fce625595279e586ead52e9e958d3237821fbc750c", size = 252154, upload-time = "2025-12-28T15:41:09.969Z" }, + { url = "https://files.pythonhosted.org/packages/d0/bf/9939c5d6859c380e405b19e736321f1c7d402728792f4c752ad1adcce005/coverage-7.13.1-cp312-cp312-win32.whl", hash = "sha256:ff86d4e85188bba72cfb876df3e11fa243439882c55957184af44a35bd5880b7", size = 221487, upload-time = "2025-12-28T15:41:11.468Z" }, + { url = "https://files.pythonhosted.org/packages/fa/dc/7282856a407c621c2aad74021680a01b23010bb8ebf427cf5eacda2e876f/coverage-7.13.1-cp312-cp312-win_amd64.whl", hash = "sha256:16cc1da46c04fb0fb128b4dc430b78fa2aba8a6c0c9f8eb391fd5103409a6ac6", size = 222299, upload-time = "2025-12-28T15:41:13.386Z" }, + { url = "https://files.pythonhosted.org/packages/10/79/176a11203412c350b3e9578620013af35bcdb79b651eb976f4a4b32044fa/coverage-7.13.1-cp312-cp312-win_arm64.whl", hash = "sha256:8d9bc218650022a768f3775dd7fdac1886437325d8d295d923ebcfef4892ad5c", size = 220941, upload-time = "2025-12-28T15:41:14.975Z" }, + { url = "https://files.pythonhosted.org/packages/a3/a4/e98e689347a1ff1a7f67932ab535cef82eb5e78f32a9e4132e114bbb3a0a/coverage-7.13.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:cb237bfd0ef4d5eb6a19e29f9e528ac67ac3be932ea6b44fb6cc09b9f3ecff78", size = 218951, upload-time = "2025-12-28T15:41:16.653Z" }, + { url = "https://files.pythonhosted.org/packages/32/33/7cbfe2bdc6e2f03d6b240d23dc45fdaf3fd270aaf2d640be77b7f16989ab/coverage-7.13.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:1dcb645d7e34dcbcc96cd7c132b1fc55c39263ca62eb961c064eb3928997363b", size = 219325, upload-time = "2025-12-28T15:41:18.609Z" }, + { url = "https://files.pythonhosted.org/packages/59/f6/efdabdb4929487baeb7cb2a9f7dac457d9356f6ad1b255be283d58b16316/coverage-7.13.1-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3d42df8201e00384736f0df9be2ced39324c3907607d17d50d50116c989d84cd", size = 250309, upload-time = "2025-12-28T15:41:20.629Z" }, + { url = 
"https://files.pythonhosted.org/packages/12/da/91a52516e9d5aea87d32d1523f9cdcf7a35a3b298e6be05d6509ba3cfab2/coverage-7.13.1-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:fa3edde1aa8807de1d05934982416cb3ec46d1d4d91e280bcce7cca01c507992", size = 252907, upload-time = "2025-12-28T15:41:22.257Z" }, + { url = "https://files.pythonhosted.org/packages/75/38/f1ea837e3dc1231e086db1638947e00d264e7e8c41aa8ecacf6e1e0c05f4/coverage-7.13.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9edd0e01a343766add6817bc448408858ba6b489039eaaa2018474e4001651a4", size = 254148, upload-time = "2025-12-28T15:41:23.87Z" }, + { url = "https://files.pythonhosted.org/packages/7f/43/f4f16b881aaa34954ba446318dea6b9ed5405dd725dd8daac2358eda869a/coverage-7.13.1-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:985b7836931d033570b94c94713c6dba5f9d3ff26045f72c3e5dbc5fe3361e5a", size = 250515, upload-time = "2025-12-28T15:41:25.437Z" }, + { url = "https://files.pythonhosted.org/packages/84/34/8cba7f00078bd468ea914134e0144263194ce849ec3baad187ffb6203d1c/coverage-7.13.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ffed1e4980889765c84a5d1a566159e363b71d6b6fbaf0bebc9d3c30bc016766", size = 252292, upload-time = "2025-12-28T15:41:28.459Z" }, + { url = "https://files.pythonhosted.org/packages/8c/a4/cffac66c7652d84ee4ac52d3ccb94c015687d3b513f9db04bfcac2ac800d/coverage-7.13.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:8842af7f175078456b8b17f1b73a0d16a65dcbdc653ecefeb00a56b3c8c298c4", size = 250242, upload-time = "2025-12-28T15:41:30.02Z" }, + { url = "https://files.pythonhosted.org/packages/f4/78/9a64d462263dde416f3c0067efade7b52b52796f489b1037a95b0dc389c9/coverage-7.13.1-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:ccd7a6fca48ca9c131d9b0a2972a581e28b13416fc313fb98b6d24a03ce9a398", size = 250068, upload-time = "2025-12-28T15:41:32.007Z" }, + { url = "https://files.pythonhosted.org/packages/69/c8/a8994f5fece06db7c4a97c8fc1973684e178599b42e66280dded0524ef00/coverage-7.13.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:0403f647055de2609be776965108447deb8e384fe4a553c119e3ff6bfbab4784", size = 251846, upload-time = "2025-12-28T15:41:33.946Z" }, + { url = "https://files.pythonhosted.org/packages/cc/f7/91fa73c4b80305c86598a2d4e54ba22df6bf7d0d97500944af7ef155d9f7/coverage-7.13.1-cp313-cp313-win32.whl", hash = "sha256:549d195116a1ba1e1ae2f5ca143f9777800f6636eab917d4f02b5310d6d73461", size = 221512, upload-time = "2025-12-28T15:41:35.519Z" }, + { url = "https://files.pythonhosted.org/packages/45/0b/0768b4231d5a044da8f75e097a8714ae1041246bb765d6b5563bab456735/coverage-7.13.1-cp313-cp313-win_amd64.whl", hash = "sha256:5899d28b5276f536fcf840b18b61a9fce23cc3aec1d114c44c07fe94ebeaa500", size = 222321, upload-time = "2025-12-28T15:41:37.371Z" }, + { url = "https://files.pythonhosted.org/packages/9b/b8/bdcb7253b7e85157282450262008f1366aa04663f3e3e4c30436f596c3e2/coverage-7.13.1-cp313-cp313-win_arm64.whl", hash = "sha256:868a2fae76dfb06e87291bcbd4dcbcc778a8500510b618d50496e520bd94d9b9", size = 220949, upload-time = "2025-12-28T15:41:39.553Z" }, + { url = "https://files.pythonhosted.org/packages/70/52/f2be52cc445ff75ea8397948c96c1b4ee14f7f9086ea62fc929c5ae7b717/coverage-7.13.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:67170979de0dacac3f3097d02b0ad188d8edcea44ccc44aaa0550af49150c7dc", size = 219643, upload-time = "2025-12-28T15:41:41.567Z" }, + { url = 
"https://files.pythonhosted.org/packages/47/79/c85e378eaa239e2edec0c5523f71542c7793fe3340954eafb0bc3904d32d/coverage-7.13.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:f80e2bb21bfab56ed7405c2d79d34b5dc0bc96c2c1d2a067b643a09fb756c43a", size = 219997, upload-time = "2025-12-28T15:41:43.418Z" }, + { url = "https://files.pythonhosted.org/packages/fe/9b/b1ade8bfb653c0bbce2d6d6e90cc6c254cbb99b7248531cc76253cb4da6d/coverage-7.13.1-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:f83351e0f7dcdb14d7326c3d8d8c4e915fa685cbfdc6281f9470d97a04e9dfe4", size = 261296, upload-time = "2025-12-28T15:41:45.207Z" }, + { url = "https://files.pythonhosted.org/packages/1f/af/ebf91e3e1a2473d523e87e87fd8581e0aa08741b96265730e2d79ce78d8d/coverage-7.13.1-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:bb3f6562e89bad0110afbe64e485aac2462efdce6232cdec7862a095dc3412f6", size = 263363, upload-time = "2025-12-28T15:41:47.163Z" }, + { url = "https://files.pythonhosted.org/packages/c4/8b/fb2423526d446596624ac7fde12ea4262e66f86f5120114c3cfd0bb2befa/coverage-7.13.1-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:77545b5dcda13b70f872c3b5974ac64c21d05e65b1590b441c8560115dc3a0d1", size = 265783, upload-time = "2025-12-28T15:41:49.03Z" }, + { url = "https://files.pythonhosted.org/packages/9b/26/ef2adb1e22674913b89f0fe7490ecadcef4a71fa96f5ced90c60ec358789/coverage-7.13.1-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:a4d240d260a1aed814790bbe1f10a5ff31ce6c21bc78f0da4a1e8268d6c80dbd", size = 260508, upload-time = "2025-12-28T15:41:51.035Z" }, + { url = "https://files.pythonhosted.org/packages/ce/7d/f0f59b3404caf662e7b5346247883887687c074ce67ba453ea08c612b1d5/coverage-7.13.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:d2287ac9360dec3837bfdad969963a5d073a09a85d898bd86bea82aa8876ef3c", size = 263357, upload-time = "2025-12-28T15:41:52.631Z" }, + { url = "https://files.pythonhosted.org/packages/1a/b1/29896492b0b1a047604d35d6fa804f12818fa30cdad660763a5f3159e158/coverage-7.13.1-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:0d2c11f3ea4db66b5cbded23b20185c35066892c67d80ec4be4bab257b9ad1e0", size = 260978, upload-time = "2025-12-28T15:41:54.589Z" }, + { url = "https://files.pythonhosted.org/packages/48/f2/971de1238a62e6f0a4128d37adadc8bb882ee96afbe03ff1570291754629/coverage-7.13.1-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:3fc6a169517ca0d7ca6846c3c5392ef2b9e38896f61d615cb75b9e7134d4ee1e", size = 259877, upload-time = "2025-12-28T15:41:56.263Z" }, + { url = "https://files.pythonhosted.org/packages/6a/fc/0474efcbb590ff8628830e9aaec5f1831594874360e3251f1fdec31d07a3/coverage-7.13.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:d10a2ed46386e850bb3de503a54f9fe8192e5917fcbb143bfef653a9355e9a53", size = 262069, upload-time = "2025-12-28T15:41:58.093Z" }, + { url = "https://files.pythonhosted.org/packages/88/4f/3c159b7953db37a7b44c0eab8a95c37d1aa4257c47b4602c04022d5cb975/coverage-7.13.1-cp313-cp313t-win32.whl", hash = "sha256:75a6f4aa904301dab8022397a22c0039edc1f51e90b83dbd4464b8a38dc87842", size = 222184, upload-time = "2025-12-28T15:41:59.763Z" }, + { url = "https://files.pythonhosted.org/packages/58/a5/6b57d28f81417f9335774f20679d9d13b9a8fb90cd6160957aa3b54a2379/coverage-7.13.1-cp313-cp313t-win_amd64.whl", hash = "sha256:309ef5706e95e62578cda256b97f5e097916a2c26247c287bbe74794e7150df2", size = 223250, upload-time = 
"2025-12-28T15:42:01.52Z" }, + { url = "https://files.pythonhosted.org/packages/81/7c/160796f3b035acfbb58be80e02e484548595aa67e16a6345e7910ace0a38/coverage-7.13.1-cp313-cp313t-win_arm64.whl", hash = "sha256:92f980729e79b5d16d221038dbf2e8f9a9136afa072f9d5d6ed4cb984b126a09", size = 221521, upload-time = "2025-12-28T15:42:03.275Z" }, + { url = "https://files.pythonhosted.org/packages/aa/8e/ba0e597560c6563fc0adb902fda6526df5d4aa73bb10adf0574d03bd2206/coverage-7.13.1-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:97ab3647280d458a1f9adb85244e81587505a43c0c7cff851f5116cd2814b894", size = 218996, upload-time = "2025-12-28T15:42:04.978Z" }, + { url = "https://files.pythonhosted.org/packages/6b/8e/764c6e116f4221dc7aa26c4061181ff92edb9c799adae6433d18eeba7a14/coverage-7.13.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:8f572d989142e0908e6acf57ad1b9b86989ff057c006d13b76c146ec6a20216a", size = 219326, upload-time = "2025-12-28T15:42:06.691Z" }, + { url = "https://files.pythonhosted.org/packages/4f/a6/6130dc6d8da28cdcbb0f2bf8865aeca9b157622f7c0031e48c6cf9a0e591/coverage-7.13.1-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:d72140ccf8a147e94274024ff6fd8fb7811354cf7ef88b1f0a988ebaa5bc774f", size = 250374, upload-time = "2025-12-28T15:42:08.786Z" }, + { url = "https://files.pythonhosted.org/packages/82/2b/783ded568f7cd6b677762f780ad338bf4b4750205860c17c25f7c708995e/coverage-7.13.1-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:d3c9f051b028810f5a87c88e5d6e9af3c0ff32ef62763bf15d29f740453ca909", size = 252882, upload-time = "2025-12-28T15:42:10.515Z" }, + { url = "https://files.pythonhosted.org/packages/cd/b2/9808766d082e6a4d59eb0cc881a57fc1600eb2c5882813eefff8254f71b5/coverage-7.13.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f398ba4df52d30b1763f62eed9de5620dcde96e6f491f4c62686736b155aa6e4", size = 254218, upload-time = "2025-12-28T15:42:12.208Z" }, + { url = "https://files.pythonhosted.org/packages/44/ea/52a985bb447c871cb4d2e376e401116520991b597c85afdde1ea9ef54f2c/coverage-7.13.1-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:132718176cc723026d201e347f800cd1a9e4b62ccd3f82476950834dad501c75", size = 250391, upload-time = "2025-12-28T15:42:14.21Z" }, + { url = "https://files.pythonhosted.org/packages/7f/1d/125b36cc12310718873cfc8209ecfbc1008f14f4f5fa0662aa608e579353/coverage-7.13.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:9e549d642426e3579b3f4b92d0431543b012dcb6e825c91619d4e93b7363c3f9", size = 252239, upload-time = "2025-12-28T15:42:16.292Z" }, + { url = "https://files.pythonhosted.org/packages/6a/16/10c1c164950cade470107f9f14bbac8485f8fb8515f515fca53d337e4a7f/coverage-7.13.1-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:90480b2134999301eea795b3a9dbf606c6fbab1b489150c501da84a959442465", size = 250196, upload-time = "2025-12-28T15:42:18.54Z" }, + { url = "https://files.pythonhosted.org/packages/2a/c6/cd860fac08780c6fd659732f6ced1b40b79c35977c1356344e44d72ba6c4/coverage-7.13.1-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:e825dbb7f84dfa24663dd75835e7257f8882629fc11f03ecf77d84a75134b864", size = 250008, upload-time = "2025-12-28T15:42:20.365Z" }, + { url = "https://files.pythonhosted.org/packages/f0/3a/a8c58d3d38f82a5711e1e0a67268362af48e1a03df27c03072ac30feefcf/coverage-7.13.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:623dcc6d7a7ba450bbdbeedbaa0c42b329bdae16491af2282f12a7e809be7eb9", 
size = 251671, upload-time = "2025-12-28T15:42:22.114Z" }, + { url = "https://files.pythonhosted.org/packages/f0/bc/fd4c1da651d037a1e3d53e8cb3f8182f4b53271ffa9a95a2e211bacc0349/coverage-7.13.1-cp314-cp314-win32.whl", hash = "sha256:6e73ebb44dca5f708dc871fe0b90cf4cff1a13f9956f747cc87b535a840386f5", size = 221777, upload-time = "2025-12-28T15:42:23.919Z" }, + { url = "https://files.pythonhosted.org/packages/4b/50/71acabdc8948464c17e90b5ffd92358579bd0910732c2a1c9537d7536aa6/coverage-7.13.1-cp314-cp314-win_amd64.whl", hash = "sha256:be753b225d159feb397bd0bf91ae86f689bad0da09d3b301478cd39b878ab31a", size = 222592, upload-time = "2025-12-28T15:42:25.619Z" }, + { url = "https://files.pythonhosted.org/packages/f7/c8/a6fb943081bb0cc926499c7907731a6dc9efc2cbdc76d738c0ab752f1a32/coverage-7.13.1-cp314-cp314-win_arm64.whl", hash = "sha256:228b90f613b25ba0019361e4ab81520b343b622fc657daf7e501c4ed6a2366c0", size = 221169, upload-time = "2025-12-28T15:42:27.629Z" }, + { url = "https://files.pythonhosted.org/packages/16/61/d5b7a0a0e0e40d62e59bc8c7aa1afbd86280d82728ba97f0673b746b78e2/coverage-7.13.1-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:60cfb538fe9ef86e5b2ab0ca8fc8d62524777f6c611dcaf76dc16fbe9b8e698a", size = 219730, upload-time = "2025-12-28T15:42:29.306Z" }, + { url = "https://files.pythonhosted.org/packages/a3/2c/8881326445fd071bb49514d1ce97d18a46a980712b51fee84f9ab42845b4/coverage-7.13.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:57dfc8048c72ba48a8c45e188d811e5efd7e49b387effc8fb17e97936dde5bf6", size = 220001, upload-time = "2025-12-28T15:42:31.319Z" }, + { url = "https://files.pythonhosted.org/packages/b5/d7/50de63af51dfa3a7f91cc37ad8fcc1e244b734232fbc8b9ab0f3c834a5cd/coverage-7.13.1-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3f2f725aa3e909b3c5fdb8192490bdd8e1495e85906af74fe6e34a2a77ba0673", size = 261370, upload-time = "2025-12-28T15:42:32.992Z" }, + { url = "https://files.pythonhosted.org/packages/e1/2c/d31722f0ec918fd7453b2758312729f645978d212b410cd0f7c2aed88a94/coverage-7.13.1-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:9ee68b21909686eeb21dfcba2c3b81fee70dcf38b140dcd5aa70680995fa3aa5", size = 263485, upload-time = "2025-12-28T15:42:34.759Z" }, + { url = "https://files.pythonhosted.org/packages/fa/7a/2c114fa5c5fc08ba0777e4aec4c97e0b4a1afcb69c75f1f54cff78b073ab/coverage-7.13.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:724b1b270cb13ea2e6503476e34541a0b1f62280bc997eab443f87790202033d", size = 265890, upload-time = "2025-12-28T15:42:36.517Z" }, + { url = "https://files.pythonhosted.org/packages/65/d9/f0794aa1c74ceabc780fe17f6c338456bbc4e96bd950f2e969f48ac6fb20/coverage-7.13.1-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:916abf1ac5cf7eb16bc540a5bf75c71c43a676f5c52fcb9fe75a2bd75fb944e8", size = 260445, upload-time = "2025-12-28T15:42:38.646Z" }, + { url = "https://files.pythonhosted.org/packages/49/23/184b22a00d9bb97488863ced9454068c79e413cb23f472da6cbddc6cfc52/coverage-7.13.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:776483fd35b58d8afe3acbd9988d5de592ab6da2d2a865edfdbc9fdb43e7c486", size = 263357, upload-time = "2025-12-28T15:42:40.788Z" }, + { url = "https://files.pythonhosted.org/packages/7d/bd/58af54c0c9199ea4190284f389005779d7daf7bf3ce40dcd2d2b2f96da69/coverage-7.13.1-cp314-cp314t-musllinux_1_2_i686.whl", hash = 
"sha256:b6f3b96617e9852703f5b633ea01315ca45c77e879584f283c44127f0f1ec564", size = 260959, upload-time = "2025-12-28T15:42:42.808Z" }, + { url = "https://files.pythonhosted.org/packages/4b/2a/6839294e8f78a4891bf1df79d69c536880ba2f970d0ff09e7513d6e352e9/coverage-7.13.1-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:bd63e7b74661fed317212fab774e2a648bc4bb09b35f25474f8e3325d2945cd7", size = 259792, upload-time = "2025-12-28T15:42:44.818Z" }, + { url = "https://files.pythonhosted.org/packages/ba/c3/528674d4623283310ad676c5af7414b9850ab6d55c2300e8aa4b945ec554/coverage-7.13.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:933082f161bbb3e9f90d00990dc956120f608cdbcaeea15c4d897f56ef4fe416", size = 262123, upload-time = "2025-12-28T15:42:47.108Z" }, + { url = "https://files.pythonhosted.org/packages/06/c5/8c0515692fb4c73ac379d8dc09b18eaf0214ecb76ea6e62467ba7a1556ff/coverage-7.13.1-cp314-cp314t-win32.whl", hash = "sha256:18be793c4c87de2965e1c0f060f03d9e5aff66cfeae8e1dbe6e5b88056ec153f", size = 222562, upload-time = "2025-12-28T15:42:49.144Z" }, + { url = "https://files.pythonhosted.org/packages/05/0e/c0a0c4678cb30dac735811db529b321d7e1c9120b79bd728d4f4d6b010e9/coverage-7.13.1-cp314-cp314t-win_amd64.whl", hash = "sha256:0e42e0ec0cd3e0d851cb3c91f770c9301f48647cb2877cb78f74bdaa07639a79", size = 223670, upload-time = "2025-12-28T15:42:51.218Z" }, + { url = "https://files.pythonhosted.org/packages/f5/5f/b177aa0011f354abf03a8f30a85032686d290fdeed4222b27d36b4372a50/coverage-7.13.1-cp314-cp314t-win_arm64.whl", hash = "sha256:eaecf47ef10c72ece9a2a92118257da87e460e113b83cc0d2905cbbe931792b4", size = 221707, upload-time = "2025-12-28T15:42:53.034Z" }, + { url = "https://files.pythonhosted.org/packages/cc/48/d9f421cb8da5afaa1a64570d9989e00fb7955e6acddc5a12979f7666ef60/coverage-7.13.1-py3-none-any.whl", hash = "sha256:2016745cb3ba554469d02819d78958b571792bb68e31302610e898f80dd3a573", size = 210722, upload-time = "2025-12-28T15:42:54.901Z" }, +] + +[package.optional-dependencies] +toml = [ + { name = "tomli", marker = "python_full_version <= '3.11'" }, +] + +[[package]] +name = "googleapis-common-protos" +version = "1.72.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "protobuf" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/e5/7b/adfd75544c415c487b33061fe7ae526165241c1ea133f9a9125a56b39fd8/googleapis_common_protos-1.72.0.tar.gz", hash = "sha256:e55a601c1b32b52d7a3e65f43563e2aa61bcd737998ee672ac9b951cd49319f5", size = 147433, upload-time = "2025-11-06T18:29:24.087Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/c4/ab/09169d5a4612a5f92490806649ac8d41e3ec9129c636754575b3553f4ea4/googleapis_common_protos-1.72.0-py3-none-any.whl", hash = "sha256:4299c5a82d5ae1a9702ada957347726b167f9f8d1fc352477702a1e851ff4038", size = 297515, upload-time = "2025-11-06T18:29:13.14Z" }, +] + +[[package]] +name = "grpcio" +version = "1.76.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/b6/e0/318c1ce3ae5a17894d5791e87aea147587c9e702f24122cc7a5c8bbaeeb1/grpcio-1.76.0.tar.gz", hash = "sha256:7be78388d6da1a25c0d5ec506523db58b18be22d9c37d8d3a32c08be4987bd73", size = 12785182, upload-time = "2025-10-21T16:23:12.106Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a0/00/8163a1beeb6971f66b4bbe6ac9457b97948beba8dd2fc8e1281dce7f79ec/grpcio-1.76.0-cp311-cp311-linux_armv7l.whl", hash = 
"sha256:2e1743fbd7f5fa713a1b0a8ac8ebabf0ec980b5d8809ec358d488e273b9cf02a", size = 5843567, upload-time = "2025-10-21T16:20:52.829Z" }, + { url = "https://files.pythonhosted.org/packages/10/c1/934202f5cf335e6d852530ce14ddb0fef21be612ba9ecbbcbd4d748ca32d/grpcio-1.76.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:a8c2cf1209497cf659a667d7dea88985e834c24b7c3b605e6254cbb5076d985c", size = 11848017, upload-time = "2025-10-21T16:20:56.705Z" }, + { url = "https://files.pythonhosted.org/packages/11/0b/8dec16b1863d74af6eb3543928600ec2195af49ca58b16334972f6775663/grpcio-1.76.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:08caea849a9d3c71a542827d6df9d5a69067b0a1efbea8a855633ff5d9571465", size = 6412027, upload-time = "2025-10-21T16:20:59.3Z" }, + { url = "https://files.pythonhosted.org/packages/d7/64/7b9e6e7ab910bea9d46f2c090380bab274a0b91fb0a2fe9b0cd399fffa12/grpcio-1.76.0-cp311-cp311-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:f0e34c2079d47ae9f6188211db9e777c619a21d4faba6977774e8fa43b085e48", size = 7075913, upload-time = "2025-10-21T16:21:01.645Z" }, + { url = "https://files.pythonhosted.org/packages/68/86/093c46e9546073cefa789bd76d44c5cb2abc824ca62af0c18be590ff13ba/grpcio-1.76.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8843114c0cfce61b40ad48df65abcfc00d4dba82eae8718fab5352390848c5da", size = 6615417, upload-time = "2025-10-21T16:21:03.844Z" }, + { url = "https://files.pythonhosted.org/packages/f7/b6/5709a3a68500a9c03da6fb71740dcdd5ef245e39266461a03f31a57036d8/grpcio-1.76.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8eddfb4d203a237da6f3cc8a540dad0517d274b5a1e9e636fd8d2c79b5c1d397", size = 7199683, upload-time = "2025-10-21T16:21:06.195Z" }, + { url = "https://files.pythonhosted.org/packages/91/d3/4b1f2bf16ed52ce0b508161df3a2d186e4935379a159a834cb4a7d687429/grpcio-1.76.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:32483fe2aab2c3794101c2a159070584e5db11d0aa091b2c0ea9c4fc43d0d749", size = 8163109, upload-time = "2025-10-21T16:21:08.498Z" }, + { url = "https://files.pythonhosted.org/packages/5c/61/d9043f95f5f4cf085ac5dd6137b469d41befb04bd80280952ffa2a4c3f12/grpcio-1.76.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:dcfe41187da8992c5f40aa8c5ec086fa3672834d2be57a32384c08d5a05b4c00", size = 7626676, upload-time = "2025-10-21T16:21:10.693Z" }, + { url = "https://files.pythonhosted.org/packages/36/95/fd9a5152ca02d8881e4dd419cdd790e11805979f499a2e5b96488b85cf27/grpcio-1.76.0-cp311-cp311-win32.whl", hash = "sha256:2107b0c024d1b35f4083f11245c0e23846ae64d02f40b2b226684840260ed054", size = 3997688, upload-time = "2025-10-21T16:21:12.746Z" }, + { url = "https://files.pythonhosted.org/packages/60/9c/5c359c8d4c9176cfa3c61ecd4efe5affe1f38d9bae81e81ac7186b4c9cc8/grpcio-1.76.0-cp311-cp311-win_amd64.whl", hash = "sha256:522175aba7af9113c48ec10cc471b9b9bd4f6ceb36aeb4544a8e2c80ed9d252d", size = 4709315, upload-time = "2025-10-21T16:21:15.26Z" }, + { url = "https://files.pythonhosted.org/packages/bf/05/8e29121994b8d959ffa0afd28996d452f291b48cfc0875619de0bde2c50c/grpcio-1.76.0-cp312-cp312-linux_armv7l.whl", hash = "sha256:81fd9652b37b36f16138611c7e884eb82e0cec137c40d3ef7c3f9b3ed00f6ed8", size = 5799718, upload-time = "2025-10-21T16:21:17.939Z" }, + { url = "https://files.pythonhosted.org/packages/d9/75/11d0e66b3cdf998c996489581bdad8900db79ebd83513e45c19548f1cba4/grpcio-1.76.0-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:04bbe1bfe3a68bbfd4e52402ab7d4eb59d72d02647ae2042204326cf4bbad280", size 
= 11825627, upload-time = "2025-10-21T16:21:20.466Z" }, + { url = "https://files.pythonhosted.org/packages/28/50/2f0aa0498bc188048f5d9504dcc5c2c24f2eb1a9337cd0fa09a61a2e75f0/grpcio-1.76.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d388087771c837cdb6515539f43b9d4bf0b0f23593a24054ac16f7a960be16f4", size = 6359167, upload-time = "2025-10-21T16:21:23.122Z" }, + { url = "https://files.pythonhosted.org/packages/66/e5/bbf0bb97d29ede1d59d6588af40018cfc345b17ce979b7b45424628dc8bb/grpcio-1.76.0-cp312-cp312-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:9f8f757bebaaea112c00dba718fc0d3260052ce714e25804a03f93f5d1c6cc11", size = 7044267, upload-time = "2025-10-21T16:21:25.995Z" }, + { url = "https://files.pythonhosted.org/packages/f5/86/f6ec2164f743d9609691115ae8ece098c76b894ebe4f7c94a655c6b03e98/grpcio-1.76.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:980a846182ce88c4f2f7e2c22c56aefd515daeb36149d1c897f83cf57999e0b6", size = 6573963, upload-time = "2025-10-21T16:21:28.631Z" }, + { url = "https://files.pythonhosted.org/packages/60/bc/8d9d0d8505feccfdf38a766d262c71e73639c165b311c9457208b56d92ae/grpcio-1.76.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f92f88e6c033db65a5ae3d97905c8fea9c725b63e28d5a75cb73b49bda5024d8", size = 7164484, upload-time = "2025-10-21T16:21:30.837Z" }, + { url = "https://files.pythonhosted.org/packages/67/e6/5d6c2fc10b95edf6df9b8f19cf10a34263b7fd48493936fffd5085521292/grpcio-1.76.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:4baf3cbe2f0be3289eb68ac8ae771156971848bb8aaff60bad42005539431980", size = 8127777, upload-time = "2025-10-21T16:21:33.577Z" }, + { url = "https://files.pythonhosted.org/packages/3f/c8/dce8ff21c86abe025efe304d9e31fdb0deaaa3b502b6a78141080f206da0/grpcio-1.76.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:615ba64c208aaceb5ec83bfdce7728b80bfeb8be97562944836a7a0a9647d882", size = 7594014, upload-time = "2025-10-21T16:21:41.882Z" }, + { url = "https://files.pythonhosted.org/packages/e0/42/ad28191ebf983a5d0ecef90bab66baa5a6b18f2bfdef9d0a63b1973d9f75/grpcio-1.76.0-cp312-cp312-win32.whl", hash = "sha256:45d59a649a82df5718fd9527ce775fd66d1af35e6d31abdcdc906a49c6822958", size = 3984750, upload-time = "2025-10-21T16:21:44.006Z" }, + { url = "https://files.pythonhosted.org/packages/9e/00/7bd478cbb851c04a48baccaa49b75abaa8e4122f7d86da797500cccdd771/grpcio-1.76.0-cp312-cp312-win_amd64.whl", hash = "sha256:c088e7a90b6017307f423efbb9d1ba97a22aa2170876223f9709e9d1de0b5347", size = 4704003, upload-time = "2025-10-21T16:21:46.244Z" }, + { url = "https://files.pythonhosted.org/packages/fc/ed/71467ab770effc9e8cef5f2e7388beb2be26ed642d567697bb103a790c72/grpcio-1.76.0-cp313-cp313-linux_armv7l.whl", hash = "sha256:26ef06c73eb53267c2b319f43e6634c7556ea37672029241a056629af27c10e2", size = 5807716, upload-time = "2025-10-21T16:21:48.475Z" }, + { url = "https://files.pythonhosted.org/packages/2c/85/c6ed56f9817fab03fa8a111ca91469941fb514e3e3ce6d793cb8f1e1347b/grpcio-1.76.0-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:45e0111e73f43f735d70786557dc38141185072d7ff8dc1829d6a77ac1471468", size = 11821522, upload-time = "2025-10-21T16:21:51.142Z" }, + { url = "https://files.pythonhosted.org/packages/ac/31/2b8a235ab40c39cbc141ef647f8a6eb7b0028f023015a4842933bc0d6831/grpcio-1.76.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:83d57312a58dcfe2a3a0f9d1389b299438909a02db60e2f2ea2ae2d8034909d3", size = 6362558, upload-time = "2025-10-21T16:21:54.213Z" }, 
+ { url = "https://files.pythonhosted.org/packages/bd/64/9784eab483358e08847498ee56faf8ff6ea8e0a4592568d9f68edc97e9e9/grpcio-1.76.0-cp313-cp313-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:3e2a27c89eb9ac3d81ec8835e12414d73536c6e620355d65102503064a4ed6eb", size = 7049990, upload-time = "2025-10-21T16:21:56.476Z" }, + { url = "https://files.pythonhosted.org/packages/2b/94/8c12319a6369434e7a184b987e8e9f3b49a114c489b8315f029e24de4837/grpcio-1.76.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:61f69297cba3950a524f61c7c8ee12e55c486cb5f7db47ff9dcee33da6f0d3ae", size = 6575387, upload-time = "2025-10-21T16:21:59.051Z" }, + { url = "https://files.pythonhosted.org/packages/15/0f/f12c32b03f731f4a6242f771f63039df182c8b8e2cf8075b245b409259d4/grpcio-1.76.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6a15c17af8839b6801d554263c546c69c4d7718ad4321e3166175b37eaacca77", size = 7166668, upload-time = "2025-10-21T16:22:02.049Z" }, + { url = "https://files.pythonhosted.org/packages/ff/2d/3ec9ce0c2b1d92dd59d1c3264aaec9f0f7c817d6e8ac683b97198a36ed5a/grpcio-1.76.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:25a18e9810fbc7e7f03ec2516addc116a957f8cbb8cbc95ccc80faa072743d03", size = 8124928, upload-time = "2025-10-21T16:22:04.984Z" }, + { url = "https://files.pythonhosted.org/packages/1a/74/fd3317be5672f4856bcdd1a9e7b5e17554692d3db9a3b273879dc02d657d/grpcio-1.76.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:931091142fd8cc14edccc0845a79248bc155425eee9a98b2db2ea4f00a235a42", size = 7589983, upload-time = "2025-10-21T16:22:07.881Z" }, + { url = "https://files.pythonhosted.org/packages/45/bb/ca038cf420f405971f19821c8c15bcbc875505f6ffadafe9ffd77871dc4c/grpcio-1.76.0-cp313-cp313-win32.whl", hash = "sha256:5e8571632780e08526f118f74170ad8d50fb0a48c23a746bef2a6ebade3abd6f", size = 3984727, upload-time = "2025-10-21T16:22:10.032Z" }, + { url = "https://files.pythonhosted.org/packages/41/80/84087dc56437ced7cdd4b13d7875e7439a52a261e3ab4e06488ba6173b0a/grpcio-1.76.0-cp313-cp313-win_amd64.whl", hash = "sha256:f9f7bd5faab55f47231ad8dba7787866b69f5e93bc306e3915606779bbfb4ba8", size = 4702799, upload-time = "2025-10-21T16:22:12.709Z" }, + { url = "https://files.pythonhosted.org/packages/b4/46/39adac80de49d678e6e073b70204091e76631e03e94928b9ea4ecf0f6e0e/grpcio-1.76.0-cp314-cp314-linux_armv7l.whl", hash = "sha256:ff8a59ea85a1f2191a0ffcc61298c571bc566332f82e5f5be1b83c9d8e668a62", size = 5808417, upload-time = "2025-10-21T16:22:15.02Z" }, + { url = "https://files.pythonhosted.org/packages/9c/f5/a4531f7fb8b4e2a60b94e39d5d924469b7a6988176b3422487be61fe2998/grpcio-1.76.0-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:06c3d6b076e7b593905d04fdba6a0525711b3466f43b3400266f04ff735de0cd", size = 11828219, upload-time = "2025-10-21T16:22:17.954Z" }, + { url = "https://files.pythonhosted.org/packages/4b/1c/de55d868ed7a8bd6acc6b1d6ddc4aa36d07a9f31d33c912c804adb1b971b/grpcio-1.76.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fd5ef5932f6475c436c4a55e4336ebbe47bd3272be04964a03d316bbf4afbcbc", size = 6367826, upload-time = "2025-10-21T16:22:20.721Z" }, + { url = "https://files.pythonhosted.org/packages/59/64/99e44c02b5adb0ad13ab3adc89cb33cb54bfa90c74770f2607eea629b86f/grpcio-1.76.0-cp314-cp314-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:b331680e46239e090f5b3cead313cc772f6caa7d0fc8de349337563125361a4a", size = 7049550, upload-time = "2025-10-21T16:22:23.637Z" }, + { url = 
"https://files.pythonhosted.org/packages/43/28/40a5be3f9a86949b83e7d6a2ad6011d993cbe9b6bd27bea881f61c7788b6/grpcio-1.76.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2229ae655ec4e8999599469559e97630185fdd53ae1e8997d147b7c9b2b72cba", size = 6575564, upload-time = "2025-10-21T16:22:26.016Z" }, + { url = "https://files.pythonhosted.org/packages/4b/a9/1be18e6055b64467440208a8559afac243c66a8b904213af6f392dc2212f/grpcio-1.76.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:490fa6d203992c47c7b9e4a9d39003a0c2bcc1c9aa3c058730884bbbb0ee9f09", size = 7176236, upload-time = "2025-10-21T16:22:28.362Z" }, + { url = "https://files.pythonhosted.org/packages/0f/55/dba05d3fcc151ce6e81327541d2cc8394f442f6b350fead67401661bf041/grpcio-1.76.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:479496325ce554792dba6548fae3df31a72cef7bad71ca2e12b0e58f9b336bfc", size = 8125795, upload-time = "2025-10-21T16:22:31.075Z" }, + { url = "https://files.pythonhosted.org/packages/4a/45/122df922d05655f63930cf42c9e3f72ba20aadb26c100ee105cad4ce4257/grpcio-1.76.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:1c9b93f79f48b03ada57ea24725d83a30284a012ec27eab2cf7e50a550cbbbcc", size = 7592214, upload-time = "2025-10-21T16:22:33.831Z" }, + { url = "https://files.pythonhosted.org/packages/4a/6e/0b899b7f6b66e5af39e377055fb4a6675c9ee28431df5708139df2e93233/grpcio-1.76.0-cp314-cp314-win32.whl", hash = "sha256:747fa73efa9b8b1488a95d0ba1039c8e2dca0f741612d80415b1e1c560febf4e", size = 4062961, upload-time = "2025-10-21T16:22:36.468Z" }, + { url = "https://files.pythonhosted.org/packages/19/41/0b430b01a2eb38ee887f88c1f07644a1df8e289353b78e82b37ef988fb64/grpcio-1.76.0-cp314-cp314-win_amd64.whl", hash = "sha256:922fa70ba549fce362d2e2871ab542082d66e2aaf0c19480ea453905b01f384e", size = 4834462, upload-time = "2025-10-21T16:22:39.772Z" }, +] + +[[package]] +name = "importlib-metadata" +version = "8.7.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "zipp" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/f3/49/3b30cad09e7771a4982d9975a8cbf64f00d4a1ececb53297f1d9a7be1b10/importlib_metadata-8.7.1.tar.gz", hash = "sha256:49fef1ae6440c182052f407c8d34a68f72efc36db9ca90dc0113398f2fdde8bb", size = 57107, upload-time = "2025-12-21T10:00:19.278Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/fa/5e/f8e9a1d23b9c20a551a8a02ea3637b4642e22c2626e3a13a9a29cdea99eb/importlib_metadata-8.7.1-py3-none-any.whl", hash = "sha256:5a1f80bf1daa489495071efbb095d75a634cf28a8bc299581244063b53176151", size = 27865, upload-time = "2025-12-21T10:00:18.329Z" }, +] + +[[package]] +name = "iniconfig" +version = "2.3.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" }, +] + +[[package]] +name = "linkify-it-py" +version = "2.0.3" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "uc-micro-py" }, +] +sdist = { url = 
"https://files.pythonhosted.org/packages/2a/ae/bb56c6828e4797ba5a4821eec7c43b8bf40f69cda4d4f5f8c8a2810ec96a/linkify-it-py-2.0.3.tar.gz", hash = "sha256:68cda27e162e9215c17d786649d1da0021a451bdc436ef9e0fa0ba5234b9b048", size = 27946, upload-time = "2024-02-04T14:48:04.179Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/04/1e/b832de447dee8b582cac175871d2f6c3d5077cc56d5575cadba1fd1cccfa/linkify_it_py-2.0.3-py3-none-any.whl", hash = "sha256:6bcbc417b0ac14323382aef5c5192c0075bf8a9d6b41820a2b66371eac6b6d79", size = 19820, upload-time = "2024-02-04T14:48:02.496Z" }, +] + +[[package]] +name = "markdown-it-py" +version = "4.0.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "mdurl" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/5b/f5/4ec618ed16cc4f8fb3b701563655a69816155e79e24a17b651541804721d/markdown_it_py-4.0.0.tar.gz", hash = "sha256:cb0a2b4aa34f932c007117b194e945bd74e0ec24133ceb5bac59009cda1cb9f3", size = 73070, upload-time = "2025-08-11T12:57:52.854Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/94/54/e7d793b573f298e1c9013b8c4dade17d481164aa517d1d7148619c2cedbf/markdown_it_py-4.0.0-py3-none-any.whl", hash = "sha256:87327c59b172c5011896038353a81343b6754500a08cd7a4973bb48c6d578147", size = 87321, upload-time = "2025-08-11T12:57:51.923Z" }, +] + +[package.optional-dependencies] +linkify = [ + { name = "linkify-it-py" }, +] + +[[package]] +name = "mdit-py-plugins" +version = "0.5.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "markdown-it-py" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/b2/fd/a756d36c0bfba5f6e39a1cdbdbfdd448dc02692467d83816dff4592a1ebc/mdit_py_plugins-0.5.0.tar.gz", hash = "sha256:f4918cb50119f50446560513a8e311d574ff6aaed72606ddae6d35716fe809c6", size = 44655, upload-time = "2025-08-11T07:25:49.083Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/fb/86/dd6e5db36df29e76c7a7699123569a4a18c1623ce68d826ed96c62643cae/mdit_py_plugins-0.5.0-py3-none-any.whl", hash = "sha256:07a08422fc1936a5d26d146759e9155ea466e842f5ab2f7d2266dd084c8dab1f", size = 57205, upload-time = "2025-08-11T07:25:47.597Z" }, +] + +[[package]] +name = "mdurl" +version = "0.1.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/d6/54/cfe61301667036ec958cb99bd3efefba235e65cdeb9c84d24a8293ba1d90/mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba", size = 8729, upload-time = "2022-08-14T12:40:10.846Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b3/38/89ba8ad64ae25be8de66a6d463314cf1eb366222074cfda9ee839c56a4b4/mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8", size = 9979, upload-time = "2022-08-14T12:40:09.779Z" }, +] + +[[package]] +name = "opentelemetry-api" +version = "1.39.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "importlib-metadata" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/97/b9/3161be15bb8e3ad01be8be5a968a9237c3027c5be504362ff800fca3e442/opentelemetry_api-1.39.1.tar.gz", hash = "sha256:fbde8c80e1b937a2c61f20347e91c0c18a1940cecf012d62e65a7caf08967c9c", size = 65767, upload-time = "2025-12-11T13:32:39.182Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/cf/df/d3f1ddf4bb4cb50ed9b1139cc7b1c54c34a1e7ce8fd1b9a37c0d1551a6bd/opentelemetry_api-1.39.1-py3-none-any.whl", hash = "sha256:2edd8463432a7f8443edce90972169b195e7d6a05500cd29e6d13898187c9950", size = 66356, upload-time = "2025-12-11T13:32:17.304Z" }, +] + +[[package]] +name = "opentelemetry-exporter-otlp-proto-common" +version = "1.39.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "opentelemetry-proto" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/e9/9d/22d241b66f7bbde88a3bfa6847a351d2c46b84de23e71222c6aae25c7050/opentelemetry_exporter_otlp_proto_common-1.39.1.tar.gz", hash = "sha256:763370d4737a59741c89a67b50f9e39271639ee4afc999dadfe768541c027464", size = 20409, upload-time = "2025-12-11T13:32:40.885Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/8c/02/ffc3e143d89a27ac21fd557365b98bd0653b98de8a101151d5805b5d4c33/opentelemetry_exporter_otlp_proto_common-1.39.1-py3-none-any.whl", hash = "sha256:08f8a5862d64cc3435105686d0216c1365dc5701f86844a8cd56597d0c764fde", size = 18366, upload-time = "2025-12-11T13:32:20.2Z" }, +] + +[[package]] +name = "opentelemetry-exporter-otlp-proto-grpc" +version = "1.39.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "googleapis-common-protos" }, + { name = "grpcio" }, + { name = "opentelemetry-api" }, + { name = "opentelemetry-exporter-otlp-proto-common" }, + { name = "opentelemetry-proto" }, + { name = "opentelemetry-sdk" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/53/48/b329fed2c610c2c32c9366d9dc597202c9d1e58e631c137ba15248d8850f/opentelemetry_exporter_otlp_proto_grpc-1.39.1.tar.gz", hash = "sha256:772eb1c9287485d625e4dbe9c879898e5253fea111d9181140f51291b5fec3ad", size = 24650, upload-time = "2025-12-11T13:32:41.429Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/81/a3/cc9b66575bd6597b98b886a2067eea2693408d2d5f39dad9ab7fc264f5f3/opentelemetry_exporter_otlp_proto_grpc-1.39.1-py3-none-any.whl", hash = "sha256:fa1c136a05c7e9b4c09f739469cbdb927ea20b34088ab1d959a849b5cc589c18", size = 19766, upload-time = "2025-12-11T13:32:21.027Z" }, +] + +[[package]] +name = "opentelemetry-proto" +version = "1.39.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "protobuf" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/49/1d/f25d76d8260c156c40c97c9ed4511ec0f9ce353f8108ca6e7561f82a06b2/opentelemetry_proto-1.39.1.tar.gz", hash = "sha256:6c8e05144fc0d3ed4d22c2289c6b126e03bcd0e6a7da0f16cedd2e1c2772e2c8", size = 46152, upload-time = "2025-12-11T13:32:48.681Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/51/95/b40c96a7b5203005a0b03d8ce8cd212ff23f1793d5ba289c87a097571b18/opentelemetry_proto-1.39.1-py3-none-any.whl", hash = "sha256:22cdc78efd3b3765d09e68bfbd010d4fc254c9818afd0b6b423387d9dee46007", size = 72535, upload-time = "2025-12-11T13:32:33.866Z" }, +] + +[[package]] +name = "opentelemetry-sdk" +version = "1.39.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "opentelemetry-api" }, + { name = "opentelemetry-semantic-conventions" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/eb/fb/c76080c9ba07e1e8235d24cdcc4d125ef7aa3edf23eb4e497c2e50889adc/opentelemetry_sdk-1.39.1.tar.gz", hash = "sha256:cf4d4563caf7bff906c9f7967e2be22d0d6b349b908be0d90fb21c8e9c995cc6", size = 171460, upload-time = "2025-12-11T13:32:49.369Z" } 
+wheels = [ + { url = "https://files.pythonhosted.org/packages/7c/98/e91cf858f203d86f4eccdf763dcf01cf03f1dae80c3750f7e635bfa206b6/opentelemetry_sdk-1.39.1-py3-none-any.whl", hash = "sha256:4d5482c478513ecb0a5d938dcc61394e647066e0cc2676bee9f3af3f3f45f01c", size = 132565, upload-time = "2025-12-11T13:32:35.069Z" }, +] + +[[package]] +name = "opentelemetry-semantic-conventions" +version = "0.60b1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "opentelemetry-api" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/91/df/553f93ed38bf22f4b999d9be9c185adb558982214f33eae539d3b5cd0858/opentelemetry_semantic_conventions-0.60b1.tar.gz", hash = "sha256:87c228b5a0669b748c76d76df6c364c369c28f1c465e50f661e39737e84bc953", size = 137935, upload-time = "2025-12-11T13:32:50.487Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7a/5e/5958555e09635d09b75de3c4f8b9cae7335ca545d77392ffe7331534c402/opentelemetry_semantic_conventions-0.60b1-py3-none-any.whl", hash = "sha256:9fa8c8b0c110da289809292b0591220d3a7b53c1526a23021e977d68597893fb", size = 219982, upload-time = "2025-12-11T13:32:36.955Z" }, +] + +[[package]] +name = "packaging" +version = "25.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/a1/d4/1fc4078c65507b51b96ca8f8c3ba19e6a61c8253c72794544580a7b6c24d/packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f", size = 165727, upload-time = "2025-04-19T11:48:59.673Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484", size = 66469, upload-time = "2025-04-19T11:48:57.875Z" }, +] + +[[package]] +name = "platformdirs" +version = "4.5.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/cf/86/0248f086a84f01b37aaec0fa567b397df1a119f73c16f6c7a9aac73ea309/platformdirs-4.5.1.tar.gz", hash = "sha256:61d5cdcc6065745cdd94f0f878977f8de9437be93de97c1c12f853c9c0cdcbda", size = 21715, upload-time = "2025-12-05T13:52:58.638Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/cb/28/3bfe2fa5a7b9c46fe7e13c97bda14c895fb10fa2ebf1d0abb90e0cea7ee1/platformdirs-4.5.1-py3-none-any.whl", hash = "sha256:d03afa3963c806a9bed9d5125c8f4cb2fdaf74a55ab60e5d59b3fde758104d31", size = 18731, upload-time = "2025-12-05T13:52:56.823Z" }, +] + +[[package]] +name = "pluggy" +version = "1.6.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" }, +] + +[[package]] +name = "protobuf" +version = "6.33.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/34/44/e49ecff446afeec9d1a66d6bbf9adc21e3c7cea7803a920ca3773379d4f6/protobuf-6.33.2.tar.gz", hash = 
"sha256:56dc370c91fbb8ac85bc13582c9e373569668a290aa2e66a590c2a0d35ddb9e4", size = 444296, upload-time = "2025-12-06T00:17:53.311Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/bc/91/1e3a34881a88697a7354ffd177e8746e97a722e5e8db101544b47e84afb1/protobuf-6.33.2-cp310-abi3-win32.whl", hash = "sha256:87eb388bd2d0f78febd8f4c8779c79247b26a5befad525008e49a6955787ff3d", size = 425603, upload-time = "2025-12-06T00:17:41.114Z" }, + { url = "https://files.pythonhosted.org/packages/64/20/4d50191997e917ae13ad0a235c8b42d8c1ab9c3e6fd455ca16d416944355/protobuf-6.33.2-cp310-abi3-win_amd64.whl", hash = "sha256:fc2a0e8b05b180e5fc0dd1559fe8ebdae21a27e81ac77728fb6c42b12c7419b4", size = 436930, upload-time = "2025-12-06T00:17:43.278Z" }, + { url = "https://files.pythonhosted.org/packages/b2/ca/7e485da88ba45c920fb3f50ae78de29ab925d9e54ef0de678306abfbb497/protobuf-6.33.2-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:d9b19771ca75935b3a4422957bc518b0cecb978b31d1dd12037b088f6bcc0e43", size = 427621, upload-time = "2025-12-06T00:17:44.445Z" }, + { url = "https://files.pythonhosted.org/packages/7d/4f/f743761e41d3b2b2566748eb76bbff2b43e14d5fcab694f494a16458b05f/protobuf-6.33.2-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:b5d3b5625192214066d99b2b605f5783483575656784de223f00a8d00754fc0e", size = 324460, upload-time = "2025-12-06T00:17:45.678Z" }, + { url = "https://files.pythonhosted.org/packages/b1/fa/26468d00a92824020f6f2090d827078c09c9c587e34cbfd2d0c7911221f8/protobuf-6.33.2-cp39-abi3-manylinux2014_s390x.whl", hash = "sha256:8cd7640aee0b7828b6d03ae518b5b4806fdfc1afe8de82f79c3454f8aef29872", size = 339168, upload-time = "2025-12-06T00:17:46.813Z" }, + { url = "https://files.pythonhosted.org/packages/56/13/333b8f421738f149d4fe5e49553bc2a2ab75235486259f689b4b91f96cec/protobuf-6.33.2-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:1f8017c48c07ec5859106533b682260ba3d7c5567b1ca1f24297ce03384d1b4f", size = 323270, upload-time = "2025-12-06T00:17:48.253Z" }, + { url = "https://files.pythonhosted.org/packages/0e/15/4f02896cc3df04fc465010a4c6a0cd89810f54617a32a70ef531ed75d61c/protobuf-6.33.2-py3-none-any.whl", hash = "sha256:7636aad9bb01768870266de5dc009de2d1b936771b38a793f73cbbf279c91c5c", size = 170501, upload-time = "2025-12-06T00:17:52.211Z" }, +] + +[[package]] +name = "pygments" +version = "2.19.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" }, +] + +[[package]] +name = "pytest" +version = "9.0.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "colorama", marker = "sys_platform == 'win32'" }, + { name = "iniconfig" }, + { name = "packaging" }, + { name = "pluggy" }, + { name = "pygments" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = 
"2025-12-06T21:30:51.014Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" }, +] + +[[package]] +name = "pytest-asyncio" +version = "1.3.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pytest" }, + { name = "typing-extensions", marker = "python_full_version < '3.13'" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/90/2c/8af215c0f776415f3590cac4f9086ccefd6fd463befeae41cd4d3f193e5a/pytest_asyncio-1.3.0.tar.gz", hash = "sha256:d7f52f36d231b80ee124cd216ffb19369aa168fc10095013c6b014a34d3ee9e5", size = 50087, upload-time = "2025-11-10T16:07:47.256Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/e5/35/f8b19922b6a25bc0880171a2f1a003eaeb93657475193ab516fd87cac9da/pytest_asyncio-1.3.0-py3-none-any.whl", hash = "sha256:611e26147c7f77640e6d0a92a38ed17c3e9848063698d5c93d5aa7aa11cebff5", size = 15075, upload-time = "2025-11-10T16:07:45.537Z" }, +] + +[[package]] +name = "pytest-cov" +version = "7.0.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "coverage", extra = ["toml"] }, + { name = "pluggy" }, + { name = "pytest" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/5e/f7/c933acc76f5208b3b00089573cf6a2bc26dc80a8aece8f52bb7d6b1855ca/pytest_cov-7.0.0.tar.gz", hash = "sha256:33c97eda2e049a0c5298e91f519302a1334c26ac65c1a483d6206fd458361af1", size = 54328, upload-time = "2025-09-09T10:57:02.113Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/ee/49/1377b49de7d0c1ce41292161ea0f721913fa8722c19fb9c1e3aa0367eecb/pytest_cov-7.0.0-py3-none-any.whl", hash = "sha256:3b8e9558b16cc1479da72058bdecf8073661c7f57f7d3c5f22a1c23507f2d861", size = 22424, upload-time = "2025-09-09T10:57:00.695Z" }, +] + +[[package]] +name = "pyyaml" +version = "6.0.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/6d/16/a95b6757765b7b031c9374925bb718d55e0a9ba8a1b6a12d25962ea44347/pyyaml-6.0.3-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:44edc647873928551a01e7a563d7452ccdebee747728c1080d881d68af7b997e", size = 185826, upload-time = "2025-09-25T21:31:58.655Z" }, + { url = "https://files.pythonhosted.org/packages/16/19/13de8e4377ed53079ee996e1ab0a9c33ec2faf808a4647b7b4c0d46dd239/pyyaml-6.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:652cb6edd41e718550aad172851962662ff2681490a8a711af6a4d288dd96824", size = 175577, upload-time = "2025-09-25T21:32:00.088Z" }, + { url = "https://files.pythonhosted.org/packages/0c/62/d2eb46264d4b157dae1275b573017abec435397aa59cbcdab6fc978a8af4/pyyaml-6.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:10892704fc220243f5305762e276552a0395f7beb4dbf9b14ec8fd43b57f126c", size = 775556, upload-time = "2025-09-25T21:32:01.31Z" }, + { url = 
"https://files.pythonhosted.org/packages/10/cb/16c3f2cf3266edd25aaa00d6c4350381c8b012ed6f5276675b9eba8d9ff4/pyyaml-6.0.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:850774a7879607d3a6f50d36d04f00ee69e7fc816450e5f7e58d7f17f1ae5c00", size = 882114, upload-time = "2025-09-25T21:32:03.376Z" }, + { url = "https://files.pythonhosted.org/packages/71/60/917329f640924b18ff085ab889a11c763e0b573da888e8404ff486657602/pyyaml-6.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b8bb0864c5a28024fac8a632c443c87c5aa6f215c0b126c449ae1a150412f31d", size = 806638, upload-time = "2025-09-25T21:32:04.553Z" }, + { url = "https://files.pythonhosted.org/packages/dd/6f/529b0f316a9fd167281a6c3826b5583e6192dba792dd55e3203d3f8e655a/pyyaml-6.0.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1d37d57ad971609cf3c53ba6a7e365e40660e3be0e5175fa9f2365a379d6095a", size = 767463, upload-time = "2025-09-25T21:32:06.152Z" }, + { url = "https://files.pythonhosted.org/packages/f2/6a/b627b4e0c1dd03718543519ffb2f1deea4a1e6d42fbab8021936a4d22589/pyyaml-6.0.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:37503bfbfc9d2c40b344d06b2199cf0e96e97957ab1c1b546fd4f87e53e5d3e4", size = 794986, upload-time = "2025-09-25T21:32:07.367Z" }, + { url = "https://files.pythonhosted.org/packages/45/91/47a6e1c42d9ee337c4839208f30d9f09caa9f720ec7582917b264defc875/pyyaml-6.0.3-cp311-cp311-win32.whl", hash = "sha256:8098f252adfa6c80ab48096053f512f2321f0b998f98150cea9bd23d83e1467b", size = 142543, upload-time = "2025-09-25T21:32:08.95Z" }, + { url = "https://files.pythonhosted.org/packages/da/e3/ea007450a105ae919a72393cb06f122f288ef60bba2dc64b26e2646fa315/pyyaml-6.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:9f3bfb4965eb874431221a3ff3fdcddc7e74e3b07799e0e84ca4a0f867d449bf", size = 158763, upload-time = "2025-09-25T21:32:09.96Z" }, + { url = "https://files.pythonhosted.org/packages/d1/33/422b98d2195232ca1826284a76852ad5a86fe23e31b009c9886b2d0fb8b2/pyyaml-6.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:7f047e29dcae44602496db43be01ad42fc6f1cc0d8cd6c83d342306c32270196", size = 182063, upload-time = "2025-09-25T21:32:11.445Z" }, + { url = "https://files.pythonhosted.org/packages/89/a0/6cf41a19a1f2f3feab0e9c0b74134aa2ce6849093d5517a0c550fe37a648/pyyaml-6.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fc09d0aa354569bc501d4e787133afc08552722d3ab34836a80547331bb5d4a0", size = 173973, upload-time = "2025-09-25T21:32:12.492Z" }, + { url = "https://files.pythonhosted.org/packages/ed/23/7a778b6bd0b9a8039df8b1b1d80e2e2ad78aa04171592c8a5c43a56a6af4/pyyaml-6.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9149cad251584d5fb4981be1ecde53a1ca46c891a79788c0df828d2f166bda28", size = 775116, upload-time = "2025-09-25T21:32:13.652Z" }, + { url = "https://files.pythonhosted.org/packages/65/30/d7353c338e12baef4ecc1b09e877c1970bd3382789c159b4f89d6a70dc09/pyyaml-6.0.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5fdec68f91a0c6739b380c83b951e2c72ac0197ace422360e6d5a959d8d97b2c", size = 844011, upload-time = "2025-09-25T21:32:15.21Z" }, + { url = "https://files.pythonhosted.org/packages/8b/9d/b3589d3877982d4f2329302ef98a8026e7f4443c765c46cfecc8858c6b4b/pyyaml-6.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ba1cc08a7ccde2d2ec775841541641e4548226580ab850948cbfda66a1befcdc", size = 807870, 
upload-time = "2025-09-25T21:32:16.431Z" }, + { url = "https://files.pythonhosted.org/packages/05/c0/b3be26a015601b822b97d9149ff8cb5ead58c66f981e04fedf4e762f4bd4/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8dc52c23056b9ddd46818a57b78404882310fb473d63f17b07d5c40421e47f8e", size = 761089, upload-time = "2025-09-25T21:32:17.56Z" }, + { url = "https://files.pythonhosted.org/packages/be/8e/98435a21d1d4b46590d5459a22d88128103f8da4c2d4cb8f14f2a96504e1/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:41715c910c881bc081f1e8872880d3c650acf13dfa8214bad49ed4cede7c34ea", size = 790181, upload-time = "2025-09-25T21:32:18.834Z" }, + { url = "https://files.pythonhosted.org/packages/74/93/7baea19427dcfbe1e5a372d81473250b379f04b1bd3c4c5ff825e2327202/pyyaml-6.0.3-cp312-cp312-win32.whl", hash = "sha256:96b533f0e99f6579b3d4d4995707cf36df9100d67e0c8303a0c55b27b5f99bc5", size = 137658, upload-time = "2025-09-25T21:32:20.209Z" }, + { url = "https://files.pythonhosted.org/packages/86/bf/899e81e4cce32febab4fb42bb97dcdf66bc135272882d1987881a4b519e9/pyyaml-6.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:5fcd34e47f6e0b794d17de1b4ff496c00986e1c83f7ab2fb8fcfe9616ff7477b", size = 154003, upload-time = "2025-09-25T21:32:21.167Z" }, + { url = "https://files.pythonhosted.org/packages/1a/08/67bd04656199bbb51dbed1439b7f27601dfb576fb864099c7ef0c3e55531/pyyaml-6.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:64386e5e707d03a7e172c0701abfb7e10f0fb753ee1d773128192742712a98fd", size = 140344, upload-time = "2025-09-25T21:32:22.617Z" }, + { url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" }, + { url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" }, + { url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" }, + { url = "https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" }, + { url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, upload-time = "2025-09-25T21:32:28.878Z" }, + { url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" 
}, + { url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" }, + { url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" }, + { url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" }, + { url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" }, + { url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" }, + { url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" }, + { url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" }, + { url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" }, + { url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" }, + { url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" }, + { url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" }, + { url = 
"https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" }, + { url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" }, + { url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" }, + { url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" }, + { url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" }, + { url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" }, + { url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" }, + { url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" }, + { url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" }, + { url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" }, + { url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" }, +] + +[[package]] +name = "rich" +version = 
"14.2.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "markdown-it-py" }, + { name = "pygments" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/fb/d2/8920e102050a0de7bfabeb4c4614a49248cf8d5d7a8d01885fbb24dc767a/rich-14.2.0.tar.gz", hash = "sha256:73ff50c7c0c1c77c8243079283f4edb376f0f6442433aecb8ce7e6d0b92d1fe4", size = 219990, upload-time = "2025-10-09T14:16:53.064Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/25/7a/b0178788f8dc6cafce37a212c99565fa1fe7872c70c6c9c1e1a372d9d88f/rich-14.2.0-py3-none-any.whl", hash = "sha256:76bc51fe2e57d2b1be1f96c524b890b816e334ab4c1e45888799bfaab0021edd", size = 243393, upload-time = "2025-10-09T14:16:51.245Z" }, +] + +[[package]] +name = "ruff" +version = "0.14.10" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/57/08/52232a877978dd8f9cf2aeddce3e611b40a63287dfca29b6b8da791f5e8d/ruff-0.14.10.tar.gz", hash = "sha256:9a2e830f075d1a42cd28420d7809ace390832a490ed0966fe373ba288e77aaf4", size = 5859763, upload-time = "2025-12-18T19:28:57.98Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/60/01/933704d69f3f05ee16ef11406b78881733c186fe14b6a46b05cfcaf6d3b2/ruff-0.14.10-py3-none-linux_armv6l.whl", hash = "sha256:7a3ce585f2ade3e1f29ec1b92df13e3da262178df8c8bdf876f48fa0e8316c49", size = 13527080, upload-time = "2025-12-18T19:29:25.642Z" }, + { url = "https://files.pythonhosted.org/packages/df/58/a0349197a7dfa603ffb7f5b0470391efa79ddc327c1e29c4851e85b09cc5/ruff-0.14.10-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:674f9be9372907f7257c51f1d4fc902cb7cf014b9980152b802794317941f08f", size = 13797320, upload-time = "2025-12-18T19:29:02.571Z" }, + { url = "https://files.pythonhosted.org/packages/7b/82/36be59f00a6082e38c23536df4e71cdbc6af8d7c707eade97fcad5c98235/ruff-0.14.10-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d85713d522348837ef9df8efca33ccb8bd6fcfc86a2cde3ccb4bc9d28a18003d", size = 12918434, upload-time = "2025-12-18T19:28:51.202Z" }, + { url = "https://files.pythonhosted.org/packages/a6/00/45c62a7f7e34da92a25804f813ebe05c88aa9e0c25e5cb5a7d23dd7450e3/ruff-0.14.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6987ebe0501ae4f4308d7d24e2d0fe3d7a98430f5adfd0f1fead050a740a3a77", size = 13371961, upload-time = "2025-12-18T19:29:04.991Z" }, + { url = "https://files.pythonhosted.org/packages/40/31/a5906d60f0405f7e57045a70f2d57084a93ca7425f22e1d66904769d1628/ruff-0.14.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:16a01dfb7b9e4eee556fbfd5392806b1b8550c9b4a9f6acd3dbe6812b193c70a", size = 13275629, upload-time = "2025-12-18T19:29:21.381Z" }, + { url = "https://files.pythonhosted.org/packages/3e/60/61c0087df21894cf9d928dc04bcd4fb10e8b2e8dca7b1a276ba2155b2002/ruff-0.14.10-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7165d31a925b7a294465fa81be8c12a0e9b60fb02bf177e79067c867e71f8b1f", size = 14029234, upload-time = "2025-12-18T19:29:00.132Z" }, + { url = "https://files.pythonhosted.org/packages/44/84/77d911bee3b92348b6e5dab5a0c898d87084ea03ac5dc708f46d88407def/ruff-0.14.10-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:c561695675b972effb0c0a45db233f2c816ff3da8dcfbe7dfc7eed625f218935", size = 15449890, upload-time = "2025-12-18T19:28:53.573Z" }, + { url = 
"https://files.pythonhosted.org/packages/e9/36/480206eaefa24a7ec321582dda580443a8f0671fdbf6b1c80e9c3e93a16a/ruff-0.14.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4bb98fcbbc61725968893682fd4df8966a34611239c9fd07a1f6a07e7103d08e", size = 15123172, upload-time = "2025-12-18T19:29:23.453Z" }, + { url = "https://files.pythonhosted.org/packages/5c/38/68e414156015ba80cef5473d57919d27dfb62ec804b96180bafdeaf0e090/ruff-0.14.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f24b47993a9d8cb858429e97bdf8544c78029f09b520af615c1d261bf827001d", size = 14460260, upload-time = "2025-12-18T19:29:27.808Z" }, + { url = "https://files.pythonhosted.org/packages/b3/19/9e050c0dca8aba824d67cc0db69fb459c28d8cd3f6855b1405b3f29cc91d/ruff-0.14.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59aabd2e2c4fd614d2862e7939c34a532c04f1084476d6833dddef4afab87e9f", size = 14229978, upload-time = "2025-12-18T19:29:11.32Z" }, + { url = "https://files.pythonhosted.org/packages/51/eb/e8dd1dd6e05b9e695aa9dd420f4577debdd0f87a5ff2fedda33c09e9be8c/ruff-0.14.10-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:213db2b2e44be8625002dbea33bb9c60c66ea2c07c084a00d55732689d697a7f", size = 14338036, upload-time = "2025-12-18T19:29:09.184Z" }, + { url = "https://files.pythonhosted.org/packages/6a/12/f3e3a505db7c19303b70af370d137795fcfec136d670d5de5391e295c134/ruff-0.14.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:b914c40ab64865a17a9a5b67911d14df72346a634527240039eb3bd650e5979d", size = 13264051, upload-time = "2025-12-18T19:29:13.431Z" }, + { url = "https://files.pythonhosted.org/packages/08/64/8c3a47eaccfef8ac20e0484e68e0772013eb85802f8a9f7603ca751eb166/ruff-0.14.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:1484983559f026788e3a5c07c81ef7d1e97c1c78ed03041a18f75df104c45405", size = 13283998, upload-time = "2025-12-18T19:29:06.994Z" }, + { url = "https://files.pythonhosted.org/packages/12/84/534a5506f4074e5cc0529e5cd96cfc01bb480e460c7edf5af70d2bcae55e/ruff-0.14.10-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c70427132db492d25f982fffc8d6c7535cc2fd2c83fc8888f05caaa248521e60", size = 13601891, upload-time = "2025-12-18T19:28:55.811Z" }, + { url = "https://files.pythonhosted.org/packages/0d/1e/14c916087d8598917dbad9b2921d340f7884824ad6e9c55de948a93b106d/ruff-0.14.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:5bcf45b681e9f1ee6445d317ce1fa9d6cba9a6049542d1c3d5b5958986be8830", size = 14336660, upload-time = "2025-12-18T19:29:16.531Z" }, + { url = "https://files.pythonhosted.org/packages/f2/1c/d7b67ab43f30013b47c12b42d1acd354c195351a3f7a1d67f59e54227ede/ruff-0.14.10-py3-none-win32.whl", hash = "sha256:104c49fc7ab73f3f3a758039adea978869a918f31b73280db175b43a2d9b51d6", size = 13196187, upload-time = "2025-12-18T19:29:19.006Z" }, + { url = "https://files.pythonhosted.org/packages/fb/9c/896c862e13886fae2af961bef3e6312db9ebc6adc2b156fe95e615dee8c1/ruff-0.14.10-py3-none-win_amd64.whl", hash = "sha256:466297bd73638c6bdf06485683e812db1c00c7ac96d4ddd0294a338c62fdc154", size = 14661283, upload-time = "2025-12-18T19:29:30.16Z" }, + { url = "https://files.pythonhosted.org/packages/74/31/b0e29d572670dca3674eeee78e418f20bdf97fa8aa9ea71380885e175ca0/ruff-0.14.10-py3-none-win_arm64.whl", hash = "sha256:e51d046cf6dda98a4633b8a8a771451107413b0f07183b2bef03f075599e44e6", size = 13729839, upload-time = "2025-12-18T19:28:48.636Z" }, +] + +[[package]] +name = "skill-agents-common" +version = "0.1.0" +source = { editable = "../skill-agents-common" } + 
+[[package]] +name = "skill-pr-addresser" +version = "0.1.0" +source = { editable = "." } +dependencies = [ + { name = "cement" }, + { name = "chevron" }, + { name = "colorlog" }, + { name = "opentelemetry-api" }, + { name = "opentelemetry-exporter-otlp-proto-grpc" }, + { name = "opentelemetry-sdk" }, + { name = "pyyaml" }, + { name = "skill-agents-common" }, + { name = "textual" }, + { name = "toml" }, +] + +[package.optional-dependencies] +dev = [ + { name = "pytest" }, + { name = "pytest-asyncio" }, + { name = "pytest-cov" }, + { name = "ruff" }, +] + +[package.metadata] +requires-dist = [ + { name = "cement", specifier = ">=3.0.10" }, + { name = "chevron", specifier = ">=0.14.0" }, + { name = "colorlog", specifier = ">=6.8.0" }, + { name = "opentelemetry-api", specifier = ">=1.22.0" }, + { name = "opentelemetry-exporter-otlp-proto-grpc", specifier = ">=1.22.0" }, + { name = "opentelemetry-sdk", specifier = ">=1.22.0" }, + { name = "pytest", marker = "extra == 'dev'", specifier = ">=8.0.0" }, + { name = "pytest-asyncio", marker = "extra == 'dev'", specifier = ">=0.23.0" }, + { name = "pytest-cov", marker = "extra == 'dev'", specifier = ">=4.1.0" }, + { name = "pyyaml", specifier = ">=6.0" }, + { name = "ruff", marker = "extra == 'dev'", specifier = ">=0.2.0" }, + { name = "skill-agents-common", editable = "../skill-agents-common" }, + { name = "textual", specifier = ">=0.47.0" }, + { name = "toml", specifier = ">=0.10.2" }, +] +provides-extras = ["dev"] + +[[package]] +name = "textual" +version = "6.11.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "markdown-it-py", extra = ["linkify"] }, + { name = "mdit-py-plugins" }, + { name = "platformdirs" }, + { name = "pygments" }, + { name = "rich" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/55/06/906f86bbc59ec7cd3fb424250e19ce670406d1f28e49e86c2221e9fd7ed2/textual-6.11.0.tar.gz", hash = "sha256:08237ebda0cfbbfd1a4e2fd3039882b35894a73994f6f0fcc12c5b0d78acf3cc", size = 1584292, upload-time = "2025-12-18T10:48:38.033Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b5/fc/5e2988590ff2e0128eea6446806c904445a44e17256c67141573ea16b5a5/textual-6.11.0-py3-none-any.whl", hash = "sha256:9e663b73ed37123a9b13c16a0c85e09ef917a4cfded97814361ed5cccfa40f89", size = 714886, upload-time = "2025-12-18T10:48:36.269Z" }, +] + +[[package]] +name = "toml" +version = "0.10.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/be/ba/1f744cdc819428fc6b5084ec34d9b30660f6f9daaf70eead706e3203ec3c/toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f", size = 22253, upload-time = "2020-11-01T01:40:22.204Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/44/6f/7120676b6d73228c96e17f1f794d8ab046fc910d781c8d151120c3f1569e/toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b", size = 16588, upload-time = "2020-11-01T01:40:20.672Z" }, +] + +[[package]] +name = "tomli" +version = "2.3.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/52/ed/3f73f72945444548f33eba9a87fc7a6e969915e7b1acc8260b30e1f76a2f/tomli-2.3.0.tar.gz", hash = "sha256:64be704a875d2a59753d80ee8a533c3fe183e3f06807ff7dc2232938ccb01549", size = 17392, upload-time = "2025-10-08T22:01:47.119Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/b3/2e/299f62b401438d5fe1624119c723f5d877acc86a4c2492da405626665f12/tomli-2.3.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:88bd15eb972f3664f5ed4b57c1634a97153b4bac4479dcb6a495f41921eb7f45", size = 153236, upload-time = "2025-10-08T22:01:00.137Z" }, + { url = "https://files.pythonhosted.org/packages/86/7f/d8fffe6a7aefdb61bced88fcb5e280cfd71e08939da5894161bd71bea022/tomli-2.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:883b1c0d6398a6a9d29b508c331fa56adbcdff647f6ace4dfca0f50e90dfd0ba", size = 148084, upload-time = "2025-10-08T22:01:01.63Z" }, + { url = "https://files.pythonhosted.org/packages/47/5c/24935fb6a2ee63e86d80e4d3b58b222dafaf438c416752c8b58537c8b89a/tomli-2.3.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d1381caf13ab9f300e30dd8feadb3de072aeb86f1d34a8569453ff32a7dea4bf", size = 234832, upload-time = "2025-10-08T22:01:02.543Z" }, + { url = "https://files.pythonhosted.org/packages/89/da/75dfd804fc11e6612846758a23f13271b76d577e299592b4371a4ca4cd09/tomli-2.3.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a0e285d2649b78c0d9027570d4da3425bdb49830a6156121360b3f8511ea3441", size = 242052, upload-time = "2025-10-08T22:01:03.836Z" }, + { url = "https://files.pythonhosted.org/packages/70/8c/f48ac899f7b3ca7eb13af73bacbc93aec37f9c954df3c08ad96991c8c373/tomli-2.3.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0a154a9ae14bfcf5d8917a59b51ffd5a3ac1fd149b71b47a3a104ca4edcfa845", size = 239555, upload-time = "2025-10-08T22:01:04.834Z" }, + { url = "https://files.pythonhosted.org/packages/ba/28/72f8afd73f1d0e7829bfc093f4cb98ce0a40ffc0cc997009ee1ed94ba705/tomli-2.3.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:74bf8464ff93e413514fefd2be591c3b0b23231a77f901db1eb30d6f712fc42c", size = 245128, upload-time = "2025-10-08T22:01:05.84Z" }, + { url = "https://files.pythonhosted.org/packages/b6/eb/a7679c8ac85208706d27436e8d421dfa39d4c914dcf5fa8083a9305f58d9/tomli-2.3.0-cp311-cp311-win32.whl", hash = "sha256:00b5f5d95bbfc7d12f91ad8c593a1659b6387b43f054104cda404be6bda62456", size = 96445, upload-time = "2025-10-08T22:01:06.896Z" }, + { url = "https://files.pythonhosted.org/packages/0a/fe/3d3420c4cb1ad9cb462fb52967080575f15898da97e21cb6f1361d505383/tomli-2.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:4dc4ce8483a5d429ab602f111a93a6ab1ed425eae3122032db7e9acf449451be", size = 107165, upload-time = "2025-10-08T22:01:08.107Z" }, + { url = "https://files.pythonhosted.org/packages/ff/b7/40f36368fcabc518bb11c8f06379a0fd631985046c038aca08c6d6a43c6e/tomli-2.3.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:d7d86942e56ded512a594786a5ba0a5e521d02529b3826e7761a05138341a2ac", size = 154891, upload-time = "2025-10-08T22:01:09.082Z" }, + { url = "https://files.pythonhosted.org/packages/f9/3f/d9dd692199e3b3aab2e4e4dd948abd0f790d9ded8cd10cbaae276a898434/tomli-2.3.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:73ee0b47d4dad1c5e996e3cd33b8a76a50167ae5f96a2607cbe8cc773506ab22", size = 148796, upload-time = "2025-10-08T22:01:10.266Z" }, + { url = "https://files.pythonhosted.org/packages/60/83/59bff4996c2cf9f9387a0f5a3394629c7efa5ef16142076a23a90f1955fa/tomli-2.3.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:792262b94d5d0a466afb5bc63c7daa9d75520110971ee269152083270998316f", size = 242121, upload-time = "2025-10-08T22:01:11.332Z" }, + { url = 
"https://files.pythonhosted.org/packages/45/e5/7c5119ff39de8693d6baab6c0b6dcb556d192c165596e9fc231ea1052041/tomli-2.3.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4f195fe57ecceac95a66a75ac24d9d5fbc98ef0962e09b2eddec5d39375aae52", size = 250070, upload-time = "2025-10-08T22:01:12.498Z" }, + { url = "https://files.pythonhosted.org/packages/45/12/ad5126d3a278f27e6701abde51d342aa78d06e27ce2bb596a01f7709a5a2/tomli-2.3.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e31d432427dcbf4d86958c184b9bfd1e96b5b71f8eb17e6d02531f434fd335b8", size = 245859, upload-time = "2025-10-08T22:01:13.551Z" }, + { url = "https://files.pythonhosted.org/packages/fb/a1/4d6865da6a71c603cfe6ad0e6556c73c76548557a8d658f9e3b142df245f/tomli-2.3.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:7b0882799624980785240ab732537fcfc372601015c00f7fc367c55308c186f6", size = 250296, upload-time = "2025-10-08T22:01:14.614Z" }, + { url = "https://files.pythonhosted.org/packages/a0/b7/a7a7042715d55c9ba6e8b196d65d2cb662578b4d8cd17d882d45322b0d78/tomli-2.3.0-cp312-cp312-win32.whl", hash = "sha256:ff72b71b5d10d22ecb084d345fc26f42b5143c5533db5e2eaba7d2d335358876", size = 97124, upload-time = "2025-10-08T22:01:15.629Z" }, + { url = "https://files.pythonhosted.org/packages/06/1e/f22f100db15a68b520664eb3328fb0ae4e90530887928558112c8d1f4515/tomli-2.3.0-cp312-cp312-win_amd64.whl", hash = "sha256:1cb4ed918939151a03f33d4242ccd0aa5f11b3547d0cf30f7c74a408a5b99878", size = 107698, upload-time = "2025-10-08T22:01:16.51Z" }, + { url = "https://files.pythonhosted.org/packages/89/48/06ee6eabe4fdd9ecd48bf488f4ac783844fd777f547b8d1b61c11939974e/tomli-2.3.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:5192f562738228945d7b13d4930baffda67b69425a7f0da96d360b0a3888136b", size = 154819, upload-time = "2025-10-08T22:01:17.964Z" }, + { url = "https://files.pythonhosted.org/packages/f1/01/88793757d54d8937015c75dcdfb673c65471945f6be98e6a0410fba167ed/tomli-2.3.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:be71c93a63d738597996be9528f4abe628d1adf5e6eb11607bc8fe1a510b5dae", size = 148766, upload-time = "2025-10-08T22:01:18.959Z" }, + { url = "https://files.pythonhosted.org/packages/42/17/5e2c956f0144b812e7e107f94f1cc54af734eb17b5191c0bbfb72de5e93e/tomli-2.3.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c4665508bcbac83a31ff8ab08f424b665200c0e1e645d2bd9ab3d3e557b6185b", size = 240771, upload-time = "2025-10-08T22:01:20.106Z" }, + { url = "https://files.pythonhosted.org/packages/d5/f4/0fbd014909748706c01d16824eadb0307115f9562a15cbb012cd9b3512c5/tomli-2.3.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4021923f97266babc6ccab9f5068642a0095faa0a51a246a6a02fccbb3514eaf", size = 248586, upload-time = "2025-10-08T22:01:21.164Z" }, + { url = "https://files.pythonhosted.org/packages/30/77/fed85e114bde5e81ecf9bc5da0cc69f2914b38f4708c80ae67d0c10180c5/tomli-2.3.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a4ea38c40145a357d513bffad0ed869f13c1773716cf71ccaa83b0fa0cc4e42f", size = 244792, upload-time = "2025-10-08T22:01:22.417Z" }, + { url = "https://files.pythonhosted.org/packages/55/92/afed3d497f7c186dc71e6ee6d4fcb0acfa5f7d0a1a2878f8beae379ae0cc/tomli-2.3.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ad805ea85eda330dbad64c7ea7a4556259665bdf9d2672f5dccc740eb9d3ca05", size = 248909, upload-time = "2025-10-08T22:01:23.859Z" }, + { url = 
"https://files.pythonhosted.org/packages/f8/84/ef50c51b5a9472e7265ce1ffc7f24cd4023d289e109f669bdb1553f6a7c2/tomli-2.3.0-cp313-cp313-win32.whl", hash = "sha256:97d5eec30149fd3294270e889b4234023f2c69747e555a27bd708828353ab606", size = 96946, upload-time = "2025-10-08T22:01:24.893Z" }, + { url = "https://files.pythonhosted.org/packages/b2/b7/718cd1da0884f281f95ccfa3a6cc572d30053cba64603f79d431d3c9b61b/tomli-2.3.0-cp313-cp313-win_amd64.whl", hash = "sha256:0c95ca56fbe89e065c6ead5b593ee64b84a26fca063b5d71a1122bf26e533999", size = 107705, upload-time = "2025-10-08T22:01:26.153Z" }, + { url = "https://files.pythonhosted.org/packages/19/94/aeafa14a52e16163008060506fcb6aa1949d13548d13752171a755c65611/tomli-2.3.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:cebc6fe843e0733ee827a282aca4999b596241195f43b4cc371d64fc6639da9e", size = 154244, upload-time = "2025-10-08T22:01:27.06Z" }, + { url = "https://files.pythonhosted.org/packages/db/e4/1e58409aa78eefa47ccd19779fc6f36787edbe7d4cd330eeeedb33a4515b/tomli-2.3.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:4c2ef0244c75aba9355561272009d934953817c49f47d768070c3c94355c2aa3", size = 148637, upload-time = "2025-10-08T22:01:28.059Z" }, + { url = "https://files.pythonhosted.org/packages/26/b6/d1eccb62f665e44359226811064596dd6a366ea1f985839c566cd61525ae/tomli-2.3.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c22a8bf253bacc0cf11f35ad9808b6cb75ada2631c2d97c971122583b129afbc", size = 241925, upload-time = "2025-10-08T22:01:29.066Z" }, + { url = "https://files.pythonhosted.org/packages/70/91/7cdab9a03e6d3d2bb11beae108da5bdc1c34bdeb06e21163482544ddcc90/tomli-2.3.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0eea8cc5c5e9f89c9b90c4896a8deefc74f518db5927d0e0e8d4a80953d774d0", size = 249045, upload-time = "2025-10-08T22:01:31.98Z" }, + { url = "https://files.pythonhosted.org/packages/15/1b/8c26874ed1f6e4f1fcfeb868db8a794cbe9f227299402db58cfcc858766c/tomli-2.3.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:b74a0e59ec5d15127acdabd75ea17726ac4c5178ae51b85bfe39c4f8a278e879", size = 245835, upload-time = "2025-10-08T22:01:32.989Z" }, + { url = "https://files.pythonhosted.org/packages/fd/42/8e3c6a9a4b1a1360c1a2a39f0b972cef2cc9ebd56025168c4137192a9321/tomli-2.3.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:b5870b50c9db823c595983571d1296a6ff3e1b88f734a4c8f6fc6188397de005", size = 253109, upload-time = "2025-10-08T22:01:34.052Z" }, + { url = "https://files.pythonhosted.org/packages/22/0c/b4da635000a71b5f80130937eeac12e686eefb376b8dee113b4a582bba42/tomli-2.3.0-cp314-cp314-win32.whl", hash = "sha256:feb0dacc61170ed7ab602d3d972a58f14ee3ee60494292d384649a3dc38ef463", size = 97930, upload-time = "2025-10-08T22:01:35.082Z" }, + { url = "https://files.pythonhosted.org/packages/b9/74/cb1abc870a418ae99cd5c9547d6bce30701a954e0e721821df483ef7223c/tomli-2.3.0-cp314-cp314-win_amd64.whl", hash = "sha256:b273fcbd7fc64dc3600c098e39136522650c49bca95df2d11cf3b626422392c8", size = 107964, upload-time = "2025-10-08T22:01:36.057Z" }, + { url = "https://files.pythonhosted.org/packages/54/78/5c46fff6432a712af9f792944f4fcd7067d8823157949f4e40c56b8b3c83/tomli-2.3.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:940d56ee0410fa17ee1f12b817b37a4d4e4dc4d27340863cc67236c74f582e77", size = 163065, upload-time = "2025-10-08T22:01:37.27Z" }, + { url = 
"https://files.pythonhosted.org/packages/39/67/f85d9bd23182f45eca8939cd2bc7050e1f90c41f4a2ecbbd5963a1d1c486/tomli-2.3.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:f85209946d1fe94416debbb88d00eb92ce9cd5266775424ff81bc959e001acaf", size = 159088, upload-time = "2025-10-08T22:01:38.235Z" }, + { url = "https://files.pythonhosted.org/packages/26/5a/4b546a0405b9cc0659b399f12b6adb750757baf04250b148d3c5059fc4eb/tomli-2.3.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a56212bdcce682e56b0aaf79e869ba5d15a6163f88d5451cbde388d48b13f530", size = 268193, upload-time = "2025-10-08T22:01:39.712Z" }, + { url = "https://files.pythonhosted.org/packages/42/4f/2c12a72ae22cf7b59a7fe75b3465b7aba40ea9145d026ba41cb382075b0e/tomli-2.3.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c5f3ffd1e098dfc032d4d3af5c0ac64f6d286d98bc148698356847b80fa4de1b", size = 275488, upload-time = "2025-10-08T22:01:40.773Z" }, + { url = "https://files.pythonhosted.org/packages/92/04/a038d65dbe160c3aa5a624e93ad98111090f6804027d474ba9c37c8ae186/tomli-2.3.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:5e01decd096b1530d97d5d85cb4dff4af2d8347bd35686654a004f8dea20fc67", size = 272669, upload-time = "2025-10-08T22:01:41.824Z" }, + { url = "https://files.pythonhosted.org/packages/be/2f/8b7c60a9d1612a7cbc39ffcca4f21a73bf368a80fc25bccf8253e2563267/tomli-2.3.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:8a35dd0e643bb2610f156cca8db95d213a90015c11fee76c946aa62b7ae7e02f", size = 279709, upload-time = "2025-10-08T22:01:43.177Z" }, + { url = "https://files.pythonhosted.org/packages/7e/46/cc36c679f09f27ded940281c38607716c86cf8ba4a518d524e349c8b4874/tomli-2.3.0-cp314-cp314t-win32.whl", hash = "sha256:a1f7f282fe248311650081faafa5f4732bdbfef5d45fe3f2e702fbc6f2d496e0", size = 107563, upload-time = "2025-10-08T22:01:44.233Z" }, + { url = "https://files.pythonhosted.org/packages/84/ff/426ca8683cf7b753614480484f6437f568fd2fda2edbdf57a2d3d8b27a0b/tomli-2.3.0-cp314-cp314t-win_amd64.whl", hash = "sha256:70a251f8d4ba2d9ac2542eecf008b3c8a9fc5c3f9f02c56a9d7952612be2fdba", size = 119756, upload-time = "2025-10-08T22:01:45.234Z" }, + { url = "https://files.pythonhosted.org/packages/77/b8/0135fadc89e73be292b473cb820b4f5a08197779206b33191e801feeae40/tomli-2.3.0-py3-none-any.whl", hash = "sha256:e95b1af3c5b07d9e643909b5abbec77cd9f1217e6d0bca72b0234736b9fb1f1b", size = 14408, upload-time = "2025-10-08T22:01:46.04Z" }, +] + +[[package]] +name = "typing-extensions" +version = "4.15.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" }, +] + +[[package]] +name = "uc-micro-py" +version = "1.0.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/91/7a/146a99696aee0609e3712f2b44c6274566bc368dfe8375191278045186b8/uc-micro-py-1.0.3.tar.gz", hash = 
"sha256:d321b92cff673ec58027c04015fcaa8bb1e005478643ff4a500882eaab88c48a", size = 6043, upload-time = "2024-02-09T16:52:01.654Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/37/87/1f677586e8ac487e29672e4b17455758fce261de06a0d086167bb760361a/uc_micro_py-1.0.3-py3-none-any.whl", hash = "sha256:db1dffff340817673d7b466ec86114a9dc0e9d4d9b5ba229d9d60e5c12600cd5", size = 6229, upload-time = "2024-02-09T16:52:00.371Z" }, +] + +[[package]] +name = "zipp" +version = "3.23.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/e3/02/0f2892c661036d50ede074e376733dca2ae7c6eb617489437771209d4180/zipp-3.23.0.tar.gz", hash = "sha256:a07157588a12518c9d4034df3fbbee09c814741a33ff63c05fa29d26a2404166", size = 25547, upload-time = "2025-06-08T17:06:39.4Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/2e/54/647ade08bf0db230bfea292f893923872fd20be6ac6f53b2b936ba839d75/zipp-3.23.0-py3-none-any.whl", hash = "sha256:071652d6115ed432f5ce1d34c336c0adfd6a884660d1e9712a256d3d3bd4b14e", size = 10276, upload-time = "2025-06-08T17:06:38.034Z" }, +]