diff --git a/CHANGELOG.md b/CHANGELOG.md index c5318ff..a61f822 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,18 @@ All notable changes to this project will be documented in this file. --- +## [0.26.11] - 2026-01-27 + +### Fixed (0.26.11) + +- **Backlog refine --import-from-tmp**: Implemented import path so refined content from a temporary file is applied to backlog items + - **Parser**: Added `_parse_refined_export_markdown()` to parse the same markdown format produced by `--export-to-tmp` (## Item blocks, **ID**, **Body** in ```markdown ... ```, **Acceptance Criteria**, optional **Metrics**) + - **Import flow**: When `--import-from-tmp` (and optional `--tmp-file`) is used, the CLI reads the file, matches blocks to fetched items by ID, updates `body_markdown`, `acceptance_criteria`, and optionally title/metrics; without `--write` shows "Would update N item(s)", with `--write` calls `adapter.update_backlog_item()` for each and prints success summary + - **Removed**: "Import functionality pending implementation" message and TODO + - **Tests**: Unit tests for the parser (single item, acceptance criteria and metrics, header-only, blocks without ID) + +--- + ## [0.26.10] - 2026-01-27 ### Added (0.26.10) diff --git a/openspec/changes/implement-backlog-refine-import-from-tmp/proposal.md b/openspec/changes/implement-backlog-refine-import-from-tmp/proposal.md new file mode 100644 index 0000000..1bf0a84 --- /dev/null +++ b/openspec/changes/implement-backlog-refine-import-from-tmp/proposal.md @@ -0,0 +1,29 @@ +# Change: Implement backlog refine --import-from-tmp + +## Why + +The `specfact backlog refine` command supports `--export-to-tmp` to export items to a markdown file for copilot processing and documents `--import-from-tmp` / `--tmp-file` to re-import refined content. When users run with `--import-from-tmp`, the CLI only checks that the file exists and then prints "Import functionality pending implementation" and exits. 
This leaves the export/import workflow unusable and contradicts the documented behavior. Implementing the import path completes the round-trip: export → edit with copilot → import with --write, so teams can refine backlog items in bulk via their IDE without interactive prompts. + +## What Changes + +- **NEW**: Parser for the refined export markdown format (same structure as export: `## Item N:`, **ID**, **Body** in ```markdown ... ```, **Acceptance Criteria**, optional title/metrics). Parser returns a list of parsed blocks keyed by item ID for matching against fetched items. +- **NEW**: Import branch in `specfact backlog refine`: when `--import-from-tmp` is set and the file exists, read and parse the file, match parsed blocks to currently fetched items by ID, update each matched `BacklogItem`'s `body_markdown` and `acceptance_criteria` (and optionally title/metrics), then call `adapter.update_backlog_item(item, update_fields=[...])` when `--write` is set. Without `--write`, show a preview (e.g. "Would update N items") and do not call the adapter. +- **EXTEND**: Reuse existing refine flow: same adapter/fetch as export so `items` is in scope; reuse `_build_adapter_kwargs` and `adapter_registry.get_adapter` for write-back; reuse the same `update_fields` logic as interactive refine (title, body_markdown, acceptance_criteria, story_points, business_value, priority). +- **NOTE**: Default import path remains `...-refined.md`; users are expected to pass `--tmp-file` to point to the file they edited (same path as export or a copy). No change to export path or naming. + +## Capabilities + +- **backlog-refinement**: ADDED requirement for import-from-tmp (parse refined export format, match by ID, update items via adapter with --write). + +## Impact + +- **Affected specs**: backlog-refinement (ADDED scenario for import-from-tmp). 
+- **Affected code**: `src/specfact_cli/commands/backlog_commands.py` (import branch implementation); optionally `src/specfact_cli/backlog/refine_export_parser.py` (parser helper). +- **Integration points**: BacklogAdapter.update_backlog_item (existing); _fetch_backlog_items, _build_adapter_kwargs (existing). + +## Source Tracking + +- **GitHub Issue**: #155 +- **Issue URL**: +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: implemented diff --git a/openspec/changes/implement-backlog-refine-import-from-tmp/specs/backlog-refinement/spec.md b/openspec/changes/implement-backlog-refine-import-from-tmp/specs/backlog-refinement/spec.md new file mode 100644 index 0000000..b20a6a5 --- /dev/null +++ b/openspec/changes/implement-backlog-refine-import-from-tmp/specs/backlog-refinement/spec.md @@ -0,0 +1,25 @@ +# backlog-refinement (delta) + +## ADDED Requirements + +### Requirement: Import refined content from temporary file + +The system SHALL support importing refined backlog content from a temporary markdown file (same format as export) when `specfact backlog refine --import-from-tmp` is used, matching items by ID and updating remote backlog via the adapter when `--write` is set. + +#### Scenario: Import refined content from temporary file + +- **GIVEN** a markdown file in the same format as the export from `specfact backlog refine --export-to-tmp` (header, then per-item blocks with `## Item N:`, **ID**, **Body** in ```markdown ... 
```, **Acceptance Criteria**) +- **AND** the user runs `specfact backlog refine --import-from-tmp --tmp-file <path>` with the same adapter and filters as used for export (so the same set of items is fetched) +- **WHEN** the import file exists and is readable +- **THEN** the system parses the file and matches each block to a fetched item by **ID** +- **AND** for each matched item the system updates `body_markdown` and `acceptance_criteria` (and optionally title/metrics) from the parsed block +- **AND** if `--write` is not set, the system prints a preview (e.g. "Would update N items") and does not call the adapter +- **AND** if `--write` is set, the system calls `adapter.update_backlog_item(item, update_fields=[...])` for each updated item and prints a success summary (e.g. "Updated N backlog items") +- **AND** the system does not show "Import functionality pending implementation" + +#### Scenario: Import file not found + +- **GIVEN** the user runs `specfact backlog refine --import-from-tmp` (or with `--tmp-file <path>`) +- **WHEN** the resolved import file does not exist +- **THEN** the system prints an error with the expected path and suggests using `--tmp-file` to specify the path +- **AND** the command exits with non-zero status diff --git a/openspec/changes/implement-backlog-refine-import-from-tmp/tasks.md b/openspec/changes/implement-backlog-refine-import-from-tmp/tasks.md new file mode 100644 index 0000000..10e310f --- /dev/null +++ b/openspec/changes/implement-backlog-refine-import-from-tmp/tasks.md @@ -0,0 +1,38 @@ +# Tasks: Implement backlog refine --import-from-tmp + +## 1. Create git branch + +- [ ] 1.1.1 Ensure we're on dev and up to date: `git checkout dev && git pull origin dev` +- [ ] 1.1.2 Create branch: `git checkout -b feature/implement-backlog-refine-import-from-tmp` +- [ ] 1.1.3 Verify branch: `git branch --show-current` + +## 2. Parser for refined export format + +- [ ] 2.1.1 Add function to parse refined markdown (e.g.
`_parse_refined_export_markdown(content: str) -> dict[str, dict]` returning id → {body_markdown, acceptance_criteria, title?, ...}) in `backlog_commands.py` or new module `src/specfact_cli/backlog/refine_export_parser.py` +- [ ] 2.1.2 Split content by `## Item` or `---` to get per-item blocks +- [ ] 2.1.3 From each block extract **ID** (required), **Body** (from ```markdown ... ```), **Acceptance Criteria** (optional), optionally **title** and metrics +- [ ] 2.1.4 Add unit tests for parser (export-format sample, multiple items, missing optional fields) +- [ ] 2.1.5 Run `hatch run format` and `hatch run type-check` + +## 3. Import branch in backlog refine command + +- [ ] 3.1.1 In the `if import_from_tmp:` block, after file-exists check: read file content, call parser, build map id → parsed fields +- [ ] 3.1.2 For each item in `items`, if item.id in map: set item.body_markdown, item.acceptance_criteria (and optionally title/metrics) from parsed fields +- [ ] 3.1.3 If `--write` is not set: print preview ("Would update N items") and return +- [ ] 3.1.4 If `--write` is set: get adapter via _build_adapter_kwargs and adapter_registry.get_adapter; for each updated item call adapter.update_backlog_item(item, update_fields=[...]) with same update_fields logic as interactive refine +- [ ] 3.1.5 Print success summary (e.g. "Updated N backlog items") +- [ ] 3.1.6 Remove "Import functionality pending implementation" message and TODO +- [ ] 3.1.7 Run `hatch run format` and `hatch run type-check` + +## 4. Tests and quality + +- [ ] 4.1.1 Add or extend test for refine --import-from-tmp (unit: parser; integration or unit with mock: import flow with --tmp-file and --write) +- [ ] 4.1.2 Run `hatch run contract-test` (or `hatch run smart-test`) +- [ ] 4.1.3 Run `hatch run lint` +- [ ] 4.1.4 Run `openspec validate implement-backlog-refine-import-from-tmp --strict` + +## 5. 
Documentation and PR + +- [ ] 5.1.1 Update CHANGELOG.md with fix entry +- [ ] 5.1.2 Ensure help text for --import-from-tmp and --tmp-file is accurate +- [ ] 5.1.3 Create Pull Request from feature/implement-backlog-refine-import-from-tmp to dev diff --git a/pyproject.toml b/pyproject.toml index aa03499..5d8ec3c 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "hatchling.build" [project] name = "specfact-cli" -version = "0.26.10" +version = "0.26.11" description = "Brownfield-first CLI: Reverse engineer legacy Python → specs → enforced contracts. Automate legacy code documentation and prevent modernization regressions." readme = "README.md" requires-python = ">=3.11" diff --git a/setup.py b/setup.py index 33c82e7..3ab50a8 100644 --- a/setup.py +++ b/setup.py @@ -7,7 +7,7 @@ if __name__ == "__main__": _setup = setup( name="specfact-cli", - version="0.26.10", + version="0.26.11", description="SpecFact CLI - Spec -> Contract -> Sentinel tool for contract-driven development", packages=find_packages(where="src"), package_dir={"": "src"}, diff --git a/src/__init__.py b/src/__init__.py index 1684611..e56d979 100644 --- a/src/__init__.py +++ b/src/__init__.py @@ -3,4 +3,4 @@ """ # Package version: keep in sync with pyproject.toml, setup.py, src/specfact_cli/__init__.py -__version__ = "0.26.10" +__version__ = "0.26.11" diff --git a/src/specfact_cli/__init__.py b/src/specfact_cli/__init__.py index d839440..582d1d6 100644 --- a/src/specfact_cli/__init__.py +++ b/src/specfact_cli/__init__.py @@ -9,6 +9,6 @@ - Validating reproducibility """ -__version__ = "0.26.10" +__version__ = "0.26.11" __all__ = ["__version__"] diff --git a/src/specfact_cli/commands/backlog_commands.py b/src/specfact_cli/commands/backlog_commands.py index 25f517d..36dd3a7 100644 --- a/src/specfact_cli/commands/backlog_commands.py +++ b/src/specfact_cli/commands/backlog_commands.py @@ -14,6 +14,7 @@ from __future__ import annotations import os +import re import sys from datetime 
import datetime from pathlib import Path @@ -202,6 +203,97 @@ def _build_adapter_kwargs( return kwargs +def _extract_body_from_block(block: str) -> str: + """ + Extract **Body** content from a refined export block, handling nested fenced code. + + The body is wrapped in ```markdown ... ```. If the body itself contains fenced + code blocks (e.g. ```python ... ```), the closing fence is matched by tracking + depth: a line that is exactly ``` closes the current fence (body or inner). + """ + start_marker = "**Body**:" + fence_open = "```markdown" + if start_marker not in block or fence_open not in block: + return "" + idx = block.find(start_marker) + rest = block[idx + len(start_marker) :].lstrip() + if not rest.startswith("```"): + return "" + if not rest.startswith(fence_open + "\n") and not rest.startswith(fence_open + "\r\n"): + return "" + after_open = rest[len(fence_open) :].lstrip("\n\r") + if not after_open: + return "" + lines = after_open.split("\n") + body_lines: list[str] = [] + depth = 1 + for line in lines: + stripped = line.rstrip() + if stripped == "```": + if depth == 1: + break + depth -= 1 + body_lines.append(line) + elif stripped.startswith("```") and stripped != "```": + depth += 1 + body_lines.append(line) + else: + body_lines.append(line) + return "\n".join(body_lines).strip() + + +def _parse_refined_export_markdown(content: str) -> dict[str, dict[str, Any]]: + """ + Parse refined export markdown (same format as --export-to-tmp) into id -> fields. + + Splits by ## Item blocks, extracts **ID**, **Body** (from ```markdown ... ```), + **Acceptance Criteria**, and optionally title and **Metrics** (story_points, + business_value, priority). Body extraction is fence-aware so bodies containing + nested code blocks are parsed correctly. Returns a dict mapping item id to + parsed fields (body_markdown, acceptance_criteria, title?, story_points?, + business_value?, priority?). 
+ """ + result: dict[str, dict[str, Any]] = {} + blocks = re.split(r"\n## Item \d+:", content) + for block in blocks: + block = block.strip() + if not block or block.startswith("# SpecFact") or "**ID**:" not in block: + continue + id_match = re.search(r"\*\*ID\*\*:\s*(.+?)(?:\n|$)", block) + if not id_match: + continue + item_id = id_match.group(1).strip() + fields: dict[str, Any] = {} + + fields["body_markdown"] = _extract_body_from_block(block) + + ac_match = re.search(r"\*\*Acceptance Criteria\*\*:\s*\n(.*?)(?=\n\*\*|\n---|\Z)", block, re.DOTALL) + if ac_match: + fields["acceptance_criteria"] = ac_match.group(1).strip() or None + else: + fields["acceptance_criteria"] = None + + first_line = block.split("\n")[0].strip() if block else "" + if first_line and not first_line.startswith("**"): + fields["title"] = first_line + + if "Story Points:" in block: + sp_match = re.search(r"Story Points:\s*(\d+)", block) + if sp_match: + fields["story_points"] = int(sp_match.group(1)) + if "Business Value:" in block: + bv_match = re.search(r"Business Value:\s*(\d+)", block) + if bv_match: + fields["business_value"] = int(bv_match.group(1)) + if "Priority:" in block: + pri_match = re.search(r"Priority:\s*(\d+)", block) + if pri_match: + fields["priority"] = int(pri_match.group(1)) + + result[item_id] = fields + return result + + def _fetch_backlog_items( adapter_name: str, search_query: str | None = None, @@ -680,9 +772,65 @@ def refine( raise typer.Exit(1) console.print(f"[bold cyan]Importing refined content from: {import_file}[/bold cyan]") - # TODO: Implement import logic to parse refined content and apply to items - console.print("[yellow]⚠ Import functionality pending implementation[/yellow]") - console.print("[dim]For now, use interactive refinement with --write flag[/dim]") + raw = import_file.read_text(encoding="utf-8") + parsed_by_id = _parse_refined_export_markdown(raw) + if not parsed_by_id: + console.print( + "[yellow]No valid item blocks found in import file 
(expected ## Item N: and **ID**:)[/yellow]" + ) + raise typer.Exit(1) + + updated_items: list[BacklogItem] = [] + for item in items: + if item.id not in parsed_by_id: + continue + data = parsed_by_id[item.id] + item.body_markdown = data.get("body_markdown", item.body_markdown or "") + if "acceptance_criteria" in data: + item.acceptance_criteria = data["acceptance_criteria"] + if data.get("title"): + item.title = data["title"] + if "story_points" in data: + item.story_points = data["story_points"] + if "business_value" in data: + item.business_value = data["business_value"] + if "priority" in data: + item.priority = data["priority"] + updated_items.append(item) + + if not write: + console.print(f"[green]Would update {len(updated_items)} item(s)[/green]") + console.print("[dim]Run with --write to apply changes to the backlog[/dim]") + return + + writeback_kwargs = _build_adapter_kwargs( + adapter, + repo_owner=repo_owner, + repo_name=repo_name, + github_token=github_token, + ado_org=ado_org, + ado_project=ado_project, + ado_team=ado_team, + ado_token=ado_token, + ) + adapter_instance = adapter_registry.get_adapter(adapter, **writeback_kwargs) + if not isinstance(adapter_instance, BacklogAdapter): + console.print("[bold red]✗[/bold red] Adapter does not support backlog updates") + raise typer.Exit(1) + + for item in updated_items: + update_fields_list = ["title", "body_markdown"] + if item.acceptance_criteria: + update_fields_list.append("acceptance_criteria") + if item.story_points is not None: + update_fields_list.append("story_points") + if item.business_value is not None: + update_fields_list.append("business_value") + if item.priority is not None: + update_fields_list.append("priority") + adapter_instance.update_backlog_item(item, update_fields=update_fields_list) + console.print(f"[green]✓ Updated backlog item: {item.url}[/green]") + console.print(f"[green]✓ Updated {len(updated_items)} backlog item(s)[/green]") return # Apply limit if specified @@ -1231,7 
+1379,7 @@ def map_fields( import re import sys - import questionary + import questionary # type: ignore[reportMissingImports] import requests from specfact_cli.backlog.mappers.template_config import FieldMappingConfig diff --git a/tests/unit/commands/test_backlog_commands.py b/tests/unit/commands/test_backlog_commands.py index e6ba5f7..6cfd531 100644 --- a/tests/unit/commands/test_backlog_commands.py +++ b/tests/unit/commands/test_backlog_commands.py @@ -11,6 +11,7 @@ from typer.testing import CliRunner from specfact_cli.cli import app +from specfact_cli.commands.backlog_commands import _parse_refined_export_markdown from specfact_cli.models.backlog_item import BacklogItem @@ -210,3 +211,121 @@ def test_map_fields_requires_token(self) -> None: # Should fail with error about missing token assert result.exit_code != 0 assert "token required" in result.stdout.lower() or "error" in result.stdout.lower() + + +class TestParseRefinedExportMarkdown: + """Tests for _parse_refined_export_markdown (refine --import-from-tmp parser).""" + + def test_parses_single_item_with_body_and_id(self) -> None: + """Parser extracts ID and body from export-format block.""" + content = """ +# SpecFact Backlog Refinement Export + +**Export Date**: 2026-01-27 +**Adapter**: github +**Items**: 1 + +--- + +## Item 1: My Title + +**ID**: issue-42 +**URL**: https://github.com/org/repo/issues/42 +**State**: open +**Provider**: github + +**Body**: +```markdown +Refined body text here. +``` +""" + result = _parse_refined_export_markdown(content) + assert "issue-42" in result + assert result["issue-42"]["body_markdown"] == "Refined body text here." 
+ assert result["issue-42"].get("title") == "My Title" + + def test_parses_acceptance_criteria_and_metrics(self) -> None: + """Parser extracts acceptance criteria and metrics when present.""" + content = """ +## Item 1: Story title + +**ID**: 123 +**URL**: u +**State**: open +**Provider**: ado + +**Metrics**: +- Story Points: 5 +- Business Value: 8 +- Priority: 1 (1=highest) + +**Acceptance Criteria**: +- AC one +- AC two + +**Body**: +```markdown +Body content +``` +--- +""" + result = _parse_refined_export_markdown(content) + assert "123" in result + assert result["123"]["acceptance_criteria"] == "- AC one\n- AC two" + assert result["123"]["story_points"] == 5 + assert result["123"]["business_value"] == 8 + assert result["123"]["priority"] == 1 + assert result["123"]["body_markdown"] == "Body content" + + def test_returns_empty_for_header_only(self) -> None: + """Parser returns empty dict when no ## Item blocks.""" + content = "# SpecFact Backlog Refinement Export\n\n**Items**: 0\n\n---\n\n" + result = _parse_refined_export_markdown(content) + assert result == {} + + def test_skips_blocks_without_id(self) -> None: + """Parser skips blocks that do not contain **ID**:.""" + content = """ +## Item 1: No ID here + +**URL**: x +**Body**: +```markdown +nope +``` +""" + result = _parse_refined_export_markdown(content) + assert result == {} + + def test_body_with_nested_fenced_code_blocks(self) -> None: + """Parser preserves full body when it contains fenced code blocks.""" + content = """ +## Item 1: Bug with code sample + +**ID**: issue-99 +**URL**: https://github.com/org/repo/issues/99 +**State**: open +**Provider**: github + +**Body**: +```markdown +Reproduction: run this: + +```python +def foo(): + return 42 +``` + +Then we see the error. 
+``` +--- +""" + result = _parse_refined_export_markdown(content) + assert "issue-99" in result + body = result["issue-99"]["body_markdown"] + assert "Reproduction: run this:" in body + assert "```python" in body + assert "def foo():" in body + assert "return 42" in body + assert "```" in body + assert "Then we see the error." in body diff --git a/tests/unit/sync/test_bridge_probe.py b/tests/unit/sync/test_bridge_probe.py index 42b25ac..4b518c0 100644 --- a/tests/unit/sync/test_bridge_probe.py +++ b/tests/unit/sync/test_bridge_probe.py @@ -164,9 +164,9 @@ def test_auto_generate_bridge_unknown(self, tmp_path): capabilities = ToolCapabilities(tool="unknown") # Unknown tool should raise ViolationError (contract precondition fails before method body) # The @require decorator checks capabilities.tool != "unknown" before the method executes - import icontract + from icontract import ViolationError - with pytest.raises(icontract.errors.ViolationError, match="Tool must be detected"): + with pytest.raises(ViolationError, match="Tool must be detected"): probe.auto_generate_bridge(capabilities) def test_detect_openspec(self, tmp_path):