Conversation

Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings, or use the commands and checkboxes below to manage reviews.
Walkthrough
Adds a new interactive and non-interactive `dfetch add` flow.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI as "dfetch add"
    participant Remote as "VCS Remote"
    participant Terminal as "Terminal UI"
    participant Manifest
    User->>CLI: dfetch add -i <url>
    CLI->>Remote: probe default branch / fetch tree metadata
    Remote-->>CLI: default branch, branches/tags, ls callable
    CLI->>Terminal: prompt name / dst / version / src / ignore
    Terminal-->>CLI: user responses
    CLI->>Manifest: validate_name / validate_destination / find_remote_for_url
    Manifest-->>CLI: validation result
    CLI->>Manifest: append_entry_manifest_file(project entry)
    CLI->>Terminal: confirm run update?
    Terminal-->>CLI: confirmation
    alt run update
        CLI->>Remote: fetch project content (update)
    end
    CLI-->>User: complete
```

```mermaid
sequenceDiagram
    participant User
    participant CLI as "dfetch add"
    participant Remote as "VCS Remote"
    participant Manifest
    User->>CLI: dfetch add <url> (non-interactive)
    CLI->>Remote: probe default branch
    Remote-->>CLI: branch detected
    CLI->>Manifest: check_name_uniqueness() / guess_destination()
    Manifest-->>CLI: name/dst results
    CLI->>Manifest: append_entry_manifest_file()
    CLI-->>User: complete
```
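The non-interactive defaults shown in the second diagram (project name derived from the URL) can be sketched roughly as follows; `guess_name` is a hypothetical helper for illustration, not dfetch's actual implementation:

```python
# Hypothetical helper illustrating the "name from URL" default in the
# non-interactive flow above; dfetch's real logic may differ.
def guess_name(url: str) -> str:
    """Derive a project name from a VCS URL, e.g. '.../org/myrepo.git' -> 'myrepo'."""
    tail = url.rstrip("/").rsplit("/", 1)[-1]
    return tail[:-4] if tail.endswith(".git") else tail

print(guess_name("https://github.com/org/myrepo.git"))  # → myrepo
```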
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ Passed checks (5 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Force-pushed: c294ece → 25c0f00
Force-pushed: 25c0f00 → 80ad14c
Force-pushed: 80ad14c → 70d18b2
Force-pushed: 70d18b2 → 8c84341
Fixes #25
Implements the -i/--interactive flag for `dfetch add` that guides the user step-by-step through each manifest field:

- name (default: repo name from URL)
- dst (default: guessed from common prefix of existing projects)
- version type: branch / tag / revision (with list of available branches/tags)
- src (optional sub-path or glob)

Also includes:

- list_of_branches() method added to GitRemote, GitSubProject and SubProject base
- list_of_tags() public wrapper on SubProject base class
- Smarter destination guessing: works with a single existing project too
- Comprehensive behave feature tests (6 scenarios, including interactive flow)
- 21 pytest unit tests covering all code paths
- Updated user documentation in manual.rst and CLI cheatsheet

https://claude.ai/code/session_01XnrSn9ar6cLpL6Y2qDDGXd
UX improvements inspired by inquirer.js / carbon-cli style:
- Version selection is now a single step: show a short numbered list
(max 5 branches + 5 tags) instead of the previous two-step
"choose type then value" flow. User can pick by number, type a known
branch/tag name, or enter any SHA revision directly.
- Branches are listed default-first; tags are sorted newest semver first
with non-semver tags appended, so the most likely choices appear at top.
- Confirmation uses rich.prompt.Confirm (y/n keys) instead of the old
Prompt.ask("y/n") text prompt.
- Name and destination inputs now validate on entry:
- Name: rejects YAML-unsafe characters (# : [ ] { } & * etc.)
- Destination: rejects path traversal (.. components)
- Add semver import for tag ordering.
- Update 22 unit tests and 6 behave scenarios to match new interface.
- Add test_add_command_interactive_branch_by_number covering numeric pick.
https://claude.ai/code/session_01XnrSn9ar6cLpL6Y2qDDGXd
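The tag ordering described above (newest semver first, non-semver tags appended) can be sketched like this; the helper name and regex are illustrative assumptions, since the commit says the actual code uses the `semver` package:

```python
import re

# Illustrative sketch of semver-aware tag ordering: newest version first,
# non-semver tags appended in their original order.
_SEMVER = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)$")

def order_tags(tags: list[str]) -> list[str]:
    semver_tags, other_tags = [], []
    for tag in tags:
        match = _SEMVER.match(tag)
        if match:
            semver_tags.append((tuple(int(g) for g in match.groups()), tag))
        else:
            other_tags.append(tag)
    # Numeric tuple comparison avoids the lexicographic trap ("1.10" < "1.2").
    semver_tags.sort(key=lambda pair: pair[0], reverse=True)
    return [tag for _, tag in semver_tags] + other_tags
```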
…st-add update

Interactive mode now uses a real scrollable list for version selection (arrow keys, PgUp/PgDn, viewport shifts at edges) and a keyboard-driven tree browser for src/ignore selection (expand/collapse dirs, single- or multi-select, Space to toggle). After confirming the add, the user is offered to run dfetch update immediately.

Changes:

- add.py: _scrollable_pick() with 10-item viewport and ANSI in-place redraw via _Screen; _tree_browser() with _TreeNode dataclass, lazy child loading, expand/collapse, multi-select; _ask_src(ls_fn) and _ask_ignore(ls_fn) wire through the tree; post-add Update() call
- git.py: GitRemote.clone_minimal() and ls_tree() for remote tree listing
- gitsubproject.py: browse_tree() context manager (shallow blobless clone)
- subproject.py: browse_tree() base implementation (empty ls_fn)
- feature tests: two-Confirm handling (add + update); ignore row added
- unit tests: side_effect=[True, False] for Confirm; new ignore test and run-update test; _make_subproject() helper with browse_tree mock

https://claude.ai/code/session_01XnrSn9ar6cLpL6Y2qDDGXd
…wser

Addresses separation of concerns, primitive obsession, and abstraction level consistency:

- dfetch/util/terminal.py (new): Screen, read_key, is_tty, scrollable_pick, and named ANSI constants (RESET/BOLD/DIM/CYAN/MAGENTA/GREEN/YELLOW/REVERSE). Pure I/O with no dfetch domain knowledge.
- VersionRef dataclass (frozen): replaces tuple[str, str] for version results; carries kind (Literal branch/tag/revision) + value and owns apply_to(entry_dict) so callers never switch on the string kind.
- _run_tree_browser() + _tree_single_pick()/multi_pick(): replaces _tree_browser(…, multi: bool) -> list|str|None union return. Public API has unambiguous types; shared navigation loop is one private function.
- _build_entry() extracted from _interactive_flow() so that flow stays at a uniform "ask questions" abstraction level.
- _non_interactive_entry() extracts the else-branch of Add.__call__ for the same reason.
- LsFn = Callable[[str], list[tuple[str, bool]]] defined in subproject.py (the base class that declares browse_tree) and imported by gitsubproject.py and add.py, replacing the repeated verbose annotation.

https://claude.ai/code/session_01XnrSn9ar6cLpL6Y2qDDGXd
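The frozen VersionRef idea described in the commit can be sketched as follows; the field names follow the commit text, but the entry-dict keys written by apply_to are an assumption, not dfetch's confirmed manifest schema:

```python
from dataclasses import dataclass
from typing import Literal

# Sketch of a frozen VersionRef that owns the "write myself into an entry"
# logic, so callers never switch on the string kind.
@dataclass(frozen=True)
class VersionRef:
    kind: Literal["branch", "tag", "revision"]
    value: str

    def apply_to(self, entry: dict) -> None:
        """Record this version in a manifest entry dict under its kind's key."""
        entry[self.kind] = self.value
```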
Force-pushed: 8c84341 → 526dbec
Actionable comments posted: 6
🧹 Nitpick comments (8)
dfetch/log.py (2)
106-115: Consider escaping values to prevent markup injection.

Unlike `print_report_line` and `print_info_line`, which use `markup_escape`, this method outputs `value` directly. If a value contains Rich markup characters (e.g., `[blue]`), they will be interpreted as formatting rather than literal text.

♻️ Proposed fix

```diff
 def print_overview(self, name: str, title: str, info: dict[str, Any]) -> None:
     """Print an overview of fields."""
     self.print_info_line(name, title)
     for key, value in info.items():
         if isinstance(value, list):
             self.info(f"  [blue]{key + ':':20s}[/blue]")
             for item in value:
-                self.info(f"  {'':20s}[white]- {item}[/white]")
+                self.info(f"  {'':20s}[white]- {markup_escape(str(item))}[/white]")
         else:
-            self.info(f"  [blue]{key + ':':20s}[/blue][white] {value}[/white]")
+            self.info(f"  [blue]{key + ':':20s}[/blue][white] {markup_escape(str(value))}[/white]")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dfetch/log.py` around lines 106 - 115, The print_overview method currently emits values and list items without escaping, allowing Rich markup injection; update print_overview (in particular the function print_overview and any lines that print list elements and scalar values) to call the same markup_escape utility used by print_report_line and print_info_line for both keys and values before passing them to self.info, and ensure list-item entries (the inner loop over value in list) also use markup_escape so all printed content is rendered literally.
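A minimal self-contained stand-in shows the escaping idea (in Rich itself, `rich.markup.escape` serves this purpose); the regex below is an illustration, not dfetch's actual `markup_escape` helper:

```python
import re

# Stand-in for a markup_escape helper: prefix any "[" that could open a
# Rich tag with a backslash so user-supplied values render literally.
_TAG_OPEN = re.compile(r"\[(?=[a-zA-Z#/@])")

def markup_escape(text: str) -> str:
    return _TAG_OPEN.sub(r"\\[", text)
```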
133-148: Consider escaping values for consistency with other logging methods.

Similar to the `print_overview` method, values here are output directly without `markup_escape`. For consistency with other DLogger methods and to prevent accidental markup interpretation, consider escaping values.

♻️ Proposed fix

```diff
 def print_yaml_field(
     self, key: str, value: str | list[str], *, first: bool = False
 ) -> None:
     ...
     prefix = "  - " if first else "    "
     if isinstance(value, list):
         self.info(f"{prefix}[blue]{key}:[/blue]")
         for item in value:
-            self.info(f"      - {item}")
+            self.info(f"      - {markup_escape(str(item))}")
     else:
-        self.info(f"{prefix}[blue]{key}:[/blue] {value}")
+        self.info(f"{prefix}[blue]{key}:[/blue] {markup_escape(str(value))}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dfetch/log.py` around lines 133 - 148, The YAML field printer print_yaml_field currently writes values directly to self.info and should use the same escaping as print_overview to avoid accidental markup; update print_yaml_field to call self.markup_escape on scalar values and on each list item before passing them to self.info (keep the existing prefixes and formatting) so values are consistently escaped across DLogger methods.

doc/generate-casts/add-demo.sh (1)
8-10: Harden directory transitions and cleanup flow.

Lines 9 and 23 currently assume `pushd`/`popd` always succeed. If navigation fails, the script can execute in the wrong directory and cleanup becomes unsafe. Consider adding strict mode plus guarded directory changes.

Suggested hardening patch

```diff
 #!/usr/bin/env bash
+set -euo pipefail
 source ./demo-magic/demo-magic.sh
 PROMPT_TIMEOUT=1
 # Copy example manifest
-mkdir add
-pushd add
+mkdir -p add
+pushd add >/dev/null || exit 1
 dfetch init
 clear
@@
 pei ""
-popd
+popd >/dev/null || exit 1
 rm -rf add
```

Also applies to: 23-24
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/generate-casts/add-demo.sh` around lines 8 - 10, Enable strict shell error handling (set -euo pipefail) and add a guarded directory-change and cleanup pattern: create a cleanup function and trap EXIT/ERR, only call popd if pushd succeeded, and check the result of pushd (the pushd/popd pair referenced in the script as pushd and popd around the mkdir add / dfetch init sequence). Ensure mkdir add is checked for success before entering the directory and that any early exit triggers the cleanup function which safely removes the created directory only when it was actually created and we successfully changed into it.

doc/generate-casts/interactive-add-demo.sh (1)
12: Guard `pushd`/`popd` and use strict mode for safer demo runs.

Lines 12 and 39 currently assume directory stack ops always succeed. If they fail, subsequent commands/cleanup can run in the wrong place.

Suggested hardening patch

```diff
 #!/usr/bin/env bash
+set -euo pipefail
 # Demo of dfetch add -i (interactive wizard mode).
@@
-mkdir interactive-add
-pushd interactive-add
+mkdir -p interactive-add
+pushd interactive-add >/dev/null || exit 1
@@
-popd
+popd >/dev/null || exit 1
 rm -rf interactive-add
```

Also applies to: 39-40
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/generate-casts/interactive-add-demo.sh` at line 12, Enable strict mode (set -euo pipefail) at the top of the script and guard pushd/popd calls so failures abort the demo: add "set -euo pipefail" near the script start, replace bare "pushd interactive-add" with "pushd interactive-add || { echo 'pushd failed' >&2; exit 1; }", and protect the corresponding "popd" with "popd || { echo 'popd failed' >&2; exit 1; }" (or ensure a safe cleanup trap) so both the pushd/popd operations and the overall script will fail fast and not run in the wrong directory.

doc/asciicasts/interactive-add.cast (1)
2: Consider normalizing machine-specific absolute paths in generated cast output.

The captured `/home/ben/...` paths add environment-specific noise and may leak local identity details in docs artifacts. If feasible, post-process or record from a stable path prefix for more reproducible casts.

Also applies to: 90-90, 115-115
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/asciicasts/interactive-add.cast` at line 2, Replace machine-specific absolute paths captured in the asciicast (e.g., "/home/ben/Programming/dfetch/doc/generate-casts" in interactive-add.cast) with a stable, reproducible prefix (for example "$PROJECT_ROOT" or a relative path) by updating the cast generation/post-processing step used by generate-casts: detect absolute paths in recorded output and rewrite them before writing interactive-add.cast (and other cast files) so the recorded command/output lines are normalized and do not leak local user/home information.

doc/generate-casts/interactive_add_helper.py (1)
38-52: Consider documenting queue exhaustion behavior.

If more prompts occur than answers are queued (e.g., due to code changes), `popleft()` on an empty deque will raise `IndexError`. The current code guards with `if _PROMPT_ANSWERS else None`, but for maintainability, a comment noting this design choice would help future editors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/generate-casts/interactive_add_helper.py` around lines 38 - 52, The test helper uses deque objects _PROMPT_ANSWERS and _CONFIRM_ANSWERS and calls popleft() to supply mock responses; if code changes generate more prompts than queued answers, popleft() will raise IndexError, so add a short comment next to the _PROMPT_ANSWERS and _CONFIRM_ANSWERS declarations (and/or where popleft() is called) documenting that empty deque consumption will raise IndexError and that callers should ensure the queue is populated (or the guard if _PROMPT_ANSWERS else None is relied upon), so future maintainers understand the intended exhaustion behavior and why the simple guard is used.

dfetch/commands/add.py (1)
85-108: Consider abstracting the remote access pattern.

Line 93 accesses the protected member `_remote_repo` directly. While acceptable for internal module use, consider adding a public property or method on `SubProject` to provide the remote repository access cleanly, especially since this pattern may be reused elsewhere.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dfetch/commands/add.py` around lines 85 - 108, The code reads the protected member _remote_repo inside browse_tree when handling GitSubProject/SvnSubProject; add a public accessor on SubProject (e.g., a property or method named remote_repo or get_remote_repo) that returns the underlying remote object, implement/override it on GitSubProject and SvnSubProject to return the same instance currently exposed as _remote_repo, and update browse_tree to call subproject.remote_repo (or subproject.get_remote_repo()) instead of accessing _remote_repo directly; keep the existing inner ls and _empty functions unchanged.

dfetch/manifest/manifest.py (1)
377-394: Duplicate uniqueness validation logic.

Both `check_name_uniqueness` (line 379) and `validate_project_name` (line 393) check if the project name already exists in the manifest. Consider consolidating by having `validate_project_name` call `check_name_uniqueness` internally, or documenting when to use each method.

♻️ Proposed consolidation

```diff
 def validate_project_name(self, name: str) -> None:
     """Raise ValueError if *name* is not valid for use in this manifest."""
     if not name:
         raise ValueError("Name cannot be empty.")
     if self._UNSAFE_NAME_RE.search(name):
         raise ValueError(
             f"Name '{name}' contains characters not allowed in a manifest name. "
             "Avoid: # : [ ] { } & * ! | > ' \" % @ `"
         )
-    if name in {p.name for p in self.projects}:
-        raise ValueError(f"Project with name '{name}' already exists in manifest!")
+    try:
+        self.check_name_uniqueness(name)
+    except RuntimeError as exc:
+        raise ValueError(str(exc)) from exc
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dfetch/manifest/manifest.py` around lines 377 - 394, Consolidate the duplicate uniqueness check by having validate_project_name call check_name_uniqueness instead of re-checking the projects set: remove the "if name in {p.name for p in self.projects}" block from validate_project_name and replace it with a call to self.check_name_uniqueness(name); also change check_name_uniqueness to raise ValueError (not RuntimeError) so the validation functions raise consistent exception types (update the error message there if needed). This uses the existing functions check_name_uniqueness and validate_project_name to centralize uniqueness logic.
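The consolidation can be shown with a self-contained sketch; `Manifest` here is a stand-in holding only the pieces needed to demonstrate `validate_project_name` delegating to `check_name_uniqueness`, not dfetch's actual class:

```python
import re

# Stand-in Manifest demonstrating a single source of truth for uniqueness:
# validate_project_name delegates instead of re-checking the projects set.
class Manifest:
    _UNSAFE_NAME_RE = re.compile(r"[#:\[\]{}&*!|>'\"%@`]")

    def __init__(self, names):
        self._names = set(names)

    def check_name_uniqueness(self, name: str) -> None:
        if name in self._names:
            raise ValueError(f"Project with name '{name}' already exists in manifest!")

    def validate_project_name(self, name: str) -> None:
        if not name:
            raise ValueError("Name cannot be empty.")
        if self._UNSAFE_NAME_RE.search(name):
            raise ValueError(f"Name '{name}' contains characters not allowed.")
        self.check_name_uniqueness(name)  # delegate; no duplicated membership check
```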
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dfetch/terminal/pick.py`:
- Around line 104-145: The loop doesn't handle an empty display_items list and
can render misleading "more" messages; add an early guard in scrollable_pick
after computing n (len(display_items)) to handle n == 0: call screen.clear() and
return [] if multi is True (multi-select returns a list) or return None for
single-select, ensuring you reference the existing Screen instance (screen) and
variables n and multi so the rest of the render/scroll logic
(_render_pick_lines, screen.draw, selected) is skipped.
In `@dfetch/terminal/tree_browser.py`:
- Around line 107-115: The collapse implementation in _collapse currently
deletes descendant nodes and clears children_loaded, which loses nested
selection state; instead, stop removing entries from self._nodes—mark the node
at idx as expanded = False and mark all descendant nodes (those with depth >
parent_depth until end) as hidden/visible=False (or set a flag like
collapsed_ancestor=True) while leaving children_loaded True so selection state
is preserved; adjust tree rendering logic (used by
run_tree_browser/deselected_paths) to skip rendering nodes with a
hidden/collapsed_ancestor flag, and ensure any existing selection/deselection
logic reads selections from self._nodes irrespective of visibility.
In `@dfetch/vcs/git.py`:
- Around line 236-244: The current try/except around fetch_for_tree_browse()
swallows all exceptions and leaves ls() returning [] which hides errors; change
the except to capture the exception as e, log the failure with context and error
details (using the module/class logger), and set up ls (the inner function) to
raise a clear runtime error (or re-raise the original exception) when called
instead of silently returning [], while preserving the successful path that
calls GitRemote.ls_tree(tmpdir, path=path) when cloned is True; reference
fetch_for_tree_browse, cloned, ls, and GitRemote.ls_tree to locate the change.
In `@features/steps/add_steps.py`:
- Around line 81-94: Replace the fragile line-by-line substring check that
builds missing from expected/actual with a YAML-aware comparison: parse actual
with yaml.safe_load and read manifest = data.get("manifest", {}) and projects =
manifest.get("projects", []), parse the expected snippet into a dict
(yaml.safe_load on expected) and assert that at least one item in projects
equals that expected dict; update the logic around the existing variables
expected, actual, missing to collect a clear error if no project matches rather
than checking scattered substrings.
In `@tests/test_add.py`:
- Around line 715-743: The test only checked that append_entry_manifest_file was
called; instead capture the appended ProjectEntry from mock_append.call_args
(e.g. entry = mock_append.call_args.args[0]) and assert that the entry uses the
manifest remote/path form rather than the full HTTP URL: check that the entry's
URL-like attribute (e.g. getattr(entry, "url", "") or getattr(entry, "repo",
"")) contains the fake_remote.name ("github") and the repo path segment
("org/myrepo") and does not start with "http"; update
test_add_command_matches_existing_remote to perform these assertions after the
Add() invocation.
In `@tests/test_tree_browser.py`:
- Around line 25-27: settings.load_profile("dev") is being called
unconditionally which overrides the CI profile; update the module initialization
to load the "ci" profile when running in CI by checking the CI environment
variable before calling settings.load_profile, keeping the existing
settings.register_profile("ci", ...) and settings.register_profile("dev", ...);
specifically, replace the unconditional settings.load_profile("dev") call with a
conditional branch that calls settings.load_profile("ci") if os.getenv("CI") is
truthy, otherwise calls settings.load_profile("dev") (or alternatively apply
`@settings`(max_examples=..., deadline=None) to individual tests instead of
changing global profile).
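The empty-input guard suggested for `scrollable_pick` in the first inline comment can be sketched as below; the interactive render/scroll loop is stubbed out (here it just returns the first item), so this only illustrates the guard's return-type contract:

```python
# Minimal sketch of the empty-input guard suggested for scrollable_pick.
# The real function renders an interactive viewport; selection is stubbed.
def scrollable_pick(display_items, multi=False):
    n = len(display_items)
    if n == 0:
        # Nothing to show: empty selection for multi-select,
        # no choice at all for single-select.
        return [] if multi else None
    # ... interactive render/scroll loop elided ...
    return [display_items[0]] if multi else display_items[0]
```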
---
Nitpick comments:
In `@dfetch/commands/add.py`:
- Around line 85-108: The code reads the protected member _remote_repo inside
browse_tree when handling GitSubProject/SvnSubProject; add a public accessor on
SubProject (e.g., a property or method named remote_repo or get_remote_repo)
that returns the underlying remote object, implement/override it on
GitSubProject and SvnSubProject to return the same instance currently exposed as
_remote_repo, and update browse_tree to call subproject.remote_repo (or
subproject.get_remote_repo()) instead of accessing _remote_repo directly; keep
the existing inner ls and _empty functions unchanged.
In `@dfetch/log.py`:
- Around line 106-115: The print_overview method currently emits values and list
items without escaping, allowing Rich markup injection; update print_overview
(in particular the function print_overview and any lines that print list
elements and scalar values) to call the same markup_escape utility used by
print_report_line and print_info_line for both keys and values before passing
them to self.info, and ensure list-item entries (the inner loop over value in
list) also use markup_escape so all printed content is rendered literally.
- Around line 133-148: The YAML field printer print_yaml_field currently writes
values directly to self.info and should use the same escaping as print_overview
to avoid accidental markup; update print_yaml_field to call self.markup_escape
on scalar values and on each list item before passing them to self.info (keep
the existing prefixes and formatting) so values are consistently escaped across
DLogger methods.
In `@dfetch/manifest/manifest.py`:
- Around line 377-394: Consolidate the duplicate uniqueness check by having
validate_project_name call check_name_uniqueness instead of re-checking the
projects set: remove the "if name in {p.name for p in self.projects}" block from
validate_project_name and replace it with a call to
self.check_name_uniqueness(name); also change check_name_uniqueness to raise
ValueError (not RuntimeError) so the validation functions raise consistent
exception types (update the error message there if needed). This uses the
existing functions check_name_uniqueness and validate_project_name to centralize
uniqueness logic.
In `@doc/asciicasts/interactive-add.cast`:
- Line 2: Replace machine-specific absolute paths captured in the asciicast
(e.g., "/home/ben/Programming/dfetch/doc/generate-casts" in
interactive-add.cast) with a stable, reproducible prefix (for example
"$PROJECT_ROOT" or a relative path) by updating the cast
generation/post-processing step used by generate-casts: detect absolute paths in
recorded output and rewrite them before writing interactive-add.cast (and other
cast files) so the recorded command/output lines are normalized and do not leak
local user/home information.
In `@doc/generate-casts/add-demo.sh`:
- Around line 8-10: Enable strict shell error handling (set -euo pipefail) and
add a guarded directory-change and cleanup pattern: create a cleanup function
and trap EXIT/ERR, only call popd if pushd succeeded, and check the result of
pushd (the pushd/popd pair referenced in the script as pushd and popd around the
mkdir add / dfetch init sequence). Ensure mkdir add is checked for success
before entering the directory and that any early exit triggers the cleanup
function which safely removes the created directory only when it was actually
created and we successfully changed into it.
In `@doc/generate-casts/interactive_add_helper.py`:
- Around line 38-52: The test helper uses deque objects _PROMPT_ANSWERS and
_CONFIRM_ANSWERS and calls popleft() to supply mock responses; if code changes
generate more prompts than queued answers, popleft() will raise IndexError, so
add a short comment next to the _PROMPT_ANSWERS and _CONFIRM_ANSWERS
declarations (and/or where popleft() is called) documenting that empty deque
consumption will raise IndexError and that callers should ensure the queue is
populated (or the guard if _PROMPT_ANSWERS else None is relied upon), so future
maintainers understand the intended exhaustion behavior and why the simple guard
is used.
In `@doc/generate-casts/interactive-add-demo.sh`:
- Line 12: Enable strict mode (set -euo pipefail) at the top of the script and
guard pushd/popd calls so failures abort the demo: add "set -euo pipefail" near
the script start, replace bare "pushd interactive-add" with "pushd
interactive-add || { echo 'pushd failed' >&2; exit 1; }", and protect the
corresponding "popd" with "popd || { echo 'popd failed' >&2; exit 1; }" (or
ensure a safe cleanup trap) so both the pushd/popd operations and the overall
script will fail fast and not run in the wrong directory.
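The guarded pushd/popd pattern recommended for both demo scripts has a direct Python analogue, shown here as a sketch: fail fast if the directory change fails, and always restore the previous directory on exit:

```python
import os
from contextlib import contextmanager

# Python analogue of `pushd dir || exit 1` ... `popd`: os.chdir raises
# immediately if the path is missing, and the finally block always restores.
@contextmanager
def pushd(path):
    previous = os.getcwd()
    os.chdir(path)  # raises OSError when the directory does not exist
    try:
        yield
    finally:
        os.chdir(previous)
```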
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 746b40a9-7eb2-4179-aa09-6e469b2e02e8
📒 Files selected for processing (39)
- CHANGELOG.rst
- dfetch/__main__.py
- dfetch/commands/add.py
- dfetch/commands/report.py
- dfetch/log.py
- dfetch/manifest/manifest.py
- dfetch/manifest/project.py
- dfetch/manifest/version.py
- dfetch/project/gitsubproject.py
- dfetch/project/subproject.py
- dfetch/project/svnsubproject.py
- dfetch/terminal/__init__.py
- dfetch/terminal/ansi.py
- dfetch/terminal/keys.py
- dfetch/terminal/pick.py
- dfetch/terminal/prompt.py
- dfetch/terminal/screen.py
- dfetch/terminal/tree_browser.py
- dfetch/terminal/types.py
- dfetch/util/license.py
- dfetch/util/util.py
- dfetch/util/versions.py
- dfetch/vcs/archive.py
- dfetch/vcs/git.py
- dfetch/vcs/svn.py
- doc/asciicasts/add.cast
- doc/asciicasts/interactive-add.cast
- doc/generate-casts/add-demo.sh
- doc/generate-casts/generate-casts.sh
- doc/generate-casts/interactive-add-demo.sh
- doc/generate-casts/interactive_add_helper.py
- doc/manual.rst
- features/add-project-through-cli.feature
- features/steps/add_steps.py
- pyproject.toml
- script/package.py
- tests/manifest_mock.py
- tests/test_add.py
- tests/test_tree_browser.py
```python
def _collapse(self, idx: int) -> None:
    """Collapse the directory node at *idx* and remove all descendants."""
    parent_depth = self._nodes[idx].depth
    end = idx + 1
    while end < len(self._nodes) and self._nodes[end].depth > parent_depth:
        end += 1
    del self._nodes[idx + 1 : end]
    self._nodes[idx].expanded = False
    self._nodes[idx].children_loaded = False
```
Collapsing a directory drops descendant selection state.
Deleting the loaded subtree and resetting children_loaded means any nested selections vanish as soon as the user collapses that directory. That breaks callers that rely on run_tree_browser(...)[1] / deselected_paths() reflecting the final tree state, e.g. ignore selection after a partial deselect. Preserve descendant state across collapse/re-expand, or store it outside the visible node list.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dfetch/terminal/tree_browser.py` around lines 107 - 115, The collapse
implementation in _collapse currently deletes descendant nodes and clears
children_loaded, which loses nested selection state; instead, stop removing
entries from self._nodes—mark the node at idx as expanded = False and mark all
descendant nodes (those with depth > parent_depth until end) as
hidden/visible=False (or set a flag like collapsed_ancestor=True) while leaving
children_loaded True so selection state is preserved; adjust tree rendering
logic (used by run_tree_browser/deselected_paths) to skip rendering nodes with a
hidden/collapsed_ancestor flag, and ensure any existing selection/deselection
logic reads selections from self._nodes irrespective of visibility.
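The visibility-flag alternative suggested above can be sketched as follows; collapsing marks descendants hidden instead of deleting them, so selection state survives a collapse/re-expand cycle. Node fields here are assumptions based on the review text, not dfetch's actual `_TreeNode`:

```python
from dataclasses import dataclass

# Sketch: hide descendants on collapse instead of deleting them,
# preserving each node's .selected flag across collapse/re-expand.
@dataclass
class Node:
    path: str
    depth: int
    expanded: bool = False
    hidden: bool = False
    selected: bool = False

class Tree:
    def __init__(self, nodes):
        self._nodes = list(nodes)

    def collapse(self, idx: int) -> None:
        parent_depth = self._nodes[idx].depth
        self._nodes[idx].expanded = False
        end = idx + 1
        while end < len(self._nodes) and self._nodes[end].depth > parent_depth:
            self._nodes[end].hidden = True  # keep the node and its .selected flag
            end += 1

    def expand(self, idx: int) -> None:
        parent_depth = self._nodes[idx].depth
        self._nodes[idx].expanded = True
        end = idx + 1
        while end < len(self._nodes) and self._nodes[end].depth > parent_depth:
            if self._nodes[end].depth == parent_depth + 1:
                self._nodes[end].hidden = False  # direct children reappear
            end += 1

    def visible(self):
        return [n for n in self._nodes if not n.hidden]
```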
```python
cloned = False
try:
    self.fetch_for_tree_browse(tmpdir, version or self.get_default_branch())
    cloned = True
except Exception:  # pylint: disable=broad-exception-caught # nosec B110
    pass

def ls(path: str = "") -> list[tuple[str, bool]]:
    return GitRemote.ls_tree(tmpdir, path=path) if cloned else []
```
Don't turn browse failures into an empty tree.
If fetch_for_tree_browse() fails, this silently falls back to cloned = False and the yielded ls() returns []. In the interactive add flow that is indistinguishable from “nothing to browse”, so auth/network/version errors quietly disable src/ignore browsing. Please log and surface an explicit fallback path instead of swallowing the exception.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dfetch/vcs/git.py` around lines 236 - 244, The current try/except around
fetch_for_tree_browse() swallows all exceptions and leaves ls() returning []
which hides errors; change the except to capture the exception as e, log the
failure with context and error details (using the module/class logger), and set
up ls (the inner function) to raise a clear runtime error (or re-raise the
original exception) when called instead of silently returning [], while
preserving the successful path that calls GitRemote.ls_tree(tmpdir, path=path)
when cloned is True; reference fetch_for_tree_browse, cloned, ls, and
GitRemote.ls_tree to locate the change.
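The suggested behavior (log the clone failure, make the yielded `ls()` raise instead of returning an empty tree) can be sketched like this; the fetch and listing callables are parameters for illustration, standing in for `fetch_for_tree_browse` and `GitRemote.ls_tree`:

```python
import logging

logger = logging.getLogger(__name__)

# Sketch: capture a fetch failure, log it, and surface it on first use of
# ls() instead of silently returning an empty tree.
def make_ls(fetch, list_tree):
    try:
        fetch()
        failure = None
    except Exception as exc:  # broad on purpose; re-raised from ls()
        logger.warning("Tree browse unavailable: %s", exc)
        failure = exc

    def ls(path: str = ""):
        if failure is not None:
            raise RuntimeError(f"Cannot browse remote tree: {failure}") from failure
        return list_tree(path)

    return ls
```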
```python
# Check that every line of the expected snippet is present somewhere in
# the manifest (order-insensitive substring check per line).
missing = []
for line in expected.splitlines():
    stripped = line.strip()
    if stripped and stripped not in actual:
        missing.append(line)

if missing:
    print("Actual manifest:")
    print(actual)
    assert not missing, "Expected lines not found in manifest:\n" + "\n".join(
        missing
    )
```
Parse the manifest entry instead of matching scattered substrings.
This passes as soon as each expected line appears somewhere in the file, even if those lines come from different project blocks or comments. That can miss a broken dfetch add result once the manifest contains multiple similar entries. Parse the YAML and assert that one item under manifest.projects matches the expected entry.
Based on learnings, the manifest YAML schema requires a top-level manifest: key with project entries nested under manifest.projects.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@features/steps/add_steps.py` around lines 81 - 94, Replace the fragile
line-by-line substring check that builds missing from expected/actual with a
YAML-aware comparison: parse actual with yaml.safe_load and read manifest =
data.get("manifest", {}) and projects = manifest.get("projects", []), parse the
expected snippet into a dict (yaml.safe_load on expected) and assert that at
least one item in projects equals that expected dict; update the logic around
the existing variables expected, actual, missing to collect a clear error if no
project matches rather than checking scattered substrings.
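The failure mode can be made concrete with a small self-contained sketch (hard-coded data standing in for `yaml.safe_load` output): the substring check passes even though the expected lines come from two different entries, while a dict-level predicate does not:

```python
# Hypothetical manifest text and its parsed form (stand-in for yaml.safe_load).
manifest_text = """\
manifest:
  projects:
    - name: mylib
      url: https://example.com/a.git
    - name: other
      branch: main
"""
projects = [
    {"name": "mylib", "url": "https://example.com/a.git"},
    {"name": "other", "branch": "main"},
]

def entry_matches(projects, expected_entry):
    """True if any single project entry contains all expected key/value pairs."""
    return any(
        all(p.get(k) == v for k, v in expected_entry.items()) for p in projects
    )

# Substring check: passes, although "name: mylib" and "branch: main"
# belong to different entries.
assert all(line in manifest_text for line in ["name: mylib", "branch: main"])
# Structured check: correctly reports that no single entry has both fields.
assert not entry_matches(projects, {"name": "mylib", "branch": "main"})
```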
♻️ Duplicate comments (2)
features/steps/add_steps.py (1)
81-94: ⚠️ Potential issue | 🟡 Minor: Fragile line-by-line substring matching for manifest assertions.
The current implementation checks if each expected line appears anywhere in the manifest file. This can produce false positives when multiple similar project blocks exist, as matching lines could come from different entries.
Consider parsing the YAML and asserting that one item under `manifest.projects` matches the expected entry structure.

Based on learnings, the manifest YAML schema requires a top-level `manifest:` key with project entries nested under `manifest.projects`.
💡 Suggested approach
```python
import yaml

@then("the manifest '{name}' contains entry")
def step_impl(context, name):
    expected_snippet = apply_manifest_substitutions(context, context.text)
    expected_entry = yaml.safe_load(expected_snippet)
    with open(name, "r", encoding="utf-8") as fh:
        actual = yaml.safe_load(fh.read())
    projects = actual.get("manifest", {}).get("projects", [])
    # Check if any project matches all fields in expected_entry
    for project in projects:
        if all(project.get(k) == v for k, v in expected_entry.items()):
            return  # Found matching project
    assert False, f"Expected entry not found in manifest projects:\n{expected_snippet}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@features/steps/add_steps.py` around lines 81 - 94, The current line-by-line substring check (variables expected, actual, missing) is fragile; replace it by parsing the manifest YAML and asserting that at least one item under manifest.projects matches the expected entry structure. In the step implementation for "the manifest '{name}' contains entry" load expected_snippet via apply_manifest_substitutions and parse it with yaml.safe_load to get expected_entry, read and yaml.safe_load the file into actual, then iterate actual.get("manifest", {}).get("projects", []) and check if any project contains all key/value pairs from expected_entry (use project.get(k) == v for each k,v); if none match, raise an assertion with the expected_snippet. Ensure you import yaml and remove the old missing/substring logic.

dfetch/terminal/tree_browser.py (1)
107-115: ⚠️ Potential issue | 🟠 Major
Collapsing a directory loses descendant selection state.
When `_collapse` is called, it deletes all descendant nodes (line 113) and resets `children_loaded = False` (line 115). If a user expands a directory, toggles selection on some children, then collapses and re-expands it, all their selections are lost because the children are re-created with `selected=node.selected` (inheriting from the parent) in `_expand`.

This affects the `ignore` picker in `add.py`, which relies on `deselected_paths()` reflecting the final selection state.

Consider preserving descendant nodes but hiding them from rendering (e.g., via a `visible` flag), or storing selections in a separate `Set[str]` keyed by path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dfetch/terminal/tree_browser.py` around lines 107 - 115, _collapse currently deletes descendant nodes and resets children_loaded, which loses per-child selection state when re-expanding; instead, preserve selection by not deleting descendant nodes and merely hiding them (e.g., add a visible flag on Node and set visible=False for descendants) or maintain a separate Set[str] (e.g., selected_paths) keyed by node.path that _expand and rendering consult; modify _collapse to mark descendants hidden and keep children_loaded True (or persist selected state into selected_paths), and update _expand, rendering logic, and deselected_paths() to use visible/selected_paths so selections survive collapse/expand cycles (refer to methods _collapse, _expand, and any places that construct children with selected=node.selected).
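The separate-selection-set option suggested above can be sketched as follows. This is a minimal, standalone illustration: the real `Node`, `_expand`, and `_collapse` live in `dfetch/terminal/tree_browser.py`, and the `TreeSelection` name here is hypothetical.

```python
class TreeSelection:
    """Track deselected paths independently of the tree's node objects,
    so selection survives even if child nodes are deleted on collapse."""

    def __init__(self) -> None:
        self.deselected: set[str] = set()

    def toggle(self, path: str) -> None:
        # Flip the selection state for a path.
        if path in self.deselected:
            self.deselected.remove(path)
        else:
            self.deselected.add(path)

    def is_selected(self, path: str) -> bool:
        return path not in self.deselected

    def deselected_paths(self) -> list[str]:
        return sorted(self.deselected)


sel = TreeSelection()
sel.toggle("docs/")  # user deselects docs/ while the directory is expanded
# ... directory collapsed: child Node objects may be deleted here ...
# ... directory re-expanded: children re-created from the remote listing ...
assert not sel.is_selected("docs/")  # selection survives the round trip
assert sel.deselected_paths() == ["docs/"]
```

Rendering and `_expand` would consult `is_selected()` when (re)creating children instead of inheriting `selected` from the parent node.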
🧹 Nitpick comments (3)
dfetch/commands/add.py (1)
95-96: Accessing protected member `_remote_repo`.

Line 96 accesses `subproject._remote_repo`, which is a protected attribute. Consider adding a public accessor method to `GitSubProject`/`SvnSubProject` if this access pattern is needed elsewhere.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dfetch/commands/add.py` around lines 95 - 96, Replace direct access to the protected attribute subproject._remote_repo with a public accessor: add a method like get_remote_repo() (or property remote_repo) to both GitSubProject and SvnSubProject, return the underlying remote repo, and update the usage in add.py to call subproject.get_remote_repo() (or subproject.remote_repo) instead of reading the protected member; ensure the new accessor is documented and used everywhere the protected attribute was previously referenced.

.github/workflows/run.yml (1)
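One way to implement the suggested accessor is a read-only property. This is a hedged sketch; the class body here is illustrative, not the actual `GitSubProject`/`SvnSubProject` code.

```python
class SubProject:
    """Illustrative stand-in for GitSubProject/SvnSubProject."""

    def __init__(self, remote_repo: object) -> None:
        self._remote_repo = remote_repo

    @property
    def remote_repo(self) -> object:
        """Public, read-only access to the underlying remote repo."""
        return self._remote_repo


# Call sites then use the property instead of the protected member:
sub = SubProject(remote_repo="https://example.com/repo.git")
assert sub.remote_repo == "https://example.com/repo.git"
```

A property keeps the attribute read-only at call sites while leaving the internal `_remote_repo` storage unchanged.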
42-42: `dfetch add` is tested only in `test-cygwin`, not in the main `run` matrix.

The new `dfetch add` step was added to the `test-cygwin` job but not to the `run` job (lines 114-121), which tests across `ubuntu-latest`, `macos-latest`, and `windows-latest` with multiple Python versions. Consider adding the same step to the `run` job for broader coverage.

♻️ Suggested addition after line 115
```diff
 - run: dfetch environment
 - run: dfetch validate
+- run: dfetch add https://github.com/dfetch-org/test-repo
 - run: dfetch check
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/run.yml at line 42, The CI is missing the "dfetch add https://github.com/dfetch-org/test-repo" step in the run job, so add the same step (the dfetch add step that exists in the test-cygwin job) into the run job's steps so it runs for the ubuntu-latest / macos-latest / windows-latest matrix and all configured Python versions; locate the run job in the workflow and insert a step named or matching the existing "dfetch add" invocation (same arguments) in the steps list so it mirrors the test-cygwin coverage.

dfetch/terminal/tree_browser.py (1)
393-404: Quadratic complexity in `_segments` may slow down large branch/tag lists.

For each segment under a prefix, line 403 scans all names to check `n.startswith(full + "/")`. With thousands of refs (common in large repos), this becomes O(n²).

For typical repositories this is likely acceptable, but consider caching or using a trie-based structure if performance issues are reported.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dfetch/terminal/tree_browser.py` around lines 393 - 404, The _segments method performs an O(n²) scan because for each segment it re-scans self._all_names to test any(n.startswith(full + "/")), so change _segments to do a single pass over self._all_names: iterate once, for each name that startswith(prefix) compute rest = name[len(prefix):], take the first seg = rest.split("/")[0], mark seen[seg] = True if rest contains "/" (or set False otherwise) or update an existing seen[seg] to True if a child is detected; this eliminates the inner any(...) loop and keeps the method name _segments and the data source _all_names unchanged.
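The single-pass rewrite described in the prompt can be sketched as a standalone function. The real `_segments` is a method reading `self._all_names`; the free function and names here are illustrative.

```python
def segments(prefix: str, all_names: list[str]) -> dict[str, bool]:
    """Single pass over all_names: map each first path segment under
    *prefix* to whether it has deeper children (True) or is a leaf (False)."""
    seen: dict[str, bool] = {}
    for name in all_names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        if not rest:
            continue
        seg, _, tail = rest.partition("/")
        # A non-empty tail means this segment has at least one child.
        seen[seg] = seen.get(seg, False) or bool(tail)
    return seen


refs = ["release/v1", "release/v2/rc1", "main"]
assert segments("release/", refs) == {"v1": False, "v2": True}
```

Each ref is visited exactly once, replacing the inner `any(n.startswith(full + "/") ...)` scan and making the method O(n) in the number of refs.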
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@dfetch/terminal/tree_browser.py`:
- Around line 107-115: _collapse currently deletes descendant
nodes and resets children_loaded, which loses per-child selection state when
re-expanding; instead, preserve selection by not deleting descendant nodes and
merely hiding them (e.g., add a visible flag on Node and set visible=False for
descendants) or maintain a separate Set[str] (e.g., selected_paths) keyed by
node.path that _expand and rendering consult; modify _collapse to mark
descendants hidden and keep children_loaded True (or persist selected state into
selected_paths), and update _expand, rendering logic, and deselected_paths() to
use visible/selected_paths so selections survive collapse/expand cycles (refer
to methods _collapse, _expand, and any places that construct children with
selected=node.selected).
In `@features/steps/add_steps.py`:
- Around line 81-94: The current line-by-line substring check (variables
expected, actual, missing) is fragile; replace it by parsing the manifest YAML
and asserting that at least one item under manifest.projects matches the
expected entry structure. In the step implementation for "the manifest '{name}'
contains entry" load expected_snippet via apply_manifest_substitutions and parse
it with yaml.safe_load to get expected_entry, read and yaml.safe_load the file
into actual, then iterate actual.get("manifest", {}).get("projects", []) and
check if any project contains all key/value pairs from expected_entry (use
project.get(k) == v for each k,v); if none match, raise an assertion with the
expected_snippet. Ensure you import yaml and remove the old missing/substring
logic.
---
Nitpick comments:
In @.github/workflows/run.yml:
- Line 42: The CI is missing the "dfetch add
https://github.com/dfetch-org/test-repo" step in the run job, so add the same
step (the dfetch add step that exists in the test-cygwin job) into the run job's
steps so it runs for the ubuntu-latest / macos-latest / windows-latest matrix
and all configured Python versions; locate the run job in the workflow and
insert a step named or matching the existing "dfetch add" invocation (same
arguments) in the steps list so it mirrors the test-cygwin coverage.
In `@dfetch/commands/add.py`:
- Around line 95-96: Replace direct access to the protected attribute
subproject._remote_repo with a public accessor: add a method like
get_remote_repo() (or property remote_repo) to both GitSubProject and
SvnSubProject, return the underlying remote repo, and update the usage in add.py
to call subproject.get_remote_repo() (or subproject.remote_repo) instead of
reading the protected member; ensure the new accessor is documented and used
everywhere the protected attribute was previously referenced.
In `@dfetch/terminal/tree_browser.py`:
- Around line 393-404: The _segments method performs an O(n²) scan because for
each segment it re-scans self._all_names to test any(n.startswith(full + "/")),
so change _segments to do a single pass over self._all_names: iterate once, for
each name that startswith(prefix) compute rest = name[len(prefix):], take the
first seg = rest.split("/")[0], mark seen[seg] = True if rest contains "/" (or
set False otherwise) or update an existing seen[seg] to True if a child is
detected; this eliminates the inner any(...) loop and keeps the method name
_segments and the data source _all_names unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0f611ed4-f5c6-405e-b415-0749779716c9
📒 Files selected for processing (6)
- .github/workflows/build.yml
- .github/workflows/run.yml
- dfetch/commands/add.py
- dfetch/terminal/tree_browser.py
- features/add-project-through-cli.feature
- features/steps/add_steps.py
✅ Files skipped from review due to trivial changes (1)
- features/add-project-through-cli.feature
980b6be to 9a39bd3 (Compare)
Actionable comments posted: 2
🧹 Nitpick comments (6)
doc/generate-casts/interactive-add-demo.sh (2)
8-8: Make `source` path independent of the caller's working directory.

`source ./demo-magic/demo-magic.sh` only works when invoked from a specific CWD. Resolve paths from the script directory for robustness.

Suggested patch
```diff
-source ./demo-magic/demo-magic.sh
+SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
+source "${SCRIPT_DIR}/demo-magic/demo-magic.sh"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/generate-casts/interactive-add-demo.sh` at line 8, Replace the relative source line so the demo-magic script is resolved from the script's own directory instead of the caller's CWD: compute the script directory using ${BASH_SOURCE[0]} and dirname (e.g., assign DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"), then source the demo-magic script via that DIR (source "$DIR/demo-magic/demo-magic.sh") to make the path independent of the working directory.
12-14: Harden cleanup with a trap to make reruns reliable.

If the script exits early (error/Ctrl-C), `interactive-add` can be left behind and the next run fails at `mkdir interactive-add`. Please switch to trap-based cleanup.

Suggested patch
```diff
 mkdir interactive-add
 pushd interactive-add || { echo 'pushd failed' >&2; exit 1; }
+cleanup() {
+  popd >/dev/null 2>&1 || true
+  rm -rf interactive-add
+}
+trap cleanup EXIT INT TERM
@@
-popd || { echo 'popd failed' >&2; exit 1; }
-rm -rf interactive-add
+trap - EXIT INT TERM
+cleanup
```

Also applies to: 40-41
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/generate-casts/interactive-add-demo.sh` around lines 12 - 14, Add an EXIT trap to reliably clean up the interactive-add working directory and restore the stack if the script exits early: after creating the directory and doing pushd (refer to the mkdir interactive-add and pushd interactive-add || { ... } block), register a trap on EXIT that does a safe popd (ignore errors) and removes the interactive-add directory (if it exists); ensure the trap is set before any operations that can fail so reruns won't fail at mkdir interactive-add.

.github/workflows/run.yml (1)
114-121: Consider adding `dfetch add` to the `run` job for consistency.

The `test-cygwin` job now tests the `dfetch add` command, but the main `run` job (which covers multiple platforms and Python versions) does not include it. If the intent is to exercise the new `add` command across all CI environments, consider adding it here as well with the correct ordering (`add` before `validate`).

Proposed change
```diff
 - run: dfetch environment
+- run: dfetch add https://github.com/dfetch-org/test-repo
 - run: dfetch validate
 - run: dfetch check
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/run.yml around lines 114 - 121, The CI run job is missing the new dfetch add step; add a "dfetch add" invocation into the run job's sequence before "dfetch validate" so the added repository is validated in the main matrix as well, ensuring consistency with the test-cygwin job and keeping the ordering: dfetch add → dfetch validate → dfetch check → dfetch update → dfetch update → dfetch update-patch → dfetch format-patch → dfetch report -t sbom.

dfetch/manifest/manifest.py (1)
426-431: URL prefix matching may miss valid remotes due to trailing slash differences.

`remote_url.startswith(remote.url)` is sensitive to trailing slashes. If `remote.url` is `"https://github.com/org/"` but `remote_url` is `"https://github.com/org"` (no trailing slash), the match fails. Consider normalizing by stripping trailing slashes before comparison.

♻️ Suggested normalization
```diff
 def find_remote_for_url(self, remote_url: str) -> Remote | None:
     """Return the first remote whose base URL is a prefix of *remote_url*."""
+    normalized_url = remote_url.rstrip("/")
     for remote in self.remotes:
-        if remote_url.startswith(remote.url):
+        base = remote.url.rstrip("/")
+        if normalized_url.startswith(base + "/") or normalized_url == base:
             return remote
     return None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dfetch/manifest/manifest.py` around lines 426 - 431, The prefix check in find_remote_for_url uses remote_url.startswith(remote.url) which fails when trailing slashes differ; update the function to normalize both values (e.g., use remote_base = remote.url.rstrip('/') and target = remote_url.rstrip('/')) and then check target.startswith(remote_base) (or target.startswith(remote_base + '/') if you want to require a path boundary) so trailing slashes won't prevent valid matches; keep the rest of the method (returning the first matching Remote) unchanged.

dfetch/commands/add.py (1)
314-321: Version resolution logic is correct but could be clearer.

The ternary with `or` chain works but requires careful reading. The intent is: if `overrides.version` is provided, resolve it (falling back to default branch if resolution fails); otherwise use default branch directly.

♻️ Clearer alternative
```diff
 def _non_interactive_entry(ctx: _AddContext, overrides: _Overrides) -> ProjectEntry:
     """Build a ``ProjectEntry`` using inferred defaults (no user interaction)."""
-    version = (
-        _resolve_raw_version(overrides.version, [])
-        or Version(branch=ctx.default_branch)
-        if overrides.version
-        else Version(branch=ctx.default_branch)
-    )
+    if overrides.version:
+        version = _resolve_raw_version(overrides.version, []) or Version(
+            branch=ctx.default_branch
+        )
+    else:
+        version = Version(branch=ctx.default_branch)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dfetch/commands/add.py` around lines 314 - 321, The ternary/or chain in _non_interactive_entry for computing version is hard to read; instead make the control flow explicit: check if overrides.version is truthy, call _resolve_raw_version(overrides.version, []) and if that returns falsy substitute Version(branch=ctx.default_branch), otherwise when overrides.version is falsy set Version(branch=ctx.default_branch) directly; update the computation of version in _non_interactive_entry to follow that clear if/else flow using _resolve_raw_version and Version.

dfetch/project/svnsubproject.py (1)
187-189: Consider explicit string conversion for type consistency.

`GitSubProject.list_of_branches()` uses `[str(branch) for branch in self._remote_repo.list_of_branches()]` to ensure type consistency. While `SvnRepo.DEFAULT_BRANCH` is likely a string constant and `SvnRemote.list_of_branches()` returns `list[str]`, applying the same pattern would make the return type guarantees explicit and consistent across VCS implementations.

♻️ Suggested change for consistency
```diff
 def list_of_branches(self) -> list[str]:
     """Return trunk plus any branches found under ``branches/``."""
-    return [SvnRepo.DEFAULT_BRANCH, *self._remote_repo.list_of_branches()]
+    return [SvnRepo.DEFAULT_BRANCH] + [
+        str(branch) for branch in self._remote_repo.list_of_branches()
+    ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dfetch/project/svnsubproject.py` around lines 187 - 189, SvnSubProject.list_of_branches currently returns [SvnRepo.DEFAULT_BRANCH, *self._remote_repo.list_of_branches()] — update it to explicitly convert remote branch objects to strings for consistency with GitSubProject by returning a list that starts with SvnRepo.DEFAULT_BRANCH followed by [str(branch) for branch in self._remote_repo.list_of_branches()]; keep the same ordering and return type list[str].
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/run.yml:
- Around line 41-43: The workflow step ordering is wrong: the dfetch add step
must run before dfetch validate so the added repository is actually validated;
update the steps so the sequence is "dfetch add
https://github.com/dfetch-org/test-repo" then "dfetch validate" then "dfetch
check" (i.e., reorder the dfetch add, dfetch validate, dfetch check steps
accordingly).
In `@dfetch/manifest/manifest.py`:
- Around line 410-416: The call to os.path.commonpath in the code that builds
destinations = [p.destination for p in self.projects if p.destination] can raise
ValueError for mixed absolute/relative paths or different drives; update the
logic around computing common_path (the variable/commonpath call) to guard
against this by either normalizing paths first or wrapping
os.path.commonpath(...) in a try/except ValueError and returning an empty string
on error—modify the block that sets common_path to catch ValueError and handle
it identically to the existing checks (return ""), referencing the destinations
list and common_path variables and the self.projects source.
---
Nitpick comments:
In @.github/workflows/run.yml:
- Around line 114-121: The CI run job is missing the new dfetch add step; add a
"dfetch add" invocation into the run job's sequence before "dfetch validate" so
the main matrix also exercises the new add command across platforms and Python
versions, ensuring consistency with the test-cygwin job and keeping the
ordering: dfetch add → dfetch validate → dfetch check → dfetch update → dfetch
update → dfetch update-patch → dfetch format-patch → dfetch report -t sbom.
In `@dfetch/commands/add.py`:
- Around line 314-321: The ternary/or chain in _non_interactive_entry for
computing version is hard to read; instead make the control flow explicit: check
if overrides.version is truthy, call _resolve_raw_version(overrides.version, [])
and if that returns falsy substitute Version(branch=ctx.default_branch),
otherwise when overrides.version is falsy set Version(branch=ctx.default_branch)
directly—update the computation of version in _non_interactive_entry to follow
that clear if/else flow using _resolve_raw_version and Version.
In `@dfetch/manifest/manifest.py`:
- Around line 426-431: The prefix check in find_remote_for_url uses
remote_url.startswith(remote.url) which fails when trailing slashes differ;
update the function to normalize both values (e.g., use remote_base =
remote.url.rstrip('/') and target = remote_url.rstrip('/')) and then check
target.startswith(remote_base) (or target.startswith(remote_base + '/') if you
want to require a path boundary) so trailing slashes won’t prevent valid
matches; keep the rest of the method (returning the first matching Remote)
unchanged.
In `@dfetch/project/svnsubproject.py`:
- Around line 187-189: SvnSubProject.list_of_branches currently returns
[SvnRepo.DEFAULT_BRANCH, *self._remote_repo.list_of_branches()] — update it to
explicitly convert remote branch objects to strings for consistency with
GitSubProject by returning a list that starts with SvnRepo.DEFAULT_BRANCH
followed by [str(branch) for branch in self._remote_repo.list_of_branches()];
keep the same ordering and return type list[str].
In `@doc/generate-casts/interactive-add-demo.sh`:
- Line 8: Replace the relative source line so the demo-magic script is resolved
from the script's own directory instead of the caller's CWD: compute the script
directory using ${BASH_SOURCE[0]} and dirname (e.g., assign DIR="$(cd "$(dirname
"${BASH_SOURCE[0]}")" && pwd)"), then source the demo-magic script via that DIR
(source "$DIR/demo-magic/demo-magic.sh") to make the path independent of the
working directory.
- Around line 12-14: Add an EXIT trap to reliably clean up the interactive-add
working directory and restore the stack if the script exits early: after
creating the directory and doing pushd (refer to the mkdir interactive-add and
pushd interactive-add || { ... } block), register a trap on EXIT that does a
safe popd (ignore errors) and removes the interactive-add directory (if it
exists); ensure the trap is set before any operations that can fail so reruns
won't fail at mkdir interactive-add.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f8489a54-bfcb-4937-af4a-dbcc3e125159
📒 Files selected for processing (18)
- .coderabbit.yaml
- .github/workflows/build.yml
- .github/workflows/run.yml
- dfetch/commands/add.py
- dfetch/log.py
- dfetch/manifest/manifest.py
- dfetch/project/gitsubproject.py
- dfetch/project/svnsubproject.py
- dfetch/terminal/pick.py
- dfetch/vcs/git.py
- doc/generate-casts/add-demo.sh
- doc/generate-casts/interactive-add-demo.sh
- doc/generate-casts/interactive_add_helper.py
- doc/manual.rst
- features/add-project-through-cli.feature
- features/steps/add_steps.py
- tests/test_add.py
- tests/test_tree_browser.py
✅ Files skipped from review due to trivial changes (3)
- .coderabbit.yaml
- doc/manual.rst
- tests/test_add.py
🚧 Files skipped from review as they are similar to previous changes (5)
- .github/workflows/build.yml
- dfetch/terminal/pick.py
- dfetch/log.py
- features/add-project-through-cli.feature
- doc/generate-casts/interactive_add_helper.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Fixes #25
Summary by CodeRabbit
New Features
Enhancements
Bug Fixes
Tests & Documentation
Chores