diff --git a/.gitignore b/.gitignore index 30cdd48002..e94bb721cf 100644 --- a/.gitignore +++ b/.gitignore @@ -75,6 +75,7 @@ !/rms/ !/renderdoc/ !/cloudcompare/ +!/calibre/ !/openscreen/ !/QGIS/ !/n8n/ @@ -83,6 +84,7 @@ !/nsight-graphics/ !/lldb/ + # Step 5: Inside each software dir, ignore everything (including dotfiles) /gimp/* /gimp/.* diff --git a/README.md b/README.md index 30397567c8..54d3ee0b6b 100644 --- a/README.md +++ b/README.md @@ -869,6 +869,13 @@ Each application received complete, production-ready CLI interfaces — not demo ✅ New +📚 Calibre +Ebook Library Management +cli-anything-calibre +calibredb + ebook-meta + ebook-convert +✅ New + + 📝 Mubu Knowledge Management & Outlining cli-anything-mubu @@ -1156,6 +1163,7 @@ cli-anything/ ├── 🌐 browser/agent-harness/ # Browser CLI (DOMShell MCP, new) ├── 📄 libreoffice/agent-harness/ # LibreOffice CLI (158 tests) ├── 📚 zotero/agent-harness/ # Zotero CLI (new, write import support) +├── 📚 calibre/agent-harness/ # Calibre CLI (new, ebook workflow support) ├── 📝 mubu/agent-harness/ # Mubu CLI (96 tests) ├── 📹 obs-studio/agent-harness/ # OBS Studio CLI (153 tests) ├── 🎞️ kdenlive/agent-harness/ # Kdenlive CLI (155 tests) diff --git a/README_CN.md b/README_CN.md index 785f30f33b..99e600e9dd 100644 --- a/README_CN.md +++ b/README_CN.md @@ -548,6 +548,13 @@ CLI-Anything 适用于任何有代码库的软件 —— 不限领域，不限 ✅ 新增 +📚 Calibre +电子书库管理 +cli-anything-calibre +calibredb + ebook-meta + ebook-convert +✅ 新增 + + 📹 OBS Studio 直播与录制 cli-anything-obs-studio @@ -724,6 +731,7 @@ cli-anything/ ├── 🎵 audacity/agent-harness/ # Audacity CLI（161 项测试） ├── 📄 libreoffice/agent-harness/ # LibreOffice CLI（158 项测试） ├── 📚 zotero/agent-harness/ # Zotero CLI（新增，支持文献导入） +├── 📚 calibre/agent-harness/ # Calibre 
CLI（新增，电子书工作流） ├── 📹 obs-studio/agent-harness/ # OBS Studio CLI（153 项测试） ├── 🎞️ kdenlive/agent-harness/ # Kdenlive CLI（155 项测试） ├── 🎬 shotcut/agent-harness/ # Shotcut CLI（154 项测试） diff --git a/README_JA.md b/README_JA.md index b80405841f..f21a6fd842 100644 --- a/README_JA.md +++ b/README_JA.md @@ -508,6 +508,13 @@ CLI-Anythingはコードベースを持つあらゆるソフトウェアで動 ✅ 158 +📚 Calibre +電子書籍ライブラリ管理 +cli-anything-calibre +calibredb + ebook-meta + ebook-convert +✅ 新規 + + 📹 OBS Studio ライブストリーミング & 録画 cli-anything-obs-studio @@ -646,6 +653,7 @@ cli-anything/ ├── ✏️ inkscape/agent-harness/ # Inkscape CLI (202テスト) ├── 🎵 audacity/agent-harness/ # Audacity CLI (161テスト) ├── 📄 libreoffice/agent-harness/ # LibreOffice CLI (158テスト) +├── 📚 calibre/agent-harness/ # Calibre CLI（新規、電子書籍ワークフロー） ├── 📹 obs-studio/agent-harness/ # OBS Studio CLI (153テスト) ├── 🎞️ kdenlive/agent-harness/ # Kdenlive CLI (155テスト) ├── 🎬 shotcut/agent-harness/ # Shotcut CLI (154テスト) diff --git a/calibre/agent-harness/AGENT_TEST_PROMPT.md b/calibre/agent-harness/AGENT_TEST_PROMPT.md new file mode 100644 index 0000000000..eab77cd6d2 --- /dev/null +++ b/calibre/agent-harness/AGENT_TEST_PROMPT.md @@ -0,0 +1,164 @@ +# Calibre Agent Test Prompt (Bilingual) + +> Purpose: reproducible **Agent test** prompt for OpenCode/Cursor/Claude Code in CLI-only mode. +> 目的：提供可复用的 **Agent test** 提示词（仅 CLI），用于开源复现。 +> Note / 说明：This file is an **example agent-test prompt/keyword template** for reproducible validation and can be adapted per environment. 
+> 注：该文件是用于复现实验的**示例智能体测试提示词/关键词模板**，可按环境替换路径与参数。 +> Status / 状态：**Example template (not a normative spec)**. +> 状态：**示例模板（非规范性标准文档）**。 + +--- + +## Quick Notes / 使用说明 + +- **System / 系统**: Windows (PowerShell examples below) +- **Terminal / 终端类型**: PowerShell (if using CMD, adjust line continuation syntax) +- **Agent mode / 代理模式**: CLI-only (no GUI actions during Agent test) +- **Path placeholders / 路径变量可替换**: + - `{{LIB}}` = calibre library root (must contain `metadata.db`) + - `{{EPUB}}` = input epub path + - `{{OUT}}` = export output directory + - `{{CONVERTED_FILE}}` = converted mobi output path + +Recommended defaults: + +```text +{{LIB}} = D:\Books\Calibre Library +{{EPUB}} = D:\AgentTest\sample.epub +{{OUT}} = D:\AgentTest\out +{{CONVERTED_FILE}} = D:\AgentTest\out\converted\agent-test.mobi +``` + +--- + +## Prompt (ZH + EN) + +```text +[中文] +你是一个只允许使用 CLI 的执行代理。当前目标：完成 calibre 的 Agent test（加分/高阶验证）。不要生成新 harness，不要使用任何 GUI。 + +【环境与固定路径】 +- OS: Windows (PowerShell) +- calibre 已安装 +- cli-anything-calibre 已安装 +- LIB = {{LIB}} +- EPUB = {{EPUB}} +- OUT = {{OUT}} +- CONVERTED_FILE = {{CONVERTED_FILE}} + +【硬性规则】 +1) 禁止猜参数；不确定必须先运行对应 --help。 +2) 每一步都输出：执行命令、exit code、关键输出。 +3) 失败时先打印错误，再自修复继续。 +4) 全程仅 CLI，不得调用 GUI。 +5) 路径必须使用绝对路径。 + +【Preflight（必须先执行）】 +A. 检查命令可用（逐条执行）： +- cli-anything-calibre --help +- calibredb --version +- ebook-convert --version +- ebook-meta --version + +B. 
若 `cli-anything-calibre` 不可调用： +- 自动定位并使用 Python 用户 Scripts 下的 `cli-anything-calibre.exe`（或临时修正 PATH）后重试，直到可用。 + +C. 若 `EPUB` 不存在： +- 自动创建一个合法 EPUB 后继续（可用 html -> ebook-convert，或最小合法 zip 结构，确保 mimetype 未压缩）。 + +D. 若 `LIB` 不是有效 calibre 书库（缺少 metadata.db）： +- 先初始化书库（例如 `calibredb --with-library "<LIB>" list`），再继续。 + +E. 探测子命令帮助（逐条执行）： +- cli-anything-calibre library --help +- cli-anything-calibre book --help +- cli-anything-calibre export --help +- cli-anything-calibre convert --help +- cli-anything-calibre meta --help + +【主任务链（严格按顺序）】 +1) `library stats`（JSON） +2) `book add`：将 `EPUB` 入库，`--title "Agent Test Book"`，`--authors "OpenCode Bot"` +3) `book search "title:Agent Test Book"` 并提取 `book_id`（字段通常为 `id`） +4) `export book <book_id> --to-dir "<OUT>" --single-dir` +5) 在 `OUT` 下递归发现导出的 `.epub`（不要写死文件名），记为 `exported_epub` +6) `convert run "<exported_epub>" "<CONVERTED_FILE>" --preset kindle` +7) `meta show "<CONVERTED_FILE>"` +8) 校验 `CONVERTED_FILE` 存在且大小 > 0，并输出字节数 + +【输出格式要求】 +- 先输出完整分步复盘（命令 + exit code + 关键输出） +- 最后一行必须输出单行 JSON： + FINAL_RESULT={"library_path":"...","book_id":...,"export_dir":"...","exported_epub":"...","converted_file":"...","converted_file_size_bytes":...,"all_exit_zero":true/false} + +------------------------------------------------------------ + +[English] +You are a CLI-only execution agent. Goal: complete a calibre Agent test (bonus/advanced verification). Do not generate a new harness. Do not use GUI. 
+[Environment and fixed paths] +- OS: Windows (PowerShell) +- calibre is installed +- cli-anything-calibre is installed +- LIB = {{LIB}} +- EPUB = {{EPUB}} +- OUT = {{OUT}} +- CONVERTED_FILE = {{CONVERTED_FILE}} + +[Hard rules] +1) Do not guess arguments; run --help first when unsure. +2) For every step, print command, exit code, and key output. +3) On failure, print error first, then self-correct and continue. +4) CLI-only flow; no GUI actions. +5) Use absolute paths only. + +[Preflight (required)] +A. Verify command availability: +- cli-anything-calibre --help +- calibredb --version +- ebook-convert --version +- ebook-meta --version + +B. If `cli-anything-calibre` is not callable: +- Auto-locate `cli-anything-calibre.exe` from Python user Scripts (or temporarily fix PATH), then retry. + +C. If `EPUB` does not exist: +- Auto-create a valid EPUB (html -> ebook-convert, or minimal valid zip EPUB with uncompressed mimetype). + +D. If `LIB` is not a valid calibre library (missing metadata.db): +- Initialize it first (e.g., `calibredb --with-library "<LIB>" list`). + +E. 
Probe subcommand helps: +- cli-anything-calibre library --help +- cli-anything-calibre book --help +- cli-anything-calibre export --help +- cli-anything-calibre convert --help +- cli-anything-calibre meta --help + +[Main task chain (strict order)] +1) `library stats` (JSON) +2) `book add` with `--title "Agent Test Book"` and `--authors "OpenCode Bot"` +3) `book search "title:Agent Test Book"` and extract `book_id` (typically `id`) +4) `export book <book_id> --to-dir "<OUT>" --single-dir` +5) Recursively discover exported `.epub` under `OUT` (do not hardcode filename), call it `exported_epub` +6) `convert run "<exported_epub>" "<CONVERTED_FILE>" --preset kindle` +7) `meta show "<CONVERTED_FILE>"` +8) Validate `CONVERTED_FILE` exists and size > 0, print file size in bytes + +[Output requirements] +- First provide step-by-step recap (command + exit code + key output) +- Final line must be a one-line JSON: + FINAL_RESULT={"library_path":"...","book_id":...,"export_dir":"...","exported_epub":"...","converted_file":"...","converted_file_size_bytes":...,"all_exit_zero":true/false} +``` + +--- + +## Optional: GUI Round-trip Checklist / 可选 GUI 往返检查 + +After CLI flow succeeds, verify consistency in Calibre GUI: + +1. Open the same library path (`{{LIB}}`). +2. Confirm book row exists (`Agent Test Book` / `OpenCode Bot`). +3. Open metadata editor and verify title/author. +4. Confirm exported and converted files exist and are non-empty. + diff --git a/calibre/agent-harness/CALIBRE.md b/calibre/agent-harness/CALIBRE.md new file mode 100644 index 0000000000..413982bf09 --- /dev/null +++ b/calibre/agent-harness/CALIBRE.md @@ -0,0 +1,133 @@ +# Calibre: Project-Specific Analysis & SOP + +## Architecture Summary + +calibre is an ebook management suite covering library operations, metadata editing, +export, and format conversion. Unlike many GUI-first tools, calibre already +ships mature native CLI binaries, so the harness strategy is to compose these +commands into an agent-friendly, stateful interface. 
+ +``` +โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” +โ”‚ Calibre GUI โ”‚ +โ”‚ โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ +โ”‚ โ”‚ Library โ”‚ โ”‚ Metadata โ”‚ โ”‚ Convert โ”‚ โ”‚ +โ”‚ โ””โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”˜ โ””โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”˜ โ””โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”˜ โ”‚ +โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ +โ”‚ โ”Œโ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ” โ”‚ +โ”‚ โ”‚ Calibre backend (db + metadata) โ”‚ โ”‚ +โ”‚ โ”‚ SQLite library + conversion stack โ”‚ โ”‚ +โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ โ”‚ +โ”‚ โ”‚ โ”‚ +โ”‚ โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ +โ”‚ โ”‚ Native CLI binaries โ”‚ โ”‚ +โ”‚ โ”‚ calibredb | ebook-meta | โ”‚ โ”‚ +โ”‚ โ”‚ ebook-convert โ”‚ โ”‚ +โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ โ”‚ +โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ +``` + +Primary backend components: +- Library database: `src/calibre/db/backend.py` +- Database cache layer: `src/calibre/db/cache.py` +- Legacy library API: `src/calibre/db/legacy.py` +- Search/query layer: `src/calibre/db/search.py` +- Metadata readers/writers: `src/calibre/ebooks/metadata/` +- Conversion pipeline: `src/calibre/ebooks/conversion/` + +The GUI is primarily a PyQt6 frontend over these backend capabilities. + +## CLI Strategy: Native CLI Composition + Session Layer + +The harness wraps real calibre binaries and adds: +1. stable command groups for agents (`library`, `book`, `meta`, `convert`, `export`, `session`) +2. 
consistent machine-readable output via `--json` +3. REPL-first interactive flow with undo/redo session history +4. explicit validation and clearer error reporting for automation + +### Core Domains + +| Domain | Module | Native Tool(s) | Key Operations | +|--------|--------|----------------|----------------| +| Library | `core/library.py` | `calibredb` | open/info/list-fields/stats | +| Books | `core/books.py` | `calibredb` | add/list/get/search/remove/set-field | +| Metadata | `core/metadata.py` | `ebook-meta`, `calibredb` | show/set/clear/set-cover | +| Convert | `core/convert.py` | `ebook-convert` | formats/presets/run | +| Export | `core/export.py` | `calibredb` | export book/catalog/backup | +| Session | `core/session.py` | harness layer | status/undo/redo/history/save | + +### Native Tool Registry + +From `src/calibre/linux.py`, calibre ships these console tools: +- `calibredb` +- `ebook-convert` +- `ebook-meta` +- `ebook-polish` +- `calibre-server` +- `calibre-debug` +- `calibre-customize` +- `fetch-ebook-metadata` +- `calibre-smtp` +- `calibre-parallel` +- `calibre-complete` +- `web2disk` + +Harness-critical tools: +- `calibredb` for library inspection and mutation +- `ebook-meta` for per-file metadata inspection/update +- `ebook-convert` for format conversion + +### Conversion Presets + +Current harness-facing presets: +- `kindle` +- `tablet` +- `generic-epub` + +These are mapped to curated `ebook-convert` argument bundles for stable agent use. + +### Translation Gap: Low Risk + +There is minimal translation gap because the harness delegates core behavior to +calibre's own binaries rather than reimplementing internals. The main harness +responsibility is orchestration, validation, and normalized output. + +## Data Model and Mapping + +calibre libraries are directory-based and centered on `metadata.db` (SQLite). +Book records map to folders containing one or more format files plus metadata. +This allows direct CLI operations without emulating GUI state. 
### Library/book mapping (`calibredb`) +- list books -> `calibredb list --for-machine` +- search books -> `calibredb list --search ... --for-machine` +- add book -> `calibredb add` +- remove book -> `calibredb remove` +- show metadata -> `calibredb show_metadata` +- export books -> `calibredb export` +- backup metadata -> `calibredb backup_metadata` + +### File metadata mapping (`ebook-meta`) +- inspect file metadata -> `ebook-meta <file>` +- mutate file metadata -> `ebook-meta <file> --title ... --authors ...` +- library-record metadata update -> `calibredb set_metadata <book_id> <opf_file>` + +### Conversion mapping (`ebook-convert`) +- convert formats -> `ebook-convert input.epub output.mobi ...` + +## GUI Toolkit + +The GUI uses PyQt6 through calibre's lazy-loading `qt` shim. + +Relevant paths: +- `src/qt/` +- `src/calibre/gui2/` +- `pyproject.toml` + +## Constraints + +- Real calibre tools are hard dependencies for E2E workflows +- The harness must fail clearly if required binaries are not on PATH +- REPL should remain the default entry behavior +- All commands should support `--json` +- Session persistence should use locked JSON writes diff --git a/calibre/agent-harness/cli_anything/calibre/README.md b/calibre/agent-harness/cli_anything/calibre/README.md new file mode 100644 index 0000000000..ca193f6e0d --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/README.md @@ -0,0 +1,166 @@ +# cli-anything-calibre + +A stateful command-line interface for calibre library management, metadata +editing, export, and format conversion. 
+ +This harness wraps real calibre tools (`calibredb`, `ebook-meta`, +`ebook-convert`) and adds: +- a unified Click CLI +- REPL mode by default +- machine-readable JSON output via `--json` +- lightweight session state with undo/redo history + +## Requirements + +- Python 3.10+ +- calibre installed and available on PATH + +Typical backend binaries used by this harness: +- `calibredb` +- `ebook-meta` +- `ebook-convert` + +## Installation + +```bash +# From calibre/agent-harness +pip install -e . +``` + +## Quick Start + +```bash +# Show help +cli-anything-calibre --help + +# Enter REPL mode +cli-anything-calibre + +# Open a library and inspect +cli-anything-calibre --json --library "D:/Books/Calibre Library" library stats +cli-anything-calibre --json --library "D:/Books/Calibre Library" book list --limit 5 + +# Add, search, export, and convert +cli-anything-calibre --json --library "D:/Books/Calibre Library" book add "D:/tmp/book.epub" --title "My Book" --authors "Me" +cli-anything-calibre --json --library "D:/Books/Calibre Library" book search "title:My Book" --limit 5 +cli-anything-calibre --json --library "D:/Books/Calibre Library" export book 1 --to-dir "D:/tmp/exported" --single-dir +cli-anything-calibre --json convert run "D:/tmp/exported/My Book.epub" "D:/tmp/converted/My Book.mobi" --preset kindle +``` + +## JSON Output Mode + +All commands support `--json` for machine-readable output: + +```bash +cli-anything-calibre --json --library "D:/Books/Calibre Library" library info +cli-anything-calibre --json --library "D:/Books/Calibre Library" book get 1 +``` + +## Interactive REPL + +```bash +# Starts REPL when no subcommand is provided +cli-anything-calibre +``` + +Inside REPL you can run grouped commands and use session operations (`undo`, +`redo`, `history`) through the `session` group. 
+ +## Command Groups + +### Library +``` +library open - Open a calibre library +library info - Show current library metadata +library list-fields - List supported library fields +library stats - Show book/author/format statistics +``` + +### Book +``` +book add - Add an ebook to library +book list - List books +book get - Show one book metadata +book search - Search books by calibre query syntax +book set-field - Update selected fields (title/authors/tags...) +book remove - Remove a book +``` + +### Meta +``` +meta show - Show file metadata +meta set [--title --authors] - Update file metadata +meta set-cover - Set cover image +meta clear - Clear selected metadata fields +``` + +### Convert +``` +convert formats - List common output formats +convert presets - List preset argument bundles +convert run - Convert ebook format +``` + +### Export +``` +export book --to-dir - Export book files +export catalog - Build catalog output +export backup - Backup OPF metadata +``` + +### Session +``` +session status - Show session context +session undo - Undo last state change +session redo - Redo last undone change +session history - Show recorded snapshots +session save - Persist session to JSON +``` + +## Running Tests + +```bash +# From calibre/agent-harness + +# Unit tests +python -m pytest cli_anything/calibre/tests/test_core.py -v + +# E2E tests (requires calibre installed) +python -m pytest cli_anything/calibre/tests/test_full_e2e.py -v -s + +# Full suite +python -m pytest cli_anything/calibre/tests/ -v +``` + +## Architecture + +``` +cli_anything/calibre/ +โ”œโ”€โ”€ __main__.py +โ”œโ”€โ”€ calibre_cli.py # Click CLI entry point + REPL +โ”œโ”€โ”€ core/ +โ”‚ โ”œโ”€โ”€ library.py # Library open/info/stats/fields +โ”‚ โ”œโ”€โ”€ books.py # Book add/list/get/search/remove/set-field +โ”‚ โ”œโ”€โ”€ metadata.py # ebook-meta wrappers +โ”‚ โ”œโ”€โ”€ convert.py # Conversion presets + run +โ”‚ โ”œโ”€โ”€ export.py # Export/catalog/backup +โ”‚ โ””โ”€โ”€ session.py # Stateful context + 
undo/redo +├── utils/ +│ ├── calibre_backend.py # subprocess wrappers + parsing +│ └── repl_skin.py # Interactive REPL UX +├── skills/ +│ └── SKILL.md # Agent-discoverable usage guide +└── tests/ + ├── test_core.py + ├── test_full_e2e.py + └── TEST.md # Test plan + results + agent test notes +``` + +## Agent Test Prompt + +For reproducible CLI-only agent validation (OpenCode/Cursor/Claude Code), use: + +- [`../../AGENT_TEST_PROMPT.md`](../../AGENT_TEST_PROMPT.md) + +This prompt file is an example template for reproducible agent testing and can +be adapted to your environment. diff --git a/calibre/agent-harness/cli_anything/calibre/__init__.py b/calibre/agent-harness/cli_anything/calibre/__init__.py new file mode 100644 index 0000000000..c14ca20d4f --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/__init__.py @@ -0,0 +1 @@ +"""cli-anything calibre package.""" diff --git a/calibre/agent-harness/cli_anything/calibre/__main__.py b/calibre/agent-harness/cli_anything/calibre/__main__.py new file mode 100644 index 0000000000..48438b83d8 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/__main__.py @@ -0,0 +1,4 @@ +from cli_anything.calibre.calibre_cli import main + +if __name__ == "__main__": + main() diff --git a/calibre/agent-harness/cli_anything/calibre/calibre_cli.py b/calibre/agent-harness/cli_anything/calibre/calibre_cli.py new file mode 100644 index 0000000000..86a9bfe486 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/calibre_cli.py @@ -0,0 +1,461 @@ +"""Stateful CLI harness for calibre.""" + +from __future__ import annotations + +import json +import os +import shlex +import sys +from typing import Optional + +import click + +from cli_anything.calibre.core.session import Session +from cli_anything.calibre.core import library as library_mod +from cli_anything.calibre.core import books as books_mod +from cli_anything.calibre.core import metadata as metadata_mod +from 
cli_anything.calibre.core import convert as convert_mod +from cli_anything.calibre.core import export as export_mod + + +_session: Optional[Session] = None +_json_output = False +_repl_mode = False + + +def get_session() -> Session: + global _session + if _session is None: + _session = Session() + return _session + + +def output(data, message: str = ""): + if _json_output: + click.echo(json.dumps(data, indent=2, default=str)) + return + if message: + click.echo(message) + if isinstance(data, dict): + _print_dict(data) + elif isinstance(data, list): + _print_list(data) + elif data is not None: + click.echo(str(data)) + + +def _print_dict(d: dict, indent: int = 0): + prefix = " " * indent + for k, v in d.items(): + if isinstance(v, dict): + click.echo(f"{prefix}{k}:") + _print_dict(v, indent + 1) + elif isinstance(v, list): + click.echo(f"{prefix}{k}:") + _print_list(v, indent + 1) + else: + click.echo(f"{prefix}{k}: {v}") + + +def _print_list(items: list, indent: int = 0): + prefix = " " * indent + for i, item in enumerate(items): + if isinstance(item, dict): + click.echo(f"{prefix}[{i}]") + _print_dict(item, indent + 1) + else: + click.echo(f"{prefix}- {item}") + + +def handle_error(func): + def wrapper(*args, **kwargs): + try: + return func(*args, **kwargs) + except FileNotFoundError as e: + _emit_error(str(e), "file_not_found") + except (ValueError, IndexError, RuntimeError) as e: + _emit_error(str(e), type(e).__name__) + except click.ClickException: + raise + wrapper.__name__ = func.__name__ + wrapper.__doc__ = func.__doc__ + return wrapper + + +def _emit_error(message: str, error_type: str): + if _json_output: + click.echo(json.dumps({"error": message, "type": error_type})) + else: + click.echo(f"Error: {message}", err=True) + if not _repl_mode: + raise SystemExit(1) + + +@click.group(invoke_without_command=True) +@click.option("--json", "use_json", is_flag=True, help="Output as JSON") +@click.option("--library", "library_path", type=str, default=None, 
help="Path to a calibre library") +@click.pass_context +@handle_error +def cli(ctx, use_json, library_path): + """calibre CLI — stateful ebook library operations from the command line.""" + global _json_output + _json_output = use_json + + if library_path: + info = library_mod.open_library(library_path) + get_session().open_library(info["library_path"]) + + if ctx.invoked_subcommand is None: + ctx.invoke(repl) + + +@cli.group() +def library(): + """Library management commands.""" + pass + + +@library.command("open") +@click.argument("path") +@handle_error +def library_open(path): + """Open a calibre library.""" + info = library_mod.open_library(path) + get_session().open_library(info["library_path"]) + output(info, f"Opened library: {info['library_path']}") + + +@library.command("info") +@handle_error +def library_info(): + """Show library information.""" + sess = get_session() + info = library_mod.get_library_info(sess.require_library()) + output(info) + + +@library.command("list-fields") +@handle_error +def library_list_fields(): + """List supported fields.""" + output(library_mod.list_fields(), "Available fields:") + + +@library.command("stats") +@handle_error +def library_stats(): + """Show library statistics.""" + sess = get_session() + output(library_mod.stats(sess.require_library())) + + +@cli.group() +def book(): + """Book management commands.""" + pass + + +@book.command("list") +@click.option("--search", type=str, default=None, help="Search query") +@click.option("--limit", type=int, default=50, help="Maximum results") +@click.option("--sort-by", type=str, default=None, help="Sort field") +@click.option("--ascending", is_flag=True, help="Sort ascending") +@handle_error +def book_list(search, limit, sort_by, ascending): + """List books in the current library.""" + sess = get_session() + books = books_mod.list_books(sess.require_library(), search=search, limit=limit, sort_by=sort_by, ascending=ascending) + output(books, f"Found {len(books)} books") + 
+@book.command("get") +@click.argument("book_id", type=int) +@handle_error +def book_get(book_id): + """Show metadata for a library book.""" + sess = get_session() + sess.select_book(book_id) + output(books_mod.get_book(sess.require_library(), book_id)) + + +@book.command("add") +@click.argument("input_path") +@click.option("--title", type=str, default=None) +@click.option("--authors", type=str, default=None) +@click.option("--tags", type=str, default=None) +@click.option("--series", type=str, default=None) +@click.option("--duplicate", is_flag=True, help="Allow duplicates") +@handle_error +def book_add(input_path, title, authors, tags, series, duplicate): + """Add a book file to the current library.""" + sess = get_session() + sess.snapshot("Add book") + result = books_mod.add_book(sess.require_library(), input_path, title=title, authors=authors, tags=tags, series=series, duplicate=duplicate) + added_id = books_mod.parse_added_id(result.get("stdout", "")) + if added_id is not None: + sess.select_book(added_id) + result["book_id"] = added_id + output(result, f"Added book: {os.path.abspath(input_path)}") + + +@book.command("remove") +@click.argument("book_id", type=int) +@click.option("--permanent", is_flag=True, help="Delete without placing in trash") +@handle_error +def book_remove(book_id, permanent): + """Remove a book from the current library.""" + sess = get_session() + sess.snapshot(f"Remove book {book_id}") + result = books_mod.remove_book(sess.require_library(), book_id, permanent=permanent) + if sess.current_book_id == book_id: + sess.select_book(None) + output(result, f"Removed book {book_id}") + + +@book.command("search") +@click.argument("query") +@click.option("--limit", type=int, default=50) +@handle_error +def book_search(query, limit): + """Search books in the current library.""" + sess = get_session() + result = books_mod.search_books(sess.require_library(), query, limit=limit) + output(result, f"Found {len(result)} books") + + 
+@book.command("set-field") +@click.argument("book_id", type=int) +@click.option("--title", type=str, default=None) +@click.option("--authors", type=str, default=None) +@click.option("--tags", type=str, default=None) +@handle_error +def book_set_field(book_id, title, authors, tags): + """Set selected book fields using an OPF payload.""" + sess = get_session() + sess.snapshot(f"Set metadata for book {book_id}") + result = books_mod.set_field(sess.require_library(), book_id, title=title, authors=authors, tags=tags) + sess.select_book(book_id) + output(result, f"Updated fields for book {book_id}") + + +@cli.group() +def meta(): + """Standalone ebook metadata commands.""" + pass + + +@meta.command("show") +@click.argument("book_path") +@handle_error +def meta_show(book_path): + """Show metadata from an ebook file.""" + output(metadata_mod.show_metadata(book_path)) + + +@meta.command("set") +@click.argument("book_path") +@click.option("--title", type=str, default=None) +@click.option("--authors", type=str, default=None) +@click.option("--cover", type=str, default=None) +@click.option("--language", type=str, default=None) +@click.option("--publisher", type=str, default=None) +@click.option("--tags", type=str, default=None) +@click.option("--comments", type=str, default=None) +@handle_error +def meta_set(book_path, title, authors, cover, language, publisher, tags, comments): + """Set metadata on an ebook file.""" + output(metadata_mod.set_metadata(book_path, title=title, authors=authors, cover=cover, language=language, publisher=publisher, tags=tags, comments=comments), f"Updated metadata: {os.path.abspath(book_path)}") + + +@meta.command("set-cover") +@click.argument("book_path") +@click.argument("cover_path") +@handle_error +def meta_set_cover(book_path, cover_path): + """Set the cover image for an ebook file.""" + output(metadata_mod.set_metadata(book_path, cover=cover_path), f"Updated cover: {os.path.abspath(book_path)}") + + +@meta.command("clear") 
+@click.argument("book_path") +@click.option("--comments", "clear_comments", is_flag=True) +@click.option("--tags", "clear_tags", is_flag=True) +@handle_error +def meta_clear(book_path, clear_comments, clear_tags): + """Clear selected metadata fields.""" + output(metadata_mod.clear_metadata_fields(book_path, clear_comments=clear_comments, clear_tags=clear_tags)) + + +@cli.group() +def convert(): + """Format conversion commands.""" + pass + + +@convert.command("formats") +@handle_error +def convert_formats(): + """List common output formats.""" + output(convert_mod.list_formats(), "Formats:") + + +@convert.command("presets") +@handle_error +def convert_presets(): + """List conversion presets.""" + output(convert_mod.list_presets()) + + +@convert.command("run") +@click.argument("input_path") +@click.argument("output_path") +@click.option("--preset", type=str, default=None) +@click.option("--extra-arg", "extra_args", multiple=True, help="Extra ebook-convert argument") +@handle_error +def convert_run(input_path, output_path, preset, extra_args): + """Convert an ebook file.""" + result = convert_mod.convert_book(input_path, output_path, preset=preset, extra_args=list(extra_args)) + output(result, f"Converted: {result['output']}") + + +@cli.group() +def export(): + """Export and backup commands.""" + pass + + +@export.command("book") +@click.argument("book_ids", nargs=-1, type=int) +@click.option("--to-dir", required=True, type=str) +@click.option("--single-dir", is_flag=True) +@click.option("--formats", type=str, default=None) +@handle_error +def export_book(book_ids, to_dir, single_dir, formats): + """Export one or more books from the current library.""" + sess = get_session() + if not book_ids: + raise ValueError("At least one book id is required") + result = export_mod.export_books(sess.require_library(), list(book_ids), to_dir, single_dir=single_dir, formats=formats) + output(result, f"Exported to: {result['output_dir']}") + + +@export.command("catalog") 
+@click.argument("output_path") +@click.option("--search", type=str, default=None) +@handle_error +def export_catalog(output_path, search): + """Build a catalog file.""" + sess = get_session() + result = export_mod.build_catalog(sess.require_library(), output_path, search=search) + output(result, f"Catalog created: {result['output']}") + + +@export.command("backup") +@click.option("--all", "all_records", is_flag=True) +@handle_error +def export_backup(all_records): + """Backup metadata in the current library.""" + sess = get_session() + result = export_mod.backup_metadata(sess.require_library(), all_records=all_records) + output(result, "Metadata backup complete") + + +@cli.group() +def session(): + """Session management commands.""" + pass + + +@session.command("status") +@handle_error +def session_status(): + """Show session status.""" + output(get_session().status()) + + +@session.command("undo") +@handle_error +def session_undo(): + """Undo the last session state change.""" + desc = get_session().undo() + output({"undone": desc}, f"Undone: {desc}") + + +@session.command("redo") +@handle_error +def session_redo(): + """Redo the last undone session state change.""" + desc = get_session().redo() + output({"redone": desc}, f"Redone: {desc}") + + +@session.command("history") +@handle_error +def session_history(): + """Show session history.""" + output(get_session().list_history(), "History:") + + +@session.command("save") +@handle_error +def session_save(): + """Persist the session file.""" + saved = get_session().save() + output({"saved": saved}, f"Saved session: {saved}") + + +@cli.command() +@handle_error +def repl(): + """Start interactive REPL session.""" + from cli_anything.calibre.utils.repl_skin import ReplSkin + + global _repl_mode + _repl_mode = True + + skin = ReplSkin("calibre", version="1.0.0") + skin.print_banner() + pt_session = skin.create_prompt_session() + + commands = { + "library": "open|info|list-fields|stats", + "book": 
"list|get|add|remove|search|set-field", + "meta": "show|set|set-cover|clear", + "convert": "formats|presets|run", + "export": "book|catalog|backup", + "session": "status|undo|redo|history|save", + "help": "Show this help", + "quit": "Exit REPL", + } + + while True: + sess = get_session() + context = sess.library_path or "no-library" + line = skin.get_input(pt_session, project_name=context, modified=sess.status()["modified"]) + if line is None: + break + line = line.strip() + if not line: + continue + if line.lower() in {"quit", "exit"}: + break + if line.lower() == "help": + skin.help(commands) + continue + try: + args = shlex.split(line) + except ValueError as e: + skin.error(str(e)) + continue + try: + cli.main(args=args, standalone_mode=False) + except SystemExit: + pass + except Exception as e: + skin.error(str(e)) + + skin.print_goodbye() + + +def main(): + cli() diff --git a/calibre/agent-harness/cli_anything/calibre/core/__init__.py b/calibre/agent-harness/cli_anything/calibre/core/__init__.py new file mode 100644 index 0000000000..99853e6144 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/core/__init__.py @@ -0,0 +1 @@ +"""cli-anything calibre core package.""" diff --git a/calibre/agent-harness/cli_anything/calibre/core/books.py b/calibre/agent-harness/cli_anything/calibre/core/books.py new file mode 100644 index 0000000000..0e00a07b77 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/core/books.py @@ -0,0 +1,102 @@ +"""Book operations for cli-anything-calibre.""" + +from __future__ import annotations + +import os +import re +from typing import Any + +from cli_anything.calibre.utils import calibre_backend as backend + + +def _book_to_summary(book: dict[str, Any]) -> dict[str, Any]: + data = dict(book) + if isinstance(data.get("authors"), list): + data["authors_text"] = ", ".join(data["authors"]) + if isinstance(data.get("formats"), list): + data["formats_text"] = ", ".join(data["formats"]) + return data + + +def list_books( + 
library_path: str, + search: str | None = None, + limit: int | None = 100, + sort_by: str | None = None, + ascending: bool = False, +) -> list[dict[str, Any]]: + books = backend.calibredb_list( + library_path, + fields="id,title,authors,formats,series,tags,publisher,languages", + search=search, + limit=limit, + sort_by=sort_by, + ascending=ascending, + ) + return [_book_to_summary(book) for book in books] + + +def get_book(library_path: str, book_id: int) -> dict[str, Any]: + meta = backend.calibredb_show_metadata(library_path, book_id, as_opf=False) + return { + "book_id": book_id, + "metadata": meta["metadata"], + } + + +def add_book( + library_path: str, + input_path: str, + title: str | None = None, + authors: str | None = None, + tags: str | None = None, + series: str | None = None, + duplicate: bool = False, +) -> dict[str, Any]: + return backend.calibredb_add( + library_path, + input_path, + title=title, + authors=authors, + tags=tags, + series=series, + duplicate=duplicate, + ) + + +def remove_book(library_path: str, book_id: int, permanent: bool = False) -> dict[str, Any]: + return backend.calibredb_remove(library_path, book_id, permanent=permanent) + + +def search_books(library_path: str, query: str, limit: int | None = 100) -> list[dict[str, Any]]: + return list_books(library_path, search=query, limit=limit) + + +def set_field( + library_path: str, + book_id: int, + title: str | None = None, + authors: str | None = None, + tags: str | None = None, +) -> dict[str, Any]: + opf_path = backend.write_opf_temp(title=title, authors=authors, tags=tags) + try: + result = backend.calibredb_set_metadata(library_path, book_id, opf_path) + finally: + try: + os.remove(opf_path) + except OSError: + pass + return result + + +def parse_added_id(stdout: str) -> int | None: + match = re.search(r"id:\s*(\d+)", stdout, re.IGNORECASE) + if match: + return int(match.group(1)) + match = re.search(r"book ids?:\s*([^\n]+)", stdout, re.IGNORECASE) + if match: + digits = 
re.findall(r"\d+", match.group(1)) + if digits: + return int(digits[0]) + return None diff --git a/calibre/agent-harness/cli_anything/calibre/core/convert.py b/calibre/agent-harness/cli_anything/calibre/core/convert.py new file mode 100644 index 0000000000..ab12d8e9c9 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/core/convert.py @@ -0,0 +1,35 @@ +"""Conversion helpers for cli-anything-calibre.""" + +from __future__ import annotations + +from typing import Any + +from cli_anything.calibre.utils import calibre_backend as backend + + +PRESETS = { + "kindle": ["--output-profile", "kindle_pw3"], + "tablet": ["--output-profile", "tablet"], + "generic-epub": ["--output-profile", "generic_eink"], +} + + +def list_formats() -> list[str]: + return backend.detect_available_formats() + + +def list_presets() -> dict[str, list[str]]: + return PRESETS + + +def convert_book(input_path: str, output_path: str, preset: str | None = None, extra_args: list[str] | None = None) -> dict[str, Any]: + args = [] + if preset: + if preset not in PRESETS: + raise ValueError(f"Unknown preset: {preset}") + args.extend(PRESETS[preset]) + if extra_args: + args.extend(extra_args) + result = backend.ebook_convert(input_path, output_path, extra_args=args) + result["preset"] = preset + return result diff --git a/calibre/agent-harness/cli_anything/calibre/core/export.py b/calibre/agent-harness/cli_anything/calibre/core/export.py new file mode 100644 index 0000000000..eb8e49fbf0 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/core/export.py @@ -0,0 +1,19 @@ +"""Export helpers for cli-anything-calibre.""" + +from __future__ import annotations + +from typing import Any + +from cli_anything.calibre.utils import calibre_backend as backend + + +def export_books(library_path: str, book_ids: list[int], to_dir: str, single_dir: bool = False, formats: str | None = None) -> dict[str, Any]: + return backend.calibredb_export(library_path, book_ids, to_dir, single_dir=single_dir, 
formats=formats) + + +def build_catalog(library_path: str, output_path: str, search: str | None = None) -> dict[str, Any]: + return backend.calibredb_catalog(library_path, output_path, search=search) + + +def backup_metadata(library_path: str, all_records: bool = False) -> dict[str, Any]: + return backend.calibredb_backup_metadata(library_path, all_records=all_records) diff --git a/calibre/agent-harness/cli_anything/calibre/core/library.py b/calibre/agent-harness/cli_anything/calibre/core/library.py new file mode 100644 index 0000000000..33006e434f --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/core/library.py @@ -0,0 +1,62 @@ +"""Library operations for cli-anything-calibre.""" + +from __future__ import annotations + +import os +from typing import Any + +from cli_anything.calibre.utils import calibre_backend as backend + + +def open_library(path: str) -> dict[str, Any]: + abs_path = os.path.abspath(path) + metadata_db = os.path.join(abs_path, "metadata.db") + if not os.path.isdir(abs_path): + raise FileNotFoundError(f"Library path not found: {abs_path}") + if not os.path.exists(metadata_db): + raise FileNotFoundError(f"Not a calibre library (missing metadata.db): {abs_path}") + return { + "library_path": abs_path, + "metadata_db": metadata_db, + "exists": True, + } + + +def get_library_info(path: str) -> dict[str, Any]: + books = backend.calibredb_list(path, fields="id,title,authors,formats", limit=100000) + author_names = set() + format_names = set() + for book in books: + for author in (book.get("authors") or []): + author_names.add(author) + formats = book.get("formats") or [] + if isinstance(formats, str): + formats = [x.strip() for x in formats.split(",") if x.strip()] + for fmt in formats: + format_names.add(str(fmt).lower()) + return { + "library_path": os.path.abspath(path), + "book_count": len(books), + "author_count": len(author_names), + "formats": sorted(format_names), + } + + +def list_fields() -> list[str]: + return [ + "id", "title", 
"authors", "author_sort", "series", "series_index", "tags", + "formats", "publisher", "rating", "languages", "pubdate", "timestamp", + "last_modified", "identifiers", "comments", "size", "path", "uuid", + ] + + +def stats(path: str) -> dict[str, Any]: + info = get_library_info(path) + books = backend.calibredb_list(path, fields="id,title,formats,authors", limit=100000) + with_formats = sum(1 for book in books if book.get("formats")) + without_formats = len(books) - with_formats + return { + **info, + "with_formats": with_formats, + "without_formats": without_formats, + } diff --git a/calibre/agent-harness/cli_anything/calibre/core/metadata.py b/calibre/agent-harness/cli_anything/calibre/core/metadata.py new file mode 100644 index 0000000000..69305c3c71 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/core/metadata.py @@ -0,0 +1,39 @@ +"""Metadata operations for standalone ebook files.""" + +from __future__ import annotations + +from typing import Any + +from cli_anything.calibre.utils import calibre_backend as backend + + +def show_metadata(book_path: str) -> dict[str, Any]: + return backend.ebook_meta_show(book_path) + + +def set_metadata( + book_path: str, + title: str | None = None, + authors: str | None = None, + cover: str | None = None, + language: str | None = None, + publisher: str | None = None, + tags: str | None = None, + comments: str | None = None, +) -> dict[str, Any]: + return backend.ebook_meta_set( + book_path, + title=title, + authors=authors, + cover=cover, + language=language, + publisher=publisher, + tags=tags, + comments=comments, + ) + + +def clear_metadata_fields(book_path: str, clear_comments: bool = False, clear_tags: bool = False) -> dict[str, Any]: + comments = "" if clear_comments else None + tags = "" if clear_tags else None + return backend.ebook_meta_set(book_path, comments=comments, tags=tags) diff --git a/calibre/agent-harness/cli_anything/calibre/core/session.py 
b/calibre/agent-harness/cli_anything/calibre/core/session.py new file mode 100644 index 0000000000..c73bb170c9 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/core/session.py @@ -0,0 +1,145 @@ +"""Session management for cli-anything-calibre.""" + +from __future__ import annotations + +import copy +import json +import os +from datetime import datetime +from pathlib import Path +from typing import Any + + +def _locked_save_json(path, data, **dump_kwargs) -> None: + try: + f = open(path, "r+") + except FileNotFoundError: + os.makedirs(os.path.dirname(os.path.abspath(path)), exist_ok=True) + f = open(path, "w+") + with f: + locked = False + try: + import fcntl + fcntl.flock(f.fileno(), fcntl.LOCK_EX) + locked = True + except (ImportError, OSError): + pass + try: + f.seek(0) + f.truncate() + json.dump(data, f, **dump_kwargs) + f.flush() + finally: + if locked: + import fcntl + fcntl.flock(f.fileno(), fcntl.LOCK_UN) + + +class Session: + MAX_UNDO = 50 + + def __init__(self, session_path: str | None = None): + self.library_path: str | None = None + self.current_book_id: int | None = None + self._undo_stack: list[dict[str, Any]] = [] + self._redo_stack: list[dict[str, Any]] = [] + self._modified = False + if session_path is None: + session_path = str(Path.home() / ".cli-anything-calibre" / "session.json") + self.session_path = session_path + + def has_library(self) -> bool: + return self.library_path is not None + + def require_library(self) -> str: + if not self.library_path: + raise RuntimeError("No library selected. 
Use 'library open <path>' first.")
+        return self.library_path
+
+    def open_library(self, library_path: str) -> None:
+        self.library_path = os.path.abspath(library_path)
+        self.current_book_id = None
+        self._undo_stack.clear()
+        self._redo_stack.clear()
+        self._modified = False
+
+    def snapshot(self, description: str = "") -> None:
+        state = {
+            "library_path": self.library_path,
+            "current_book_id": self.current_book_id,
+            "description": description,
+            "timestamp": datetime.now().isoformat(),
+        }
+        self._undo_stack.append(copy.deepcopy(state))
+        if len(self._undo_stack) > self.MAX_UNDO:
+            self._undo_stack.pop(0)
+        self._redo_stack.clear()
+        self._modified = True
+
+    def select_book(self, book_id: int | None) -> None:
+        self.current_book_id = book_id
+        self._modified = True
+
+    def undo(self) -> str:
+        if not self._undo_stack:
+            raise RuntimeError("Nothing to undo.")
+        current = {
+            "library_path": self.library_path,
+            "current_book_id": self.current_book_id,
+            "description": "redo point",
+            "timestamp": datetime.now().isoformat(),
+        }
+        self._redo_stack.append(current)
+        state = self._undo_stack.pop()
+        self.library_path = state["library_path"]
+        self.current_book_id = state["current_book_id"]
+        self._modified = True
+        return state.get("description", "")
+
+    def redo(self) -> str:
+        if not self._redo_stack:
+            raise RuntimeError("Nothing to redo.")
+        current = {
+            "library_path": self.library_path,
+            "current_book_id": self.current_book_id,
+            "description": "undo point",
+            "timestamp": datetime.now().isoformat(),
+        }
+        self._undo_stack.append(current)
+        state = self._redo_stack.pop()
+        self.library_path = state["library_path"]
+        self.current_book_id = state["current_book_id"]
+        self._modified = True
+        return state.get("description", "")
+
+    def status(self) -> dict[str, Any]:
+        return {
+            "has_library": self.library_path is not None,
+            "library_path": self.library_path,
+            "current_book_id": self.current_book_id,
+            "modified": self._modified,
+            "undo_count": 
len(self._undo_stack),
+            "redo_count": len(self._redo_stack),
+        }
+
+    def list_history(self) -> list[dict[str, Any]]:
+        return [
+            {
+                "index": i,
+                "description": state.get("description", ""),
+                "timestamp": state.get("timestamp", ""),
+            }
+            for i, state in enumerate(reversed(self._undo_stack))
+        ]
+
+    def save(self) -> str:
+        payload = {
+            "library_path": self.library_path,
+            "current_book_id": self.current_book_id,
+            "history": self._undo_stack,
+            "history_index": len(self._undo_stack) - 1,
+            "saved_at": datetime.now().isoformat(),
+        }
+        _locked_save_json(self.session_path, payload, indent=2, sort_keys=True)
+        self._modified = False
+        return self.session_path
diff --git a/calibre/agent-harness/cli_anything/calibre/skills/SKILL.md b/calibre/agent-harness/cli_anything/calibre/skills/SKILL.md
new file mode 100644
index 0000000000..f8866cd9a3
--- /dev/null
+++ b/calibre/agent-harness/cli_anything/calibre/skills/SKILL.md
@@ -0,0 +1,155 @@
+---
+name: "cli-anything-calibre"
+description: "Agent-facing calibre command line: manage libraries, edit metadata, export, and convert formats (built on calibredb / ebook-meta / ebook-convert)."
+---
+
+# cli-anything-calibre
+
+Stateful CLI harness for calibre.
+
+## Installation
+
+This CLI is installed as part of the cli-anything-calibre package:
+
+```bash
+pip install cli-anything-calibre
+```
+
+**Prerequisites:**
+- Python 3.10+
+- Calibre must be installed on your system
+
+## Usage
+
+### Basic Commands
+
+```bash
+# Show help
+cli-anything-calibre --help
+
+# Start interactive REPL mode
+cli-anything-calibre
+```
+
+### JSON mode (for agents)
+
+Use `--json` to get machine-readable output for all commands.
+
+```bash
+cli-anything-calibre --json --library "D:/Books/Calibre Library" library info
+cli-anything-calibre --json --library "D:/Books/Calibre Library" book list --search "title:Python" --limit 5
+```
+
+## Command Groups
+
+### Library
+
+Library management commands.
+
+Common subcommands:
+- `library open <path>`
+- `library info`
+- `library list-fields`
+- `library stats`
+
+### Book
+
+Book management commands.
+
+Common subcommands:
+- `book add <file> [--title ...] [--authors ...] [--tags ...] [--series ...] [--duplicate]`
+- `book list [--search ...] [--limit ...] [--sort-by ...] [--ascending]`
+- `book get <book_id>`
+- `book search <query> [--limit ...]`
+- `book set-field <book_id> [--title ...] [--authors ...] [--tags ...]`
+- `book remove <book_id> [--permanent]`
+
+### Meta
+
+Standalone ebook metadata commands.
+
+Common subcommands:
+- `meta show <book_path>`
+- `meta set <book_path> [--title ...] [--authors ...] [--tags ...] [--comments ...] [--language ...] [--publisher ...] [--cover ...]`
+- `meta set-cover <book_path> <cover_path>`
+- `meta clear <book_path> [--comments] [--tags]`
+
+### Convert
+
+Format conversion commands.
+
+Common subcommands:
+- `convert formats`
+- `convert presets`
+- `convert run <input> <output> [--preset kindle|tablet|generic-epub] [--extra-arg ...]`
+
+### Export
+
+Export and backup commands.
+
+Common subcommands:
+- `export book <book_id>... --to-dir <dir> [--single-dir] [--formats ...]`
+- `export catalog <output_path> [--search ...]`
+- `export backup [--all]`
+
+### Session
+
+Session management commands.
+
+Common subcommands:
+- `session status`
+- `session undo`
+- `session redo`
+- `session history`
+- `session save`
+
+## Examples
+
+### Open a library and inspect
+
+```bash
+cli-anything-calibre library open "D:/Books/Calibre Library"
+cli-anything-calibre --json library stats
+cli-anything-calibre --json book list --limit 5
+```
+
+### Interactive REPL Session
+
+Start an interactive session with undo/redo support.
+ +```bash +cli-anything-calibre +# Enter commands interactively +# Use 'help' to see available commands +# Use 'undo' and 'redo' for history navigation +``` + +### Ingest โ†’ search โ†’ export โ†’ convert (workflow) + +```bash +# Add a book file into the library +cli-anything-calibre --json --library "D:/Books/Calibre Library" book add "D:/tmp/book.epub" --title "My Book" --authors "Me" + +# Search and pick a book id +cli-anything-calibre --json --library "D:/Books/Calibre Library" book search "title:My Book" --limit 5 + +# Export the book files +cli-anything-calibre --json --library "D:/Books/Calibre Library" export book 1 --to-dir "D:/tmp/exported" --single-dir + +# Convert EPUB to MOBI +cli-anything-calibre --json convert run "D:/tmp/exported/My Book.epub" "D:/tmp/converted/My Book.mobi" --preset kindle +``` + +## For AI Agents + +When using this CLI programmatically: + +1. **Always use `--json` flag** for parseable output +2. **Check return codes** - 0 for success, non-zero for errors +3. **Parse stderr** for error messages on failure +4. **Use absolute paths** for all file operations (recommended on Windows) +5. 
**Verify outputs exist** after export operations + +## Version + +1.0.0 \ No newline at end of file diff --git a/calibre/agent-harness/cli_anything/calibre/tests/TEST.md b/calibre/agent-harness/cli_anything/calibre/tests/TEST.md new file mode 100644 index 0000000000..403a223180 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/tests/TEST.md @@ -0,0 +1,218 @@ +# TEST.md + +## Test Inventory Plan + +- `test_core.py`: 8 unit tests planned +- `test_full_e2e.py`: 16 E2E tests planned + +## Unit Test Plan + +### `session.py` +- session initialization +- open library +- snapshot / undo / redo lifecycle +- save session JSON + +### `library.py` +- open valid library +- reject missing library +- list fields + +### `books.py` +- parse added book id +- field update plumbing via temp OPF + +### `convert.py` +- preset lookup +- invalid preset rejection + +## E2E Test Plan + +Real workflows to test with installed calibre binaries: +- create temp calibre library +- add a sample EPUB into the library +- list/search books in JSON mode +- list books with sort/limit combinations +- get library stats after mutation +- export a book to a temp directory +- export a catalog file +- backup metadata to OPF +- remove a book and verify it disappears +- convert EPUB to another format +- verify output artifacts exist and are non-empty +- session status/save after library operations +- convert presets/formats introspection and invalid preset error handling + +## Realistic Workflow Scenarios + +### Workflow: ingest and inspect +- Simulates: adding a book to a library and inspecting it +- Operations chained: library open โ†’ book add โ†’ book list โ†’ book get +- Verified: JSON output shape, resulting book visibility + +### Workflow: export and convert +- Simulates: operational automation for a reading-device pipeline +- Operations chained: export book โ†’ convert format +- Verified: files exist, non-zero size, expected extension/magic where possible + +### Workflow: library mutation 
+
+- Simulates: full mutation chain on an existing book: field update followed by export
+- Operations chained: book add → book set-field → book get → export book
+- Verified: book get returns updated title/author after set-field; old title no longer present; export dir exists; epub non-empty; file header is valid ZIP magic bytes; exported filename contains updated title
+
+### Workflow: metadata edit and verify
+- Simulates: automated metadata revision (title/author) on a single standalone ebook file
+- Operations chained: meta show → meta set (--title/--authors) → meta show
+- Verified: JSON output is parseable; metadata text contains the new title/author; file path stays the same
+
+---
+
+## Test Results
+
+Run date: 2026-04-01
+
+```
+============================= test session starts =============================
+platform win32 -- Python 3.13.7, pytest-9.0.2, pluggy-1.6.0
+rootdir: D:\AAA_work\openP\cli-anything\calibre\agent-harness
+
+cli_anything/calibre/tests/test_core.py::test_session_initial_state PASSED
+cli_anything/calibre/tests/test_core.py::test_session_open_library_and_status PASSED
+cli_anything/calibre/tests/test_core.py::test_session_snapshot_undo_redo PASSED
+cli_anything/calibre/tests/test_core.py::test_session_save_writes_json PASSED
+cli_anything/calibre/tests/test_core.py::test_open_library_validates_metadata_db PASSED
+cli_anything/calibre/tests/test_core.py::test_open_library_rejects_missing_metadata_db PASSED
+cli_anything/calibre/tests/test_core.py::test_parse_added_id PASSED
+cli_anything/calibre/tests/test_core.py::test_convert_presets_and_invalid_preset PASSED
+cli_anything/calibre/tests/test_full_e2e.py::TestCLISubprocess::test_help PASSED
+cli_anything/calibre/tests/test_full_e2e.py::test_calibredb_available PASSED
+cli_anything/calibre/tests/test_full_e2e.py::test_ebook_convert_available PASSED
+cli_anything/calibre/tests/test_full_e2e.py::test_json_library_command_requires_valid_library PASSED
+cli_anything/calibre/tests/test_full_e2e.py::test_meta_show_missing_file_errors PASSED +cli_anything/calibre/tests/test_full_e2e.py::test_workflow_ingest_and_inspect PASSED +cli_anything/calibre/tests/test_full_e2e.py::test_workflow_export_and_convert PASSED + +============================= 15 passed in 16.25s ============================= +``` + +**15 / 15 passed.** + +Run date: 2026-04-14 (full E2E suite) + +Command: +- `python -m pytest -v -s cli_anything\calibre\tests\test_full_e2e.py` + +``` +============================ test session starts ============================= +platform win32 -- Python 3.13.7, pytest-9.0.2, pluggy-1.6.0 -- D:\AAA_work\openP\.venv\Scripts\python.exe +rootdir: D:\AAA_work\openP\CLI-Anything\calibre\agent-harness +collected 8 items + +cli_anything/calibre/tests/test_full_e2e.py::TestCLISubprocess::test_help PASSED +cli_anything/calibre/tests/test_full_e2e.py::test_calibredb_available PASSED +cli_anything/calibre/tests/test_full_e2e.py::test_ebook_convert_available PASSED +cli_anything/calibre/tests/test_full_e2e.py::test_json_library_command_requires_valid_library PASSED +cli_anything/calibre/tests/test_full_e2e.py::test_meta_show_missing_file_errors PASSED +cli_anything/calibre/tests/test_full_e2e.py::test_workflow_meta_set_then_show_reflects_changes PASSED +cli_anything/calibre/tests/test_full_e2e.py::test_workflow_ingest_and_inspect PASSED +cli_anything/calibre/tests/test_full_e2e.py::test_workflow_export_and_convert PASSED + +============================= 8 passed in 20.62s ============================= +``` +**8 / 8 passed.** + +Run date: 2026-04-16 (library mutation workflow added) + +Command: +- `python -m pytest cli_anything/calibre/tests/test_full_e2e.py::test_workflow_library_mutation -v -s` + +``` +============================ test session starts ============================= +platform win32 -- Python 3.13.9, pytest-9.0.3, pluggy-1.6.0 -- C:\Users\17614\AppData\Local\Programs\Python\Python313\python.exe +rootdir: 
D:\VSCODE-CODE\CLI-Anything-main\calibre\agent-harness +collected 1 item + +cli_anything/calibre/tests/test_full_e2e.py::test_workflow_library_mutation PASSED + +============================= 1 passed in 6.83s ============================== +``` +**1 / 1 passed.** + +Run date: 2026-04-22 (HARNESS-compliant, force-installed CLI) + +Command: +- PowerShell: `$env:CLI_ANYTHING_FORCE_INSTALLED="1"; python -m pytest -q` + +Result: +``` +22 passed in 44.02s +``` + +Run date: 2026-04-26 (extended workflow coverage) + +Command: +- PowerShell: `$env:CLI_ANYTHING_FORCE_INSTALLED="1"; python -m pytest -q` + +Result: +``` +25 passed in 61.35s +``` + +### Workflow coverage added + +- ingest and inspect: create temp calibre library โ†’ add sample EPUB โ†’ list/search/get in JSON mode +- export and convert: export added book to temp directory โ†’ verify exported EPUB magic bytes โ†’ convert to MOBI โ†’ verify `BOOKMOBI` signature +- metadata edit and verify: meta show โ†’ meta set (--title/--authors) โ†’ meta show (verify updated values in `ebook-meta` output) +- library mutation: book add โ†’ book set-field โ†’ book get (verify updated title/author, old title absent) โ†’ export book (verify dir structure, epub magic bytes, filename contains updated title) +- Windows stability: tests use short temp paths and explicit subprocess decoding to avoid calibre path-length and console-encoding failures + +### CLI verification + +``` +$ cli-anything-calibre --help +Usage: cli-anything-calibre [OPTIONS] COMMAND [ARGS]... + + calibre CLI โ€” stateful ebook library operations from the command line. + +Commands: + book Book management commands. + convert Format conversion commands. + export Export and backup commands. + library Library management commands. + meta Standalone ebook metadata commands. + repl Start interactive REPL session. + session Session management commands. 
+``` + +Installed entry point: `D:\AAA_work\openP\.venv\Scripts\cli-anything-calibre.EXE` + +--- + +## Agent Test (CLI-only) + +Prompt specification: +- `../../../AGENT_TEST_PROMPT.md` + +### Agent test result + +- Scope: CLI-only execution by an AI agent (no GUI operations during task execution) +- Task chain: `library stats` -> `book add` -> `book search` -> `export book` -> `convert run` -> `meta show` +- Result: all task-chain commands returned exit code `0` +- Output artifact: `D:\AgentTest\out\converted\agent-test.mobi` (non-zero size) + +Final structured output: + +```json +FINAL_RESULT={"book_id":1,"export_dir":"D:\\AgentTest\\out","exported_epub":"D:\\AgentTest\\out\\Agent Test Book - OpenCode Bot.epub","converted_file":"D:\\AgentTest\\out\\converted\\agent-test.mobi","all_exit_zero":true} +``` + +### GUI round-trip validation + +- Opened the same library path in Calibre GUI: `D:\Books\Calibre Library` +- Verified library record consistency for the CLI-created book: + - title: `Agent Test Book` + - author: `OpenCode Bot` +- Verified exported/converted files are readable artifacts: + - exported EPUB exists and is non-empty + - converted MOBI exists and is non-empty +- Conclusion: CLI mutations and GUI-visible library state are consistent for this workflow. 
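
### Magic-byte verification (illustrative sketch)

The artifact checks above ("exported EPUB magic bytes", "`BOOKMOBI` signature") rely on two well-known file signatures: EPUBs are ZIP containers, and MOBI files are PalmDB databases whose type/creator field at byte offset 60 reads `BOOKMOBI`. A minimal standalone sketch of those checks (function names here are illustrative, not the harness's actual test helpers):

```python
def looks_like_epub(path: str) -> bool:
    # EPUB files are ZIP containers, so the first four bytes must be the
    # standard ZIP local-file-header magic.
    with open(path, "rb") as f:
        return f.read(4) == b"PK\x03\x04"


def looks_like_mobi(path: str) -> bool:
    # MOBI files are PalmDB databases; bytes 60-67 of the header hold the
    # type/creator signature "BOOKMOBI".
    with open(path, "rb") as f:
        header = f.read(68)
    return len(header) == 68 and header[60:68] == b"BOOKMOBI"
```

The E2E suite applies the same idea inline on the exported and converted files; the standalone form is shown only for clarity.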
diff --git a/calibre/agent-harness/cli_anything/calibre/tests/__init__.py b/calibre/agent-harness/cli_anything/calibre/tests/__init__.py new file mode 100644 index 0000000000..c8b2b9c4af --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/tests/__init__.py @@ -0,0 +1 @@ +"""cli-anything calibre tests package.""" diff --git a/calibre/agent-harness/cli_anything/calibre/tests/test_core.py b/calibre/agent-harness/cli_anything/calibre/tests/test_core.py new file mode 100644 index 0000000000..d4b7614f83 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/tests/test_core.py @@ -0,0 +1,87 @@ +from __future__ import annotations + +import json +from pathlib import Path + +import pytest + +from cli_anything.calibre.core.session import Session +from cli_anything.calibre.core import library as library_mod +from cli_anything.calibre.core import books as books_mod +from cli_anything.calibre.core import convert as convert_mod + + +def test_session_initial_state(tmp_path): + sess = Session(session_path=str(tmp_path / "session.json")) + status = sess.status() + assert status["has_library"] is False + assert status["current_book_id"] is None + assert status["undo_count"] == 0 + + +def test_session_open_library_and_status(tmp_path): + sess = Session(session_path=str(tmp_path / "session.json")) + lib = tmp_path / "Calibre Library" + lib.mkdir() + (lib / "metadata.db").write_bytes(b"sqlite") + sess.open_library(str(lib)) + status = sess.status() + assert status["has_library"] is True + assert status["library_path"] == str(lib.resolve()) + + +def test_session_snapshot_undo_redo(tmp_path): + sess = Session(session_path=str(tmp_path / "session.json")) + lib = tmp_path / "lib" + lib.mkdir() + (lib / "metadata.db").write_bytes(b"sqlite") + sess.open_library(str(lib)) + sess.snapshot("Select book") + sess.select_book(42) + desc = sess.undo() + assert desc == "Select book" + assert sess.current_book_id is None + redone = sess.redo() + assert redone == "redo point" + + 
+def test_session_save_writes_json(tmp_path): + session_file = tmp_path / "session.json" + sess = Session(session_path=str(session_file)) + lib = tmp_path / "lib" + lib.mkdir() + (lib / "metadata.db").write_bytes(b"sqlite") + sess.open_library(str(lib)) + saved = sess.save() + assert saved == str(session_file) + payload = json.loads(session_file.read_text(encoding="utf-8")) + assert payload["library_path"] == str(lib.resolve()) + + +def test_open_library_validates_metadata_db(tmp_path): + lib = tmp_path / "My Library" + lib.mkdir() + (lib / "metadata.db").write_bytes(b"sqlite") + info = library_mod.open_library(str(lib)) + assert info["exists"] is True + assert info["library_path"] == str(lib.resolve()) + + +def test_open_library_rejects_missing_metadata_db(tmp_path): + lib = tmp_path / "broken" + lib.mkdir() + with pytest.raises(FileNotFoundError): + library_mod.open_library(str(lib)) + + +def test_parse_added_id(): + assert books_mod.parse_added_id("Added book ids: 12, 13") == 12 + assert books_mod.parse_added_id("Added book with id: 9") == 9 + assert books_mod.parse_added_id("nothing useful") is None + + +def test_convert_presets_and_invalid_preset(): + presets = convert_mod.list_presets() + assert "kindle" in presets + with pytest.raises(ValueError): + convert_mod.convert_book("in.epub", "out.mobi", preset="nope") diff --git a/calibre/agent-harness/cli_anything/calibre/tests/test_full_e2e.py b/calibre/agent-harness/cli_anything/calibre/tests/test_full_e2e.py new file mode 100644 index 0000000000..3276ff47c5 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/tests/test_full_e2e.py @@ -0,0 +1,799 @@ +from __future__ import annotations + +import json +import locale +import os +import shutil +import subprocess +import sys +import sysconfig +import site +import tempfile +import zipfile +from pathlib import Path + +import pytest + + +def _require_binary(name: str) -> str: + path = shutil.which(name) + if path: + return path + raise RuntimeError( + 
+        f"Required calibre dependency not found on PATH: {name}. "
+        "Install calibre and ensure these commands are available: calibredb, ebook-convert, ebook-meta"
+    )
+
+
+def _resolve_cli(name: str) -> list[str]:
+    force = os.environ.get("CLI_ANYTHING_FORCE_INSTALLED", "").strip() == "1"
+    path = shutil.which(name)
+    if path:
+        print(f"[_resolve_cli] Using installed command: {path}")
+        return [path]
+    # On Windows (especially Store Python), console_scripts may be installed in a
+    # user Scripts directory that is not on PATH. If force-installed mode is
+    # enabled, try to resolve that location explicitly before failing.
+    if os.name == "nt":
+        try:
+            scripts_dir = Path(sysconfig.get_path("scripts") or "")
+            candidate = scripts_dir / f"{name}.exe"
+            if scripts_dir and candidate.exists():
+                print(f"[_resolve_cli] Using installed command (Scripts): {candidate}")
+                return [str(candidate)]
+        except Exception:
+            pass
+        try:
+            user_base = site.getuserbase()
+            if user_base:
+                candidates = [
+                    Path(user_base) / "Scripts",
+                    Path(user_base) / "Python312" / "Scripts",
+                    Path(user_base) / "python312" / "Scripts",
+                ]
+                for scripts_dir in candidates:
+                    candidate = scripts_dir / f"{name}.exe"
+                    if candidate.exists():
+                        print(f"[_resolve_cli] Using installed command (user Scripts): {candidate}")
+                        return [str(candidate)]
+        except Exception:
+            pass
+    if force:
+        raise RuntimeError(f"{name} not found in PATH. Install with: pip install -e .")
+    module = name.replace("cli-anything-", "cli_anything.")
+    print(f"[_resolve_cli] Falling back to: {sys.executable} -m {module}")
+    return [sys.executable, "-m", module]
+
+
+def _make_sample_epub(path: Path, title: str = "Sample Book", author: str = "Test Author") -> Path:
+    with zipfile.ZipFile(path, "w") as zf:
+        zf.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
+        zf.writestr(
+            "META-INF/container.xml",
+            """<?xml version="1.0" encoding="utf-8"?>
+<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
+  <rootfiles>
+    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
+  </rootfiles>
+</container>
+""",
+        )
+        zf.writestr(
+            "OEBPS/chapter.xhtml",
+            """<?xml version="1.0" encoding="utf-8"?>
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head><title>Sample</title></head>
+<body>
+  <h1>Sample</h1>
+  <p>Hello calibre.</p>
+</body>
+</html>
+""",
+        )
+        zf.writestr(
+            "OEBPS/toc.ncx",
+            """<?xml version="1.0" encoding="utf-8"?>
+<ncx xmlns="http://www.daisy.org/z3986/2005/ncx/" version="2005-1">
+  <head>
+    <meta name="dtb:uid" content="urn:uuid:12345678-1234-1234-1234-123456789abc"/>
+  </head>
+  <docTitle><text>Sample Book</text></docTitle>
+  <navMap>
+    <navPoint id="ch1" playOrder="1">
+      <navLabel><text>Chapter 1</text></navLabel>
+      <content src="chapter.xhtml"/>
+    </navPoint>
+  </navMap>
+</ncx>
+""",
+        )
+        zf.writestr(
+            "OEBPS/content.opf",
+            f"""<?xml version="1.0" encoding="utf-8"?>
+<package xmlns="http://www.idpf.org/2007/opf" unique-identifier="BookId" version="2.0">
+  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:opf="http://www.idpf.org/2007/opf">
+    <dc:title>{title}</dc:title>
+    <dc:creator opf:role="aut">{author}</dc:creator>
+    <dc:language>en</dc:language>
+    <dc:identifier id="BookId">urn:uuid:12345678-1234-1234-1234-123456789abc</dc:identifier>
+  </metadata>
+  <manifest>
+    <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
+    <item id="chapter" href="chapter.xhtml" media-type="application/xhtml+xml"/>
+  </manifest>
+  <spine toc="ncx">
+    <itemref idref="chapter"/>
+  </spine>
+</package>
+""",
+        )
+    return path
+
+
+def _run_raw(cmd, env=None):
+    encoding = locale.getpreferredencoding(False) or "utf-8"
+    return subprocess.run(
+        cmd,
+        capture_output=True,
+        text=True,
+        env=env,
+        encoding=encoding,
+        errors="replace",
+    )
+
+
+def _run_cli(cli_base, args, env=None):
+    return _run_raw(cli_base + args, env=env)
+
+
+@pytest.fixture(scope="module")
+def cli_base():
+    return _resolve_cli("cli-anything-calibre")
+
+
+@pytest.fixture
+def workflow_root():
+    root = Path(tempfile.mkdtemp(prefix="ccal-"))
+    try:
+        yield root
+    finally:
+        shutil.rmtree(root, ignore_errors=True)
+
+
+@pytest.fixture
+def workflow_env(workflow_root):
+    env = os.environ.copy()
+    env["USERPROFILE"] = str(workflow_root / "home")
+    return env
+
+
+@pytest.fixture
+def real_library(workflow_root):
+    library = workflow_root / "lib"
+    result = _run_raw(
+        [_require_binary("calibredb"), "list", "--for-machine", "--fields", "id,title", "--with-library", str(library)]
+    )
+    assert result.returncode == 0, result.stderr or result.stdout
+    assert (library / "metadata.db").exists()
+    return library
+
+
+@pytest.fixture
+def sample_epub(workflow_root):
+    return _make_sample_epub(workflow_root / "workflow-sample.epub", title="Workflow Sample", author="Workflow Fixture")
+
+
+class TestCLISubprocess:
+    def test_help(self, cli_base):
+        result = _run_cli(cli_base, ["--help"])
+        assert result.returncode == 0
+        assert "library" in result.stdout
+
+
+def test_calibredb_available():
+    _require_binary("calibredb")
+    result = _run_raw([shutil.which("calibredb"), "--version"])
+    assert result.returncode == 0
+
+
+def test_ebook_convert_available():
+    _require_binary("ebook-convert")
+    result = _run_raw([shutil.which("ebook-convert"), "--version"])
+    assert result.returncode == 0
+
+
+def test_json_library_command_requires_valid_library(tmp_path, cli_base, workflow_env):
+    _require_binary("calibredb")
+    fake_lib = tmp_path / "fake"
+    fake_lib.mkdir()
+    result = _run_cli(
+        cli_base,
+        ["--json", "--library", str(fake_lib), "library", "info"],
+        env=workflow_env,
+    )
+    assert result.returncode != 0
+    data = json.loads(result.stdout)
+    assert "error" in data
+
+
+def test_meta_show_missing_file_errors(cli_base, workflow_env):
+    _require_binary("ebook-meta")
+    result = _run_cli(
+        cli_base,
+        ["--json", "meta", "show", "definitely-missing.epub"],
+        env=workflow_env,
+    )
+    assert result.returncode != 0
+    data = json.loads(result.stdout)
+    assert data["type"] in {"file_not_found", "RuntimeError", "FileNotFoundError"}
+
+
+# Added ebook-meta workflow:
+def test_workflow_meta_set_then_show_reflects_changes(cli_base, workflow_env, sample_epub):
+    _require_binary("ebook-meta")
+    # 1) show before (JSON mode)
+    before = _run_cli(cli_base, ["--json", "meta", "show", str(sample_epub)], env=workflow_env)
+    assert before.returncode == 0
+    before_data = json.loads(before.stdout)
+    assert before_data["path"] == str(sample_epub)
+    assert isinstance(before_data["metadata"], str)
+    # 2) set title/authors
+    new_title = "Workflow Meta Title"
+    new_authors = "Workflow Meta Author"
+    set_result = _run_cli(
+        cli_base,
+        ["--json", "meta", "set", str(sample_epub), "--title", new_title, "--authors", new_authors],
+        env=workflow_env,
+    )
+    assert set_result.returncode == 0
+    set_data = json.loads(set_result.stdout)
+    assert set_data["path"] == str(sample_epub)
+    # 3) show after and assert new metadata appears
+    after = _run_cli(cli_base, ["--json", "meta", "show", str(sample_epub)], env=workflow_env)
+    assert after.returncode == 0
+    after_data = json.loads(after.stdout)
+    meta_text = after_data["metadata"]
+    assert new_title in meta_text
+    # authors formatting varies across calibre versions; keep it lenient:
+    assert "Workflow Meta Author" in
meta_text + +def test_workflow_ingest_and_inspect(cli_base, workflow_env, real_library, sample_epub): + _require_binary("calibredb") + _require_binary("ebook-meta") + add_result = _run_cli( + cli_base, + [ + "--json", + "--library", + str(real_library), + "book", + "add", + str(sample_epub), + "--title", + "Workflow Book", + "--authors", + "Workflow Author", + "--tags", + "workflow,test", + ], + env=workflow_env, + ) + assert add_result.returncode == 0 + add_data = json.loads(add_result.stdout) + assert "input" in add_data + + list_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "list"], + env=workflow_env, + ) + assert list_result.returncode == 0 + books = json.loads(list_result.stdout) + assert len(books) == 1 + book = books[0] + assert book["title"] == "Workflow Book" + assert book["id"] == 1 + assert "Workflow Author" in str(book.get("authors")) + assert book.get("formats") + book_id = book["id"] + + search_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "search", "Workflow"], + env=workflow_env, + ) + assert search_result.returncode == 0 + search_books = json.loads(search_result.stdout) + assert any(item["id"] == book_id for item in search_books) + + get_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "get", str(book_id)], + env=workflow_env, + ) + assert get_result.returncode == 0 + get_data = json.loads(get_result.stdout) + assert get_data["book_id"] == book_id + assert "Workflow Book" in get_data["metadata"] + assert "Workflow Author" in get_data["metadata"] + + meta_result = _run_cli( + cli_base, + ["--json", "meta", "show", str(sample_epub)], + env=workflow_env, + ) + assert meta_result.returncode == 0 + meta_data = json.loads(meta_result.stdout) + assert meta_data["path"] == str(sample_epub) + assert "Workflow Sample" in meta_data["metadata"] + + +def test_workflow_export_and_convert(cli_base, workflow_env, real_library, sample_epub, 
workflow_root): + _require_binary("calibredb") + _require_binary("ebook-convert") + add_result = _run_cli( + cli_base, + [ + "--json", + "--library", + str(real_library), + "book", + "add", + str(sample_epub), + "--title", + "Export Workflow Book", + "--authors", + "Export Workflow Author", + ], + env=workflow_env, + ) + assert add_result.returncode == 0 + + list_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "list"], + env=workflow_env, + ) + books = json.loads(list_result.stdout) + assert books + book_id = books[0]["id"] + + export_dir = workflow_root / "exported" + export_result = _run_cli( + cli_base, + [ + "--json", + "--library", + str(real_library), + "export", + "book", + str(book_id), + "--to-dir", + str(export_dir), + "--single-dir", + ], + env=workflow_env, + ) + assert export_result.returncode == 0 + export_data = json.loads(export_result.stdout) + assert export_data["book_ids"] == [book_id] + exported_files = [p for p in export_dir.rglob("*") if p.is_file()] + assert exported_files + exported_epubs = [p for p in exported_files if p.suffix.lower() == ".epub"] + assert exported_epubs + exported_epub = exported_epubs[0] + assert exported_epub.stat().st_size > 0 + assert exported_epub.read_bytes()[:4] == b"PK\x03\x04" + + converted = workflow_root / "converted" / "workflow-output.mobi" + convert_result = _run_cli( + cli_base, + ["--json", "convert", "run", str(exported_epub), str(converted), "--preset", "kindle"], + env=workflow_env, + ) + assert convert_result.returncode == 0 + convert_data = json.loads(convert_result.stdout) + assert convert_data["output"] == str(converted.resolve()) + assert convert_data["exists"] is True + assert convert_data["file_size"] > 0 + assert converted.exists() + header = converted.read_bytes()[:128] + assert b"BOOKMOBI" in header + + +def test_workflow_library_mutation(cli_base, workflow_env, real_library, sample_epub, workflow_root): + _require_binary("calibredb") + """library mutation 
ๅทฅไฝœๆต: book add โ†’ book set-field โ†’ book get ้ชŒ่ฏๅญ—ๆฎตๅ˜ๆ›ด โ†’ export book ้ชŒ่ฏๅฏผๅ‡บ็›ฎๅฝ•็ป“ๆž„""" + + # โ”€โ”€ Step 1: book add โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + add_result = _run_cli( + cli_base, + [ + "--json", + "--library", str(real_library), + "book", "add", str(sample_epub), + "--title", "Mutation Original Title", + "--authors", "Mutation Original Author", + ], + env=workflow_env, + ) + assert add_result.returncode == 0, f"book add failed: {add_result.stderr}" + add_data = json.loads(add_result.stdout) + assert "input" in add_data + + # ๅ–ๅพ—ๅˆšๆทปๅŠ ็š„ book_id + list_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "list"], + env=workflow_env, + ) + assert list_result.returncode == 0 + books = json.loads(list_result.stdout) + assert books, "library should have at least one book after add" + book_id = books[0]["id"] + assert books[0]["title"] == "Mutation Original Title" + print(f"\nโœ“ [Step 1] book add ๆˆๅŠŸ โ€” book_id={book_id}, ๆ ‡้ข˜='{books[0]['title']}', ๆ–‡ไปถ={add_data['input']}") + + # โ”€โ”€ Step 2: book set-field ไฟฎๆ”นๆ ‡้ข˜ๅ’Œไฝœ่€… โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + set_result = _run_cli( + cli_base, + [ + "--json", + "--library", str(real_library), + "book", "set-field", str(book_id), + "--title", "Mutation Updated Title", + "--authors", "Mutation Updated Author", + ], + env=workflow_env, + ) + assert set_result.returncode == 0, f"book set-field failed: {set_result.stderr}" + set_data = json.loads(set_result.stdout) + assert set_data["book_id"] == book_id + print(f"โœ“ [Step 2] book set-field ๆˆๅŠŸ โ€” book_id={book_id}, ๆ–ฐๆ ‡้ข˜='Mutation Updated Title', ๆ–ฐไฝœ่€…='Mutation Updated Author'") + + # โ”€โ”€ Step 3: book get ้ชŒ่ฏๅญ—ๆฎต็œŸ็š„ๅ˜ไบ† 
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + get_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "get", str(book_id)], + env=workflow_env, + ) + assert get_result.returncode == 0, f"book get failed: {get_result.stderr}" + get_data = json.loads(get_result.stdout) + assert get_data["book_id"] == book_id + assert "Mutation Updated Title" in get_data["metadata"], \ + f"Updated title not found in metadata: {get_data['metadata']}" + assert "Mutation Updated Author" in get_data["metadata"], \ + f"Updated author not found in metadata: {get_data['metadata']}" + # ๆ—งๅ€ผไธๅบ”ๅ†ๅ‡บ็Žฐ + assert "Mutation Original Title" not in get_data["metadata"], \ + "Old title should have been replaced" + print(f"โœ“ [Step 3] book get ้ชŒ่ฏ้€š่ฟ‡ โ€” ๆ–ฐๆ ‡้ข˜/ไฝœ่€…ๅทฒๅ†™ๅ…ฅ๏ผŒๆ—งๆ ‡้ข˜ๅทฒๆ›ฟๆข") + + # โ”€โ”€ Step 4: export book ้ชŒ่ฏๅฏผๅ‡บ็›ฎๅฝ•็ป“ๆž„ โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + export_dir = workflow_root / "mutation_export" + export_result = _run_cli( + cli_base, + [ + "--json", + "--library", str(real_library), + "export", "book", str(book_id), + "--to-dir", str(export_dir), + "--single-dir", + ], + env=workflow_env, + ) + assert export_result.returncode == 0, f"export book failed: {export_result.stderr}" + export_data = json.loads(export_result.stdout) + assert export_data["book_ids"] == [book_id] + assert export_data["output_dir"] == str(export_dir.resolve()) + + # ้ชŒ่ฏๅฏผๅ‡บ็›ฎๅฝ•ๅญ˜ๅœจไธ”ๅŒ…ๅซๆ–‡ไปถ + assert export_dir.exists(), "export directory should exist" + exported_files = [p for p in export_dir.rglob("*") if p.is_file()] + assert exported_files, "export directory should contain at least one file" + + # ้ชŒ่ฏๅฏผๅ‡บ็š„ epub ๆ–‡ไปถ้ž็ฉบไธ”ๆ˜ฏๅˆๆณ• ZIP๏ผˆepub ๆœฌ่ดจๆ˜ฏ ZIP๏ผ‰ + exported_epubs = [p for p in exported_files if p.suffix.lower() == ".epub"] + assert exported_epubs, "should 
have at least one exported epub file" + exported_epub = exported_epubs[0] + assert exported_epub.stat().st_size > 0, "exported epub should not be empty" + assert exported_epub.read_bytes()[:4] == b"PK\x03\x04", "exported epub should be a valid ZIP/epub" + # ้ชŒ่ฏๅฏผๅ‡บๆ–‡ไปถๅๅŒ…ๅซๆ›ดๆ–ฐๅŽ็š„ๆ ‡้ข˜๏ผˆ่ฏๆ˜Ž set-field ็š„ไฟฎๆ”น็กฎๅฎž็”Ÿๆ•ˆ๏ผ‰ + assert "Mutation Updated Title" in exported_epub.name, \ + f"Exported filename should contain updated title, got: {exported_epub.name}" + print(f"โœ“ [Step 4] export book ๆˆๅŠŸ โ€” ๅฏผๅ‡บ็›ฎๅฝ•: {export_dir}") + print(f" ๅฏผๅ‡บๆ–‡ไปถๅˆ—่กจ:") + for f in exported_files: + print(f" - {f.name} ({f.stat().st_size:,} bytes)") + + +def test_workflow_export_catalog_creates_file(cli_base, workflow_env, real_library, sample_epub, workflow_root): + _require_binary("calibredb") + add_result = _run_cli( + cli_base, + [ + "--json", + "--library", + str(real_library), + "book", + "add", + str(sample_epub), + "--title", + "Catalog Book", + "--authors", + "Catalog Author", + ], + env=workflow_env, + ) + assert add_result.returncode == 0, add_result.stderr or add_result.stdout + + output_path = workflow_root / "catalog.csv" + result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "export", "catalog", str(output_path)], + env=workflow_env, + ) + assert result.returncode == 0, result.stderr or result.stdout + data = json.loads(result.stdout) + assert data["output"] == str(output_path.resolve()) + assert output_path.exists() + assert output_path.stat().st_size > 0 + + +def test_workflow_export_backup_metadata_creates_opf(cli_base, workflow_env, real_library, sample_epub): + _require_binary("calibredb") + add_result = _run_cli( + cli_base, + [ + "--json", + "--library", + str(real_library), + "book", + "add", + str(sample_epub), + "--title", + "Backup Book", + "--authors", + "Backup Author", + ], + env=workflow_env, + ) + assert add_result.returncode == 0, add_result.stderr or add_result.stdout + + result = _run_cli( + 
cli_base, + ["--json", "--library", str(real_library), "export", "backup", "--all"], + env=workflow_env, + ) + assert result.returncode == 0, result.stderr or result.stdout + data = json.loads(result.stdout) + assert "stdout" in data + + opf_files = [p for p in Path(real_library).rglob("*.opf") if p.is_file()] + assert opf_files, "backup_metadata should create at least one .opf file in the library" + + +def test_workflow_add_then_remove_book_disappears_from_list(cli_base, workflow_env, real_library, sample_epub): + _require_binary("calibredb") + add_result = _run_cli( + cli_base, + [ + "--json", + "--library", + str(real_library), + "book", + "add", + str(sample_epub), + "--title", + "Remove Me", + "--authors", + "Removable Author", + ], + env=workflow_env, + ) + assert add_result.returncode == 0, add_result.stderr or add_result.stdout + + list_result = _run_cli(cli_base, ["--json", "--library", str(real_library), "book", "list"], env=workflow_env) + assert list_result.returncode == 0 + books = json.loads(list_result.stdout) + assert books + book_id = books[0]["id"] + + remove_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "remove", str(book_id)], + env=workflow_env, + ) + assert remove_result.returncode == 0, remove_result.stderr or remove_result.stdout + + list_after = _run_cli(cli_base, ["--json", "--library", str(real_library), "book", "list"], env=workflow_env) + assert list_after.returncode == 0 + after_books = json.loads(list_after.stdout) + assert all(item["id"] != book_id for item in after_books) + + +def test_workflow_library_stats_matches_book_count(cli_base, workflow_env, real_library, sample_epub, workflow_root): + _require_binary("calibredb") + sample2 = _make_sample_epub(workflow_root / "workflow-sample-2.epub", title="Workflow Two", author="Workflow Fixture") + + for title in ["Stats Book 1", "Stats Book 2"]: + add_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "add", 
str(sample_epub), "--title", title, "--authors", "Stats Author"], + env=workflow_env, + ) + assert add_result.returncode == 0, add_result.stderr or add_result.stdout + sample_epub = sample2 + + stats_result = _run_cli(cli_base, ["--json", "--library", str(real_library), "library", "stats"], env=workflow_env) + assert stats_result.returncode == 0, stats_result.stderr or stats_result.stdout + stats = json.loads(stats_result.stdout) + assert stats["book_count"] == 2 + assert stats["with_formats"] >= 1 + assert isinstance(stats.get("formats"), list) + + +def test_book_list_sort_by_title_and_limit(cli_base, workflow_env, real_library, sample_epub, workflow_root): + _require_binary("calibredb") + sample2 = _make_sample_epub(workflow_root / "workflow-sample-b.epub", title="Workflow B", author="Workflow Fixture") + books_to_add = [ + ("B Title", sample2), + ("A Title", sample_epub), + ] + for title, epub in books_to_add: + add_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "add", str(epub), "--title", title, "--authors", "Sort Author"], + env=workflow_env, + ) + assert add_result.returncode == 0, add_result.stderr or add_result.stdout + + list_result = _run_cli( + cli_base, + [ + "--json", + "--library", + str(real_library), + "book", + "list", + "--sort-by", + "id", + "--ascending", + "--limit", + "1", + ], + env=workflow_env, + ) + assert list_result.returncode == 0, list_result.stderr or list_result.stdout + data = json.loads(list_result.stdout) + assert isinstance(data, list) + assert len(data) == 1 + assert data[0]["id"] == 1 + + +def test_workflow_session_management_status_and_save(cli_base, workflow_env, real_library, sample_epub): + _require_binary("calibredb") + + add_result = _run_cli( + cli_base, + [ + "--json", + "--library", + str(real_library), + "book", + "add", + str(sample_epub), + "--title", + "Session Test Book", + "--authors", + "Session Author", + ], + env=workflow_env, + ) + assert add_result.returncode == 0, 
add_result.stderr or add_result.stdout + + status_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "session", "status"], + env=workflow_env, + ) + assert status_result.returncode == 0, status_result.stderr or status_result.stdout + status_data = json.loads(status_result.stdout) + assert status_data["has_library"] is True + assert status_data["library_path"] + + save_result = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "session", "save"], + env=workflow_env, + ) + assert save_result.returncode == 0, save_result.stderr or save_result.stdout + save_data = json.loads(save_result.stdout) + assert "saved" in save_data + + +def test_workflow_book_set_field_updates_metadata(cli_base, workflow_env, real_library, sample_epub): + _require_binary("calibredb") + + add_result = _run_cli( + cli_base, + [ + "--json", + "--library", + str(real_library), + "book", + "add", + str(sample_epub), + "--title", + "Field Test Book", + "--authors", + "Field Author", + ], + env=workflow_env, + ) + assert add_result.returncode == 0, add_result.stderr or add_result.stdout + + list_result = _run_cli(cli_base, ["--json", "--library", str(real_library), "book", "list"], env=workflow_env) + assert list_result.returncode == 0, list_result.stderr or list_result.stdout + books = json.loads(list_result.stdout) + assert books + book_id = books[0]["id"] + + get_before = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "get", str(book_id)], + env=workflow_env, + ) + assert get_before.returncode == 0, get_before.stderr or get_before.stdout + before_data = json.loads(get_before.stdout) + assert "Field Test Book" in before_data["metadata"] + + set_result = _run_cli( + cli_base, + [ + "--json", + "--library", + str(real_library), + "book", + "set-field", + str(book_id), + "--title", + "Updated Field Book", + "--authors", + "Updated Author", + "--tags", + "test,updated", + ], + env=workflow_env, + ) + assert set_result.returncode == 
0, set_result.stderr or set_result.stdout + + get_after = _run_cli( + cli_base, + ["--json", "--library", str(real_library), "book", "get", str(book_id)], + env=workflow_env, + ) + assert get_after.returncode == 0, get_after.stderr or get_after.stdout + after_data = json.loads(get_after.stdout) + assert "Updated Field Book" in after_data["metadata"] + assert "Updated Author" in after_data["metadata"] + + +def test_convert_presets_and_formats_and_invalid_preset(cli_base, workflow_env): + presets_result = _run_cli(cli_base, ["--json", "convert", "presets"], env=workflow_env) + assert presets_result.returncode == 0, presets_result.stderr or presets_result.stdout + presets_data = json.loads(presets_result.stdout) + assert "kindle" in presets_data + assert "generic-epub" in presets_data + assert "tablet" in presets_data + + formats_result = _run_cli(cli_base, ["--json", "convert", "formats"], env=workflow_env) + assert formats_result.returncode == 0, formats_result.stderr or formats_result.stdout + formats_data = json.loads(formats_result.stdout) + assert isinstance(formats_data, list) + assert "epub" in [x.lower() for x in formats_data] + assert "mobi" in [x.lower() for x in formats_data] + + # invalid preset should fail before any real ebook-convert invocation + convert_result = _run_cli( + cli_base, + ["--json", "convert", "run", "missing.epub", "output.mobi", "--preset", "invalid_preset"], + env=workflow_env, + ) + assert convert_result.returncode != 0 + error_data = json.loads(convert_result.stdout) + assert "error" in error_data + + diff --git a/calibre/agent-harness/cli_anything/calibre/utils/__init__.py b/calibre/agent-harness/cli_anything/calibre/utils/__init__.py new file mode 100644 index 0000000000..060d5de4f8 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/utils/__init__.py @@ -0,0 +1 @@ +"""cli-anything calibre utils package.""" diff --git a/calibre/agent-harness/cli_anything/calibre/utils/calibre_backend.py 
b/calibre/agent-harness/cli_anything/calibre/utils/calibre_backend.py new file mode 100644 index 0000000000..753c3b54c8 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/utils/calibre_backend.py @@ -0,0 +1,270 @@ +"""calibre backend wrappers around real calibre CLI tools.""" + +from __future__ import annotations + +import json +import locale +import os +import shutil +import subprocess +import tempfile +from pathlib import Path +from typing import Any + + +INSTALL_HINT = ( + "calibre is not installed or not on PATH. Install calibre and ensure these commands are available: " + "calibredb, ebook-convert, ebook-meta" +) + + +def _find_binary(*names: str) -> str: + for name in names: + path = shutil.which(name) + if path: + return path + raise RuntimeError(INSTALL_HINT) + + +def find_calibredb() -> str: + return _find_binary("calibredb") + + +def find_ebook_convert() -> str: + return _find_binary("ebook-convert") + + +def find_ebook_meta() -> str: + return _find_binary("ebook-meta") + + +def _run(cmd: list[str], timeout: int = 120) -> dict[str, Any]: + encoding = locale.getpreferredencoding(False) or "utf-8" + result = subprocess.run( + cmd, + capture_output=True, + text=True, + timeout=timeout, + encoding=encoding, + errors="replace", + ) + if result.returncode != 0: + raise RuntimeError( + f"Command failed (exit {result.returncode}): {' '.join(cmd)}\n" + f"stdout:\n{result.stdout[-4000:]}\n" + f"stderr:\n{result.stderr[-4000:]}" + ) + return { + "command": cmd, + "stdout": result.stdout, + "stderr": result.stderr, + "returncode": result.returncode, + } + + +def run_calibredb(args: list[str], library_path: str | None = None, timeout: int = 120) -> dict[str, Any]: + exe = find_calibredb() + cmd = [exe] + if library_path: + cmd.extend(["--with-library", os.path.abspath(library_path)]) + cmd.extend(args) + return _run(cmd, timeout=timeout) + + +def calibredb_list( + library_path: str, + fields: str = "id,title,authors,formats", + search: str | None = None, + 
limit: int | None = None, + sort_by: str | None = None, + ascending: bool = False, +) -> list[dict[str, Any]]: + args = ["list", "--for-machine", "--fields", fields] + if search: + args.extend(["--search", search]) + if limit is not None: + args.extend(["--limit", str(limit)]) + if sort_by: + args.extend(["--sort-by", sort_by]) + if ascending: + args.append("--ascending") + result = run_calibredb(args, library_path=library_path) + return json.loads(result["stdout"] or "[]") + + +def calibredb_add( + library_path: str, + input_path: str, + title: str | None = None, + authors: str | None = None, + tags: str | None = None, + series: str | None = None, + duplicate: bool = False, +) -> dict[str, Any]: + abs_input = os.path.abspath(input_path) + if not os.path.exists(abs_input): + raise FileNotFoundError(f"Input file not found: {abs_input}") + args = ["add"] + if duplicate: + args.append("--duplicates") + if title: + args.extend(["--title", title]) + if authors: + args.extend(["--authors", authors]) + if tags: + args.extend(["--tags", tags]) + if series: + args.extend(["--series", series]) + args.append(abs_input) + result = run_calibredb(args, library_path=library_path) + return {"stdout": result["stdout"].strip(), "input": abs_input} + + +def calibredb_remove(library_path: str, book_id: int, permanent: bool = False) -> dict[str, Any]: + args = ["remove"] + if permanent: + args.append("--permanent") + args.append(str(book_id)) + result = run_calibredb(args, library_path=library_path) + return {"removed": book_id, "stdout": result["stdout"].strip()} + + +def calibredb_show_metadata(library_path: str, book_id: int, as_opf: bool = False) -> dict[str, Any]: + args = ["show_metadata"] + if as_opf: + args.append("--as-opf") + args.append(str(book_id)) + result = run_calibredb(args, library_path=library_path) + return {"book_id": book_id, "metadata": result["stdout"]} + + +def calibredb_export( + library_path: str, + book_ids: list[int], + to_dir: str, + single_dir: bool = 
False, + formats: str | None = None, +) -> dict[str, Any]: + out_dir = os.path.abspath(to_dir) + os.makedirs(out_dir, exist_ok=True) + args = ["export", "--to-dir", out_dir] + if single_dir: + args.append("--single-dir") + if formats: + args.extend(["--formats", formats]) + args.extend(str(x) for x in book_ids) + result = run_calibredb(args, library_path=library_path) + return {"output_dir": out_dir, "book_ids": book_ids, "stdout": result["stdout"].strip()} + + +def calibredb_catalog(library_path: str, output_path: str, search: str | None = None) -> dict[str, Any]: + # calibredb's `catalog` subcommand is unusual: it requires the output filename + # to appear immediately after `catalog` (before any options). Some calibre + # builds treat `--with-library` as an option and will error if it appears + # before the output filename. + exe = find_calibredb() + abs_output = os.path.abspath(output_path) + abs_library = os.path.abspath(library_path) + os.makedirs(os.path.dirname(abs_output), exist_ok=True) + + cmd = [exe, "catalog", abs_output, "--with-library", abs_library] + if search: + cmd.extend(["--search", search]) + result = _run(cmd, timeout=300) + return {"output": abs_output, "stdout": result["stdout"].strip()} + + +def calibredb_backup_metadata(library_path: str, all_records: bool = False) -> dict[str, Any]: + args = ["backup_metadata"] + if all_records: + args.append("--all") + result = run_calibredb(args, library_path=library_path, timeout=300) + return {"library_path": os.path.abspath(library_path), "stdout": result["stdout"].strip()} + + +def ebook_meta_show(book_path: str) -> dict[str, Any]: + exe = find_ebook_meta() + abs_path = os.path.abspath(book_path) + result = _run([exe, abs_path]) + return {"path": abs_path, "metadata": result["stdout"]} + + +def ebook_meta_set( + book_path: str, + title: str | None = None, + authors: str | None = None, + cover: str | None = None, + language: str | None = None, + publisher: str | None = None, + tags: str | None = 
None, + comments: str | None = None, +) -> dict[str, Any]: + exe = find_ebook_meta() + abs_path = os.path.abspath(book_path) + cmd = [exe, abs_path] + if title: + cmd.extend(["--title", title]) + if authors: + cmd.extend(["--authors", authors]) + if cover: + cmd.extend(["--cover", os.path.abspath(cover)]) + if language: + cmd.extend(["--language", language]) + if publisher: + cmd.extend(["--publisher", publisher]) + if tags: + cmd.extend(["--tags", tags]) + if comments: + cmd.extend(["--comments", comments]) + result = _run(cmd) + return {"path": abs_path, "stdout": result["stdout"].strip()} + + +def ebook_convert(input_path: str, output_path: str, extra_args: list[str] | None = None) -> dict[str, Any]: + exe = find_ebook_convert() + abs_input = os.path.abspath(input_path) + abs_output = os.path.abspath(output_path) + os.makedirs(os.path.dirname(abs_output), exist_ok=True) + cmd = [exe, abs_input, abs_output] + if extra_args: + cmd.extend(extra_args) + result = _run(cmd, timeout=600) + return { + "input": abs_input, + "output": abs_output, + "stdout": result["stdout"].strip(), + "stderr": result["stderr"].strip(), + "exists": os.path.exists(abs_output), + "file_size": os.path.getsize(abs_output) if os.path.exists(abs_output) else 0, + } + + +def write_opf_temp(title: str | None = None, authors: str | None = None, tags: str | None = None) -> str: + parts = [ + '', + '', + ' ', + ] + if title: + parts.append(f' {title}') + if authors: + for author in [x.strip() for x in authors.split('&') if x.strip()]: + parts.append(f' {author}') + if tags: + for tag in [x.strip() for x in tags.split(',') if x.strip()]: + parts.append(f' {tag}') + parts.extend([' ', '']) + fd, temp_path = tempfile.mkstemp(suffix='.opf', prefix='cli-anything-calibre-') + os.close(fd) + Path(temp_path).write_text('\n'.join(parts), encoding='utf-8') + return temp_path + + +def calibredb_set_metadata(library_path: str, book_id: int, opf_path: str) -> dict[str, Any]: + args = ["set_metadata", 
str(book_id), os.path.abspath(opf_path)] + result = run_calibredb(args, library_path=library_path) + return {"book_id": book_id, "opf_path": os.path.abspath(opf_path), "stdout": result["stdout"].strip()} + + +def detect_available_formats() -> list[str]: + return ["epub", "mobi", "azw3", "pdf", "txt", "html", "docx"] diff --git a/calibre/agent-harness/cli_anything/calibre/utils/repl_skin.py b/calibre/agent-harness/cli_anything/calibre/utils/repl_skin.py new file mode 100644 index 0000000000..c7312348a7 --- /dev/null +++ b/calibre/agent-harness/cli_anything/calibre/utils/repl_skin.py @@ -0,0 +1,521 @@ +"""cli-anything REPL Skin โ€” Unified terminal interface for all CLI harnesses. + +Copy this file into your CLI package at: + cli_anything//utils/repl_skin.py + +Usage: + from cli_anything..utils.repl_skin import ReplSkin + + skin = ReplSkin("shotcut", version="1.0.0") + skin.print_banner() # auto-detects skills/SKILL.md inside the package + prompt_text = skin.prompt(project_name="my_video.mlt", modified=True) + skin.success("Project saved") + skin.error("File not found") + skin.warning("Unsaved changes") + skin.info("Processing 24 clips...") + skin.status("Track 1", "3 clips, 00:02:30") + skin.table(headers, rows) + skin.print_goodbye() +""" + +import os +import sys + +# โ”€โ”€ ANSI color codes (no external deps for core styling) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +_RESET = "\033[0m" +_BOLD = "\033[1m" +_DIM = "\033[2m" +_ITALIC = "\033[3m" +_UNDERLINE = "\033[4m" + +# Brand colors +_CYAN = "\033[38;5;80m" # cli-anything brand cyan +_CYAN_BG = "\033[48;5;80m" +_WHITE = "\033[97m" +_GRAY = "\033[38;5;245m" +_DARK_GRAY = "\033[38;5;240m" +_LIGHT_GRAY = "\033[38;5;250m" + +# Software accent colors โ€” each software gets a unique accent +_ACCENT_COLORS = { + "gimp": "\033[38;5;214m", # warm orange + "blender": "\033[38;5;208m", # deep orange + "inkscape": "\033[38;5;39m", # bright blue + "audacity": "\033[38;5;33m", # navy blue + "libreoffice": "\033[38;5;40m", 
# green + "obs_studio": "\033[38;5;55m", # purple + "kdenlive": "\033[38;5;69m", # slate blue + "shotcut": "\033[38;5;35m", # teal green +} +_DEFAULT_ACCENT = "\033[38;5;75m" # default sky blue + +# Status colors +_GREEN = "\033[38;5;78m" +_YELLOW = "\033[38;5;220m" +_RED = "\033[38;5;196m" +_BLUE = "\033[38;5;75m" +_MAGENTA = "\033[38;5;176m" + +# โ”€โ”€ Brand icon โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +# The cli-anything icon: a small colored diamond/chevron mark +_ICON = f"{_CYAN}{_BOLD}โ—†{_RESET}" +_ICON_SMALL = f"{_CYAN}โ–ธ{_RESET}" + +# โ”€โ”€ Box drawing characters โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +_H_LINE = "โ”€" +_V_LINE = "โ”‚" +_TL = "โ•ญ" +_TR = "โ•ฎ" +_BL = "โ•ฐ" +_BR = "โ•ฏ" +_T_DOWN = "โ”ฌ" +_T_UP = "โ”ด" +_T_RIGHT = "โ”œ" +_T_LEFT = "โ”ค" +_CROSS = "โ”ผ" + + +def _strip_ansi(text: str) -> str: + """Remove ANSI escape codes for length calculation.""" + import re + return re.sub(r"\033\[[^m]*m", "", text) + + +def _visible_len(text: str) -> int: + """Get visible length of text (excluding ANSI codes).""" + return len(_strip_ansi(text)) + + +class ReplSkin: + """Unified REPL skin for cli-anything CLIs. + + Provides consistent branding, prompts, and message formatting + across all CLI harnesses built with the cli-anything methodology. + """ + + def __init__(self, software: str, version: str = "1.0.0", + history_file: str | None = None, skill_path: str | None = None): + """Initialize the REPL skin. + + Args: + software: Software name (e.g., "gimp", "shotcut", "blender"). + version: CLI version string. + history_file: Path for persistent command history. + Defaults to ~/.cli-anything-/history + skill_path: Path to the SKILL.md file for agent discovery. + Auto-detected from the package's skills/ directory if not provided. 
+ Displayed in banner for AI agents to know where to read skill info. + """ + self.software = software.lower().replace("-", "_") + self.display_name = software.replace("_", " ").title() + self.version = version + + # Auto-detect skill path from package layout: + # cli_anything//utils/repl_skin.py (this file) + # cli_anything//skills/SKILL.md (target) + if skill_path is None: + from pathlib import Path + _auto = Path(__file__).resolve().parent.parent / "skills" / "SKILL.md" + if _auto.is_file(): + skill_path = str(_auto) + self.skill_path = skill_path + self.accent = _ACCENT_COLORS.get(self.software, _DEFAULT_ACCENT) + + # History file + if history_file is None: + from pathlib import Path + hist_dir = Path.home() / f".cli-anything-{self.software}" + hist_dir.mkdir(parents=True, exist_ok=True) + self.history_file = str(hist_dir / "history") + else: + self.history_file = history_file + + # Detect terminal capabilities + self._color = self._detect_color_support() + + def _detect_color_support(self) -> bool: + """Check if terminal supports color.""" + if os.environ.get("NO_COLOR"): + return False + if os.environ.get("CLI_ANYTHING_NO_COLOR"): + return False + if not hasattr(sys.stdout, "isatty"): + return False + return sys.stdout.isatty() + + def _c(self, code: str, text: str) -> str: + """Apply color code if colors are supported.""" + if not self._color: + return text + return f"{code}{text}{_RESET}" + + # โ”€โ”€ Banner โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + + def print_banner(self): + """Print the startup banner with branding.""" + inner = 54 + + def _box_line(content: str) -> str: + """Wrap content in box drawing, padding to inner width.""" + pad = inner - _visible_len(content) + vl = self._c(_DARK_GRAY, _V_LINE) + return f"{vl}{content}{' ' * max(0, pad)}{vl}" + + top = self._c(_DARK_GRAY, f"{_TL}{_H_LINE * inner}{_TR}") + bot = 
self._c(_DARK_GRAY, f"{_BL}{_H_LINE * inner}{_BR}") + + # Title: โ—† cli-anything ยท Shotcut + icon = self._c(_CYAN + _BOLD, "โ—†") + brand = self._c(_CYAN + _BOLD, "cli-anything") + dot = self._c(_DARK_GRAY, "ยท") + name = self._c(self.accent + _BOLD, self.display_name) + title = f" {icon} {brand} {dot} {name}" + + ver = f" {self._c(_DARK_GRAY, f' v{self.version}')}" + tip = f" {self._c(_DARK_GRAY, ' Type help for commands, quit to exit')}" + empty = "" + + # Skill path for agent discovery + skill_line = None + if self.skill_path: + skill_icon = self._c(_MAGENTA, "โ—‡") + skill_label = self._c(_DARK_GRAY, " Skill:") + skill_path_display = self._c(_LIGHT_GRAY, self.skill_path) + skill_line = f" {skill_icon} {skill_label} {skill_path_display}" + + print(top) + print(_box_line(title)) + print(_box_line(ver)) + if skill_line: + print(_box_line(skill_line)) + print(_box_line(empty)) + print(_box_line(tip)) + print(bot) + print() + + # โ”€โ”€ Prompt โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + + def prompt(self, project_name: str = "", modified: bool = False, + context: str = "") -> str: + """Build a styled prompt string for prompt_toolkit or input(). + + Args: + project_name: Current project name (empty if none open). + modified: Whether the project has unsaved changes. + context: Optional extra context to show in prompt. + + Returns: + Formatted prompt string. 
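+
+        Example (illustrative only; names are hypothetical):
+            text = skin.prompt(project_name="library.db", modified=True)
+            # a styled prompt string along the lines of "◆ calibre [library.db*] ❯ "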
+ """ + parts = [] + + # Icon + if self._color: + parts.append(f"{_CYAN}โ—†{_RESET} ") + else: + parts.append("> ") + + # Software name + parts.append(self._c(self.accent + _BOLD, self.software)) + + # Project context + if project_name or context: + ctx = context or project_name + mod = "*" if modified else "" + parts.append(f" {self._c(_DARK_GRAY, '[')}") + parts.append(self._c(_LIGHT_GRAY, f"{ctx}{mod}")) + parts.append(self._c(_DARK_GRAY, ']')) + + parts.append(self._c(_GRAY, " โฏ ")) + + return "".join(parts) + + def prompt_tokens(self, project_name: str = "", modified: bool = False, + context: str = ""): + """Build prompt_toolkit formatted text tokens for the prompt. + + Use with prompt_toolkit's FormattedText for proper ANSI handling. + + Returns: + list of (style, text) tuples for prompt_toolkit. + """ + accent_hex = _ANSI_256_TO_HEX.get(self.accent, "#5fafff") + tokens = [] + + tokens.append(("class:icon", "โ—† ")) + tokens.append(("class:software", self.software)) + + if project_name or context: + ctx = context or project_name + mod = "*" if modified else "" + tokens.append(("class:bracket", " [")) + tokens.append(("class:context", f"{ctx}{mod}")) + tokens.append(("class:bracket", "]")) + + tokens.append(("class:arrow", " โฏ ")) + + return tokens + + def get_prompt_style(self): + """Get a prompt_toolkit Style object matching the skin. 
+ + Returns: + prompt_toolkit.styles.Style + """ + try: + from prompt_toolkit.styles import Style + except ImportError: + return None + + accent_hex = _ANSI_256_TO_HEX.get(self.accent, "#5fafff") + + return Style.from_dict({ + "icon": "#5fdfdf bold", # cyan brand color + "software": f"{accent_hex} bold", + "bracket": "#585858", + "context": "#bcbcbc", + "arrow": "#808080", + # Completion menu + "completion-menu.completion": "bg:#303030 #bcbcbc", + "completion-menu.completion.current": f"bg:{accent_hex} #000000", + "completion-menu.meta.completion": "bg:#303030 #808080", + "completion-menu.meta.completion.current": f"bg:{accent_hex} #000000", + # Auto-suggest + "auto-suggest": "#585858", + # Bottom toolbar + "bottom-toolbar": "bg:#1c1c1c #808080", + "bottom-toolbar.text": "#808080", + }) + + # โ”€โ”€ Messages โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + + def success(self, message: str): + """Print a success message with green checkmark.""" + icon = self._c(_GREEN + _BOLD, "โœ“") + print(f" {icon} {self._c(_GREEN, message)}") + + def error(self, message: str): + """Print an error message with red cross.""" + icon = self._c(_RED + _BOLD, "โœ—") + print(f" {icon} {self._c(_RED, message)}", file=sys.stderr) + + def warning(self, message: str): + """Print a warning message with yellow triangle.""" + icon = self._c(_YELLOW + _BOLD, "โš ") + print(f" {icon} {self._c(_YELLOW, message)}") + + def info(self, message: str): + """Print an info message with blue dot.""" + icon = self._c(_BLUE, "โ—") + print(f" {icon} {self._c(_LIGHT_GRAY, message)}") + + def hint(self, message: str): + """Print a subtle hint message.""" + print(f" {self._c(_DARK_GRAY, message)}") + + def section(self, title: str): + """Print a section header.""" + print() + print(f" {self._c(self.accent + _BOLD, title)}") + print(f" {self._c(_DARK_GRAY, _H_LINE * len(title))}") + + # โ”€โ”€ 
Status display โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + + def status(self, label: str, value: str): + """Print a key-value status line.""" + lbl = self._c(_GRAY, f" {label}:") + val = self._c(_WHITE, f" {value}") + print(f"{lbl}{val}") + + def status_block(self, items: dict[str, str], title: str = ""): + """Print a block of status key-value pairs. + + Args: + items: Dict of label -> value pairs. + title: Optional title for the block. + """ + if title: + self.section(title) + + max_key = max(len(k) for k in items) if items else 0 + for label, value in items.items(): + lbl = self._c(_GRAY, f" {label:<{max_key}}") + val = self._c(_WHITE, f" {value}") + print(f"{lbl}{val}") + + def progress(self, current: int, total: int, label: str = ""): + """Print a simple progress indicator. + + Args: + current: Current step number. + total: Total number of steps. + label: Optional label for the progress. + """ + pct = int(current / total * 100) if total > 0 else 0 + bar_width = 20 + filled = int(bar_width * current / total) if total > 0 else 0 + bar = "โ–ˆ" * filled + "โ–‘" * (bar_width - filled) + text = f" {self._c(_CYAN, bar)} {self._c(_GRAY, f'{pct:3d}%')}" + if label: + text += f" {self._c(_LIGHT_GRAY, label)}" + print(text) + + # โ”€โ”€ Table display โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + + def table(self, headers: list[str], rows: list[list[str]], + max_col_width: int = 40): + """Print a formatted table with box-drawing characters. + + Args: + headers: Column header strings. + rows: List of rows, each a list of cell strings. + max_col_width: Maximum column width before truncation. 
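+
+        Example (illustrative only; sample data is hypothetical):
+            skin.table(["ID", "Title"], [["1", "Dune"], ["2", "Emma"]])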
+ """ + if not headers: + return + + # Calculate column widths + col_widths = [min(len(h), max_col_width) for h in headers] + for row in rows: + for i, cell in enumerate(row): + if i < len(col_widths): + col_widths[i] = min( + max(col_widths[i], len(str(cell))), max_col_width + ) + + def pad(text: str, width: int) -> str: + t = str(text)[:width] + return t + " " * (width - len(t)) + + # Header + header_cells = [ + self._c(_CYAN + _BOLD, pad(h, col_widths[i])) + for i, h in enumerate(headers) + ] + sep = self._c(_DARK_GRAY, f" {_V_LINE} ") + header_line = f" {sep.join(header_cells)}" + print(header_line) + + # Separator + sep_parts = [self._c(_DARK_GRAY, _H_LINE * w) for w in col_widths] + sep_line = self._c(_DARK_GRAY, f" {'โ”€โ”€โ”€'.join([_H_LINE * w for w in col_widths])}") + print(sep_line) + + # Rows + for row in rows: + cells = [] + for i, cell in enumerate(row): + if i < len(col_widths): + cells.append(self._c(_LIGHT_GRAY, pad(str(cell), col_widths[i]))) + row_sep = self._c(_DARK_GRAY, f" {_V_LINE} ") + print(f" {row_sep.join(cells)}") + + # โ”€โ”€ Help display โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + + def help(self, commands: dict[str, str]): + """Print a formatted help listing. + + Args: + commands: Dict of command -> description pairs. 
+ """ + self.section("Commands") + max_cmd = max(len(c) for c in commands) if commands else 0 + for cmd, desc in commands.items(): + cmd_styled = self._c(self.accent, f" {cmd:<{max_cmd}}") + desc_styled = self._c(_GRAY, f" {desc}") + print(f"{cmd_styled}{desc_styled}") + print() + + # โ”€โ”€ Goodbye โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + + def print_goodbye(self): + """Print a styled goodbye message.""" + print(f"\n {_ICON_SMALL} {self._c(_GRAY, 'Goodbye!')}\n") + + # โ”€โ”€ Prompt toolkit session factory โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + + def create_prompt_session(self): + """Create a prompt_toolkit PromptSession with skin styling. + + Returns: + A configured PromptSession, or None if prompt_toolkit unavailable. + """ + try: + from prompt_toolkit import PromptSession + from prompt_toolkit.history import FileHistory + from prompt_toolkit.auto_suggest import AutoSuggestFromHistory + from prompt_toolkit.formatted_text import FormattedText + + style = self.get_prompt_style() + + session = PromptSession( + history=FileHistory(self.history_file), + auto_suggest=AutoSuggestFromHistory(), + style=style, + enable_history_search=True, + ) + return session + except ImportError: + return None + + def get_input(self, pt_session, project_name: str = "", + modified: bool = False, context: str = "") -> str: + """Get input from user using prompt_toolkit or fallback. + + Args: + pt_session: A prompt_toolkit PromptSession (or None). + project_name: Current project name. + modified: Whether project has unsaved changes. + context: Optional context string. + + Returns: + User input string (stripped). 
+ """ + if pt_session is not None: + from prompt_toolkit.formatted_text import FormattedText + tokens = self.prompt_tokens(project_name, modified, context) + return pt_session.prompt(FormattedText(tokens)).strip() + else: + raw_prompt = self.prompt(project_name, modified, context) + return input(raw_prompt).strip() + + # โ”€โ”€ Toolbar builder โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + + def bottom_toolbar(self, items: dict[str, str]): + """Create a bottom toolbar callback for prompt_toolkit. + + Args: + items: Dict of label -> value pairs to show in toolbar. + + Returns: + A callable that returns FormattedText for the toolbar. + """ + def toolbar(): + from prompt_toolkit.formatted_text import FormattedText + parts = [] + for i, (k, v) in enumerate(items.items()): + if i > 0: + parts.append(("class:bottom-toolbar.text", " โ”‚ ")) + parts.append(("class:bottom-toolbar.text", f" {k}: ")) + parts.append(("class:bottom-toolbar", v)) + return FormattedText(parts) + return toolbar + + +# โ”€โ”€ ANSI 256-color to hex mapping (for prompt_toolkit styles) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +_ANSI_256_TO_HEX = { + "\033[38;5;33m": "#0087ff", # audacity navy blue + "\033[38;5;35m": "#00af5f", # shotcut teal + "\033[38;5;39m": "#00afff", # inkscape bright blue + "\033[38;5;40m": "#00d700", # libreoffice green + "\033[38;5;55m": "#5f00af", # obs purple + "\033[38;5;69m": "#5f87ff", # kdenlive slate blue + "\033[38;5;75m": "#5fafff", # default sky blue + "\033[38;5;80m": "#5fd7d7", # brand cyan + "\033[38;5;208m": "#ff8700", # blender deep orange + "\033[38;5;214m": "#ffaf00", # gimp warm orange +} diff --git a/calibre/agent-harness/setup.py b/calibre/agent-harness/setup.py new file mode 100644 index 0000000000..7a0bb15095 --- /dev/null +++ b/calibre/agent-harness/setup.py @@ -0,0 +1,58 @@ +#!/usr/bin/env python3 +""" +setup.py for cli-anything-calibre + +Install with: pip install -e . 
+Or publish to PyPI: python -m build && twine upload dist/*
+"""
+
+from pathlib import Path
+from setuptools import setup, find_namespace_packages
+
+ROOT = Path(__file__).parent
+README = ROOT / "cli_anything/calibre/README.md"
+long_description = README.read_text(encoding="utf-8") if README.exists() else ""
+
+setup(
+    name="cli-anything-calibre",
+    version="1.0.0",
+    author="cli-anything contributors",
+    author_email="",
+    description="CLI harness for calibre - ebook library management, metadata editing, and format conversion via calibredb/ebook-convert/ebook-meta.",
+    long_description=long_description,
+    long_description_content_type="text/markdown",
+    url="https://github.com/HKUDS/CLI-Anything",
+    packages=find_namespace_packages(include=["cli_anything.*"]),
+    classifiers=[
+        "Development Status :: 4 - Beta",
+        "Intended Audience :: Developers",
+        "Topic :: Software Development :: Libraries :: Python Modules",
+        "Topic :: Text Processing",
+        "License :: OSI Approved :: MIT License",
+        "Programming Language :: Python :: 3",
+        "Programming Language :: Python :: 3.10",
+        "Programming Language :: Python :: 3.11",
+        "Programming Language :: Python :: 3.12",
+    ],
+    python_requires=">=3.10",
+    install_requires=[
+        "click>=8.0.0",
+        "prompt-toolkit>=3.0.0",
+    ],
+    extras_require={
+        "dev": [
+            "pytest>=7.0.0",
+            "pytest-cov>=4.0.0",
+        ],
+    },
+    entry_points={
+        "console_scripts": [
+            "cli-anything-calibre=cli_anything.calibre.calibre_cli:main",
+        ],
+    },
+    package_data={
+        "cli_anything.calibre": ["skills/*.md"],
+    },
+    include_package_data=True,
+    zip_safe=False,
+)
diff --git a/cli-anything-plugin/scripts/setup-cli-anything.sh b/cli-anything-plugin/scripts/setup-cli-anything.sh
old mode 100755
new mode 100644
diff --git a/cli-anything-plugin/verify-plugin.sh b/cli-anything-plugin/verify-plugin.sh
old mode 100755
new mode 100644
diff --git a/codex-skill/scripts/install.sh b/codex-skill/scripts/install.sh
old mode 100755
new mode 100644
diff --git 
a/comfyui/agent-harness/cli_anything/comfyui/README.md b/comfyui/agent-harness/cli_anything/comfyui/README.md index b9628454d0..00566ecee9 100644 --- a/comfyui/agent-harness/cli_anything/comfyui/README.md +++ b/comfyui/agent-harness/cli_anything/comfyui/README.md @@ -1,95 +1,95 @@ -# cli-anything-comfyui - -CLI harness for **ComfyUI** โ€” manage AI image generation workflows, queue prompts, inspect models, and download outputs from the command line via the ComfyUI REST API. - -## Installation - -```bash -pip install cli-anything-comfyui -# or from source: -cd comfyui/agent-harness && pip install -e . -``` - -## Prerequisites - -1. [ComfyUI](https://github.com/comfyanonymous/ComfyUI) installed and running -2. ComfyUI server accessible at `http://localhost:8188` (default) -3. At least one checkpoint model installed in ComfyUI's `models/checkpoints/` directory - -## Quick Start - -```bash -# Check server status -cli-anything-comfyui system stats - -# List available models -cli-anything-comfyui models checkpoints - -# List workflows -cli-anything-comfyui workflow list - -# Queue a workflow from a JSON file -cli-anything-comfyui queue prompt --workflow my_workflow.json - -# Check queue status -cli-anything-comfyui queue status - -# List output images -cli-anything-comfyui images list --prompt-id - -# Download an output image -cli-anything-comfyui images download --filename ComfyUI_00001_.png --output ./output.png - -# Interactive mode -cli-anything-comfyui repl -``` - -## Commands - -| Group | Commands | -|---|---| -| `workflow` | `list`, `load`, `validate` | -| `queue` | `prompt`, `status`, `clear`, `history`, `interrupt` | -| `models` | `checkpoints`, `loras`, `vaes`, `controlnets`, `node-info`, `list-nodes` | -| `images` | `list`, `download`, `download-all` | -| `system` | `stats`, `info` | - -## Agent Usage (JSON mode) - -All commands support `--json` for machine-readable output: - -```bash -cli-anything-comfyui --json models checkpoints -cli-anything-comfyui --json 
queue status -cli-anything-comfyui --json queue history -``` - -## Custom Server URL - -```bash -cli-anything-comfyui --url http://192.168.1.100:8188 system stats -``` - -## Workflow JSON Format - -ComfyUI workflows use a node graph format. Export them from the ComfyUI web UI via **Save (API Format)**: - -```json -{ - "3": { - "class_type": "KSampler", - "inputs": { - "cfg": 7, - "denoise": 1, - "model": ["4", 0], - "positive": ["6", 0], - "negative": ["7", 0], - "latent_image": ["5", 0], - "sampler_name": "euler", - "scheduler": "normal", - "seed": 42, - "steps": 20 - } - } -} -``` +# cli-anything-comfyui + +CLI harness for **ComfyUI** โ€” manage AI image generation workflows, queue prompts, inspect models, and download outputs from the command line via the ComfyUI REST API. + +## Installation + +```bash +pip install cli-anything-comfyui +# or from source: +cd comfyui/agent-harness && pip install -e . +``` + +## Prerequisites + +1. [ComfyUI](https://github.com/comfyanonymous/ComfyUI) installed and running +2. ComfyUI server accessible at `http://localhost:8188` (default) +3. 
At least one checkpoint model installed in ComfyUI's `models/checkpoints/` directory
+
+## Quick Start
+
+```bash
+# Check server status
+cli-anything-comfyui system stats
+
+# List available models
+cli-anything-comfyui models checkpoints
+
+# List workflows
+cli-anything-comfyui workflow list
+
+# Queue a workflow from a JSON file
+cli-anything-comfyui queue prompt --workflow my_workflow.json
+
+# Check queue status
+cli-anything-comfyui queue status
+
+# List output images
+cli-anything-comfyui images list --prompt-id <prompt_id>
+
+# Download an output image
+cli-anything-comfyui images download --filename ComfyUI_00001_.png --output ./output.png
+
+# Interactive mode
+cli-anything-comfyui repl
+```
+
+## Commands
+
+| Group | Commands |
+|---|---|
+| `workflow` | `list`, `load`, `validate` |
+| `queue` | `prompt`, `status`, `clear`, `history`, `interrupt` |
+| `models` | `checkpoints`, `loras`, `vaes`, `controlnets`, `node-info`, `list-nodes` |
+| `images` | `list`, `download`, `download-all` |
+| `system` | `stats`, `info` |
+
+## Agent Usage (JSON mode)
+
+All commands support `--json` for machine-readable output:
+
+```bash
+cli-anything-comfyui --json models checkpoints
+cli-anything-comfyui --json queue status
+cli-anything-comfyui --json queue history
+```
+
+## Custom Server URL
+
+```bash
+cli-anything-comfyui --url http://192.168.1.100:8188 system stats
+```
+
+## Workflow JSON Format
+
+ComfyUI workflows use a node graph format.
Export them from the ComfyUI web UI via **Save (API Format)**: + +```json +{ + "3": { + "class_type": "KSampler", + "inputs": { + "cfg": 7, + "denoise": 1, + "model": ["4", 0], + "positive": ["6", 0], + "negative": ["7", 0], + "latent_image": ["5", 0], + "sampler_name": "euler", + "scheduler": "normal", + "seed": 42, + "steps": 20 + } + } +} +``` diff --git a/comfyui/agent-harness/cli_anything/comfyui/comfyui_cli.py b/comfyui/agent-harness/cli_anything/comfyui/comfyui_cli.py index 78b8799e14..2058f540ff 100644 --- a/comfyui/agent-harness/cli_anything/comfyui/comfyui_cli.py +++ b/comfyui/agent-harness/cli_anything/comfyui/comfyui_cli.py @@ -1,413 +1,413 @@ -#!/usr/bin/env python3 -"""ComfyUI CLI โ€” Manage AI image generation from the command line. - -This CLI wraps the ComfyUI REST API. It covers the full generation lifecycle: -workflow management, queue operations, model discovery, and image retrieval. - -Usage: - # Check server status - cli-anything-comfyui system stats - - # List available checkpoints - cli-anything-comfyui models checkpoints - - # Queue a workflow - cli-anything-comfyui queue prompt --workflow my_workflow.json - - # Check queue - cli-anything-comfyui queue status - - # Download images - cli-anything-comfyui images download --filename ComfyUI_00001_.png --output ./out.png - - # Interactive REPL - cli-anything-comfyui repl -""" - -import sys -import os -import json -import shlex -import click - -sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) - -from cli_anything.comfyui.core import workflows as workflow_mod -from cli_anything.comfyui.core import queue as queue_mod -from cli_anything.comfyui.core import models as models_mod -from cli_anything.comfyui.core import images as images_mod -from cli_anything.comfyui.utils.comfyui_backend import api_get, DEFAULT_BASE_URL - -# Global state -_json_output = False -_base_url = DEFAULT_BASE_URL - - -def output(data, message: str = ""): - """Print output in JSON or 
human-readable format.""" - if _json_output: - click.echo(json.dumps(data, indent=2, default=str)) - else: - if message: - click.echo(message) - if isinstance(data, dict): - _print_dict(data) - elif isinstance(data, list): - _print_list(data) - else: - click.echo(str(data)) - - -def _print_dict(d: dict, indent: int = 0): - prefix = " " * indent - for k, v in d.items(): - if isinstance(v, dict): - click.echo(f"{prefix}{k}:") - _print_dict(v, indent + 1) - elif isinstance(v, list): - click.echo(f"{prefix}{k}:") - _print_list(v, indent + 1) - else: - click.echo(f"{prefix}{k}: {v}") - - -def _print_list(items: list, indent: int = 0): - prefix = " " * indent - for i, item in enumerate(items): - if isinstance(item, dict): - click.echo(f"{prefix}[{i}]") - _print_dict(item, indent + 1) - else: - click.echo(f"{prefix}- {item}") - - -def handle_error(func): - """Decorator for consistent error handling.""" - def wrapper(*args, **kwargs): - try: - return func(*args, **kwargs) - except Exception as e: - if _json_output: - click.echo(json.dumps({ - "error": str(e), - "type": type(e).__name__, - })) - else: - click.echo(f"Error: {e}", err=True) - sys.exit(1) - wrapper.__name__ = func.__name__ - wrapper.__doc__ = func.__doc__ - return wrapper - - -# โ”€โ”€ Main CLI Group โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ -@click.group(invoke_without_command=True) -@click.option("--json", "use_json", is_flag=True, help="Output as JSON") -@click.option("--url", default=DEFAULT_BASE_URL, show_default=True, - help="ComfyUI server URL") -@click.pass_context -def cli(ctx, use_json, url): - """ComfyUI CLI โ€” AI image generation from the command line. - - Run without a subcommand to enter interactive REPL mode. 
- """ - global _json_output, _base_url - _json_output = use_json - _base_url = url - - if ctx.invoked_subcommand is None: - ctx.invoke(repl) - - -# โ”€โ”€ Workflow Commands โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ -@cli.group() -def workflow(): - """Workflow file management.""" - pass - - -@workflow.command("list") -@click.argument("directory", default=".", type=click.Path()) -@handle_error -def workflow_list(directory): - """List workflow JSON files in a directory.""" - result = workflow_mod.list_workflows(directory) - output(result, f"Workflows in {directory}:") - - -@workflow.command("load") -@click.argument("path", type=click.Path(exists=True)) -@handle_error -def workflow_load(path): - """Load and display a workflow JSON file.""" - result = workflow_mod.load_workflow(path) - output(result, f"Workflow: {path}") - - -@workflow.command("validate") -@click.argument("path", type=click.Path(exists=True)) -@handle_error -def workflow_validate(path): - """Validate the structure of a workflow JSON file.""" - wf = workflow_mod.load_workflow(path) - result = workflow_mod.validate_workflow(wf) - output(result, f"Validation: {path}") - if result["valid"]: - click.echo(" Workflow is valid.") - else: - click.echo(f" {len(result['errors'])} error(s) found.", err=True) - - -# โ”€โ”€ Queue Commands โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ -@cli.group() -def queue(): - """Prompt queue management.""" - pass - - -@queue.command("prompt") -@click.option("--workflow", "-w", required=True, type=click.Path(exists=True), - help="Path to workflow JSON file (API format)") -@click.option("--client-id", default=None, help="Client ID for tracking") -@handle_error -def queue_prompt(workflow, client_id): - """Queue a workflow for generation.""" - wf = workflow_mod.load_workflow(workflow) - result = 
queue_mod.queue_prompt(_base_url, wf, client_id=client_id) - output(result, f"Queued prompt: {result.get('prompt_id', '')}") - - -@queue.command("status") -@handle_error -def queue_status(): - """Show current queue status (running and pending items).""" - result = queue_mod.get_queue_status(_base_url) - output(result, "Queue status:") - - -@queue.command("clear") -@click.option("--confirm", is_flag=True, help="Skip confirmation") -@handle_error -def queue_clear(confirm): - """Clear all pending items from the queue.""" - if not confirm: - click.confirm("Clear the queue?", abort=True) - result = queue_mod.clear_queue(_base_url) - output(result, "Queue cleared.") - - -@queue.command("history") -@click.option("--max-items", type=int, default=None, help="Maximum entries to show") -@handle_error -def queue_history(max_items): - """Show completed prompt history.""" - result = queue_mod.get_history(_base_url, max_items=max_items) - output(result, f"History ({result.get('total', 0)} entries):") - - -@queue.command("interrupt") -@handle_error -def queue_interrupt(): - """Stop the currently running generation.""" - result = queue_mod.interrupt(_base_url) - output(result, "Generation interrupted.") - - -# โ”€โ”€ Models Commands โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ -@cli.group() -def models(): - """Model discovery commands.""" - pass - - -@models.command("checkpoints") -@handle_error -def models_checkpoints(): - """List available checkpoint models.""" - result = models_mod.list_checkpoints(_base_url) - output(result, f"Checkpoints ({len(result)}):") - - -@models.command("loras") -@handle_error -def models_loras(): - """List available LoRA models.""" - result = models_mod.list_loras(_base_url) - output(result, f"LoRAs ({len(result)}):") - - -@models.command("vaes") -@handle_error -def models_vaes(): - """List available VAE models.""" - result = models_mod.list_vaes(_base_url) - 
-    output(result, f"VAEs ({len(result)}):")
-
-
-@models.command("controlnets")
-@handle_error
-def models_controlnets():
-    """List available ControlNet models."""
-    result = models_mod.list_controlnets(_base_url)
-    output(result, f"ControlNets ({len(result)}):")
-
-
-@models.command("node-info")
-@click.argument("node_class")
-@handle_error
-def models_node_info(node_class):
-    """Get input/output schema for a node class (e.g., KSampler)."""
-    result = models_mod.get_node_info(_base_url, node_class)
-    output(result)
-
-
-@models.command("list-nodes")
-@handle_error
-def models_list_nodes():
-    """List all available node class names."""
-    result = models_mod.list_all_node_classes(_base_url)
-    output(result, f"Node classes ({len(result)}):")
-
-
-# ── Images Commands ─────────────────────────────────────────────
-@cli.group()
-def images():
-    """Image output management."""
-    pass
-
-
-@images.command("list")
-@click.option("--prompt-id", required=True, help="Prompt ID to list images for")
-@handle_error
-def images_list(prompt_id):
-    """List output images for a completed prompt."""
-    result = images_mod.list_output_images(_base_url, prompt_id)
-    output(result, f"Output images for {prompt_id}:")
-
-
-@images.command("download")
-@click.option("--filename", required=True, help="Image filename (e.g., ComfyUI_00001_.png)")
-@click.option("--output", "output_path", required=True,
-              type=click.Path(), help="Local path to save the image")
-@click.option("--subfolder", default="", help="Subfolder in ComfyUI output dir")
-@click.option("--type", "image_type", default="output",
-              type=click.Choice(["output", "input", "temp"]),
-              help="Image type")
-@click.option("--overwrite", is_flag=True, help="Overwrite existing file")
-@handle_error
-def images_download(filename, output_path, subfolder, image_type, overwrite):
-    """Download a single output image from ComfyUI."""
-    result = images_mod.download_image(
-        base_url=_base_url,
-        filename=filename,
-        output_path=output_path,
-        subfolder=subfolder,
-        image_type=image_type,
-        overwrite=overwrite,
-    )
-    output(result, f"Downloaded: {output_path}")
-
-
-@images.command("download-all")
-@click.option("--prompt-id", required=True, help="Prompt ID to download images for")
-@click.option("--output-dir", required=True,
-              type=click.Path(), help="Directory to save images into")
-@click.option("--overwrite", is_flag=True, help="Overwrite existing files")
-@handle_error
-def images_download_all(prompt_id, output_dir, overwrite):
-    """Download all output images for a prompt to a directory."""
-    result = images_mod.download_prompt_images(
-        base_url=_base_url,
-        prompt_id=prompt_id,
-        output_dir=output_dir,
-        overwrite=overwrite,
-    )
-    output(result, f"Downloaded {len(result)} image(s) to {output_dir}")
-
-
-# ── System Commands ─────────────────────────────────────────────
-@cli.group()
-def system():
-    """System information commands."""
-    pass
-
-
-@system.command("stats")
-@handle_error
-def system_stats():
-    """Show GPU/memory system stats."""
-    result = api_get(_base_url, "/system_stats")
-    output(result, "System stats:")
-
-
-@system.command("info")
-@handle_error
-def system_info():
-    """Show ComfyUI server information."""
-    result = api_get(_base_url, "/")
-    output(result, "Server info:")
-
-
-# ── REPL ────────────────────────────────────────────────────────
-@cli.command()
-@handle_error
-def repl():
-    """Start interactive REPL session."""
-    click.echo("ComfyUI CLI REPL — type 'help' for commands, 'quit' to exit")
-    click.echo(f"Server: {_base_url}")
-
-    try:
-        api_get(_base_url, "/system_stats")
-        click.echo("Connected to ComfyUI server.")
-    except Exception as e:
-        click.echo(f"Warning: Could not connect to ComfyUI: {e}", err=True)
-
-    repl_commands = {
-        "workflow": "list|load|validate",
-        "queue": "prompt|status|clear|history|interrupt",
-        "models": "checkpoints|loras|vaes|controlnets|node-info|list-nodes",
-        "images": "list|download|download-all",
-        "system": "stats|info",
-        "help": "Show this help",
-        "quit": "Exit REPL",
-    }
-
-    while True:
-        try:
-            line = click.prompt("comfyui", prompt_suffix="> ", default="", show_default=False)
-            line = line.strip()
-            if not line:
-                continue
-            if line.lower() in ("quit", "exit", "q"):
-                click.echo("Goodbye.")
-                break
-            if line.lower() == "help":
-                for cmd, subs in repl_commands.items():
-                    click.echo(f"  {cmd:<12} {subs}")
-                continue
-
-            try:
-                args = shlex.split(line)
-            except ValueError:
-                args = line.split()
-            try:
-                cli.main(args, standalone_mode=False)
-            except SystemExit:
-                pass
-            except click.exceptions.UsageError as e:
-                click.echo(f"Usage error: {e}", err=True)
-            except Exception as e:
-                click.echo(f"Error: {e}", err=True)
-
-        except (EOFError, KeyboardInterrupt):
-            click.echo("\nGoodbye.")
-            break
-
-
-# ── Entry Point ─────────────────────────────────────────────────
-def main():
-    cli()
-
-
-if __name__ == "__main__":
-    main()
+#!/usr/bin/env python3
+"""ComfyUI CLI — Manage AI image generation from the command line.
+
+This CLI wraps the ComfyUI REST API. It covers the full generation lifecycle:
+workflow management, queue operations, model discovery, and image retrieval.
+
+Usage:
+    # Check server status
+    cli-anything-comfyui system stats
+
+    # List available checkpoints
+    cli-anything-comfyui models checkpoints
+
+    # Queue a workflow
+    cli-anything-comfyui queue prompt --workflow my_workflow.json
+
+    # Check queue
+    cli-anything-comfyui queue status
+
+    # Download images
+    cli-anything-comfyui images download --filename ComfyUI_00001_.png --output ./out.png
+
+    # Interactive REPL
+    cli-anything-comfyui repl
+"""
+
+import sys
+import os
+import json
+import shlex
+import click
+
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+from cli_anything.comfyui.core import workflows as workflow_mod
+from cli_anything.comfyui.core import queue as queue_mod
+from cli_anything.comfyui.core import models as models_mod
+from cli_anything.comfyui.core import images as images_mod
+from cli_anything.comfyui.utils.comfyui_backend import api_get, DEFAULT_BASE_URL
+
+# Global state
+_json_output = False
+_base_url = DEFAULT_BASE_URL
+
+
+def output(data, message: str = ""):
+    """Print output in JSON or human-readable format."""
+    if _json_output:
+        click.echo(json.dumps(data, indent=2, default=str))
+    else:
+        if message:
+            click.echo(message)
+        if isinstance(data, dict):
+            _print_dict(data)
+        elif isinstance(data, list):
+            _print_list(data)
+        else:
+            click.echo(str(data))
+
+
+def _print_dict(d: dict, indent: int = 0):
+    prefix = "  " * indent
+    for k, v in d.items():
+        if isinstance(v, dict):
+            click.echo(f"{prefix}{k}:")
+            _print_dict(v, indent + 1)
+        elif isinstance(v, list):
+            click.echo(f"{prefix}{k}:")
+            _print_list(v, indent + 1)
+        else:
+            click.echo(f"{prefix}{k}: {v}")
+
+
+def _print_list(items: list, indent: int = 0):
+    prefix = "  " * indent
+    for i, item in enumerate(items):
+        if isinstance(item, dict):
+            click.echo(f"{prefix}[{i}]")
+            _print_dict(item, indent + 1)
+        else:
+            click.echo(f"{prefix}- {item}")
+
+
+def handle_error(func):
+    """Decorator for consistent error handling."""
+    def wrapper(*args, **kwargs):
+        try:
+            return func(*args, **kwargs)
+        except Exception as e:
+            if _json_output:
+                click.echo(json.dumps({
+                    "error": str(e),
+                    "type": type(e).__name__,
+                }))
+            else:
+                click.echo(f"Error: {e}", err=True)
+            sys.exit(1)
+    wrapper.__name__ = func.__name__
+    wrapper.__doc__ = func.__doc__
+    return wrapper
+
+
+# ── Main CLI Group ──────────────────────────────────────────────
+@click.group(invoke_without_command=True)
+@click.option("--json", "use_json", is_flag=True, help="Output as JSON")
+@click.option("--url", default=DEFAULT_BASE_URL, show_default=True,
+              help="ComfyUI server URL")
+@click.pass_context
+def cli(ctx, use_json, url):
+    """ComfyUI CLI — AI image generation from the command line.
+
+    Run without a subcommand to enter interactive REPL mode.
+    """
+    global _json_output, _base_url
+    _json_output = use_json
+    _base_url = url
+
+    if ctx.invoked_subcommand is None:
+        ctx.invoke(repl)
+
+
+# ── Workflow Commands ───────────────────────────────────────────
+@cli.group()
+def workflow():
+    """Workflow file management."""
+    pass
+
+
+@workflow.command("list")
+@click.argument("directory", default=".", type=click.Path())
+@handle_error
+def workflow_list(directory):
+    """List workflow JSON files in a directory."""
+    result = workflow_mod.list_workflows(directory)
+    output(result, f"Workflows in {directory}:")
+
+
+@workflow.command("load")
+@click.argument("path", type=click.Path(exists=True))
+@handle_error
+def workflow_load(path):
+    """Load and display a workflow JSON file."""
+    result = workflow_mod.load_workflow(path)
+    output(result, f"Workflow: {path}")
+
+
+@workflow.command("validate")
+@click.argument("path", type=click.Path(exists=True))
+@handle_error
+def workflow_validate(path):
+    """Validate the structure of a workflow JSON file."""
+    wf = workflow_mod.load_workflow(path)
+    result = workflow_mod.validate_workflow(wf)
+    output(result, f"Validation: {path}")
+    if result["valid"]:
+        click.echo("  Workflow is valid.")
+    else:
+        click.echo(f"  {len(result['errors'])} error(s) found.", err=True)
+
+
+# ── Queue Commands ──────────────────────────────────────────────
+@cli.group()
+def queue():
+    """Prompt queue management."""
+    pass
+
+
+@queue.command("prompt")
+@click.option("--workflow", "-w", required=True, type=click.Path(exists=True),
+              help="Path to workflow JSON file (API format)")
+@click.option("--client-id", default=None, help="Client ID for tracking")
+@handle_error
+def queue_prompt(workflow, client_id):
+    """Queue a workflow for generation."""
+    wf = workflow_mod.load_workflow(workflow)
+    result = queue_mod.queue_prompt(_base_url, wf, client_id=client_id)
+    output(result, f"Queued prompt: {result.get('prompt_id', '')}")
+
+
+@queue.command("status")
+@handle_error
+def queue_status():
+    """Show current queue status (running and pending items)."""
+    result = queue_mod.get_queue_status(_base_url)
+    output(result, "Queue status:")
+
+
+@queue.command("clear")
+@click.option("--confirm", is_flag=True, help="Skip confirmation")
+@handle_error
+def queue_clear(confirm):
+    """Clear all pending items from the queue."""
+    if not confirm:
+        click.confirm("Clear the queue?", abort=True)
+    result = queue_mod.clear_queue(_base_url)
+    output(result, "Queue cleared.")
+
+
+@queue.command("history")
+@click.option("--max-items", type=int, default=None, help="Maximum entries to show")
+@handle_error
+def queue_history(max_items):
+    """Show completed prompt history."""
+    result = queue_mod.get_history(_base_url, max_items=max_items)
+    output(result, f"History ({result.get('total', 0)} entries):")
+
+
+@queue.command("interrupt")
+@handle_error
+def queue_interrupt():
+    """Stop the currently running generation."""
+    result = queue_mod.interrupt(_base_url)
+    output(result, "Generation interrupted.")
+
+
+# ── Models Commands ─────────────────────────────────────────────
+@cli.group()
+def models():
+    """Model discovery commands."""
+    pass
+
+
+@models.command("checkpoints")
+@handle_error
+def models_checkpoints():
+    """List available checkpoint models."""
+    result = models_mod.list_checkpoints(_base_url)
+    output(result, f"Checkpoints ({len(result)}):")
+
+
+@models.command("loras")
+@handle_error
+def models_loras():
+    """List available LoRA models."""
+    result = models_mod.list_loras(_base_url)
+    output(result, f"LoRAs ({len(result)}):")
+
+
+@models.command("vaes")
+@handle_error
+def models_vaes():
+    """List available VAE models."""
+    result = models_mod.list_vaes(_base_url)
+    output(result, f"VAEs ({len(result)}):")
+
+
+@models.command("controlnets")
+@handle_error
+def models_controlnets():
+    """List available ControlNet models."""
+    result = models_mod.list_controlnets(_base_url)
+    output(result, f"ControlNets ({len(result)}):")
+
+
+@models.command("node-info")
+@click.argument("node_class")
+@handle_error
+def models_node_info(node_class):
+    """Get input/output schema for a node class (e.g., KSampler)."""
+    result = models_mod.get_node_info(_base_url, node_class)
+    output(result)
+
+
+@models.command("list-nodes")
+@handle_error
+def models_list_nodes():
+    """List all available node class names."""
+    result = models_mod.list_all_node_classes(_base_url)
+    output(result, f"Node classes ({len(result)}):")
+
+
+# ── Images Commands ─────────────────────────────────────────────
+@cli.group()
+def images():
+    """Image output management."""
+    pass
+
+
+@images.command("list")
+@click.option("--prompt-id", required=True, help="Prompt ID to list images for")
+@handle_error
+def images_list(prompt_id):
+    """List output images for a completed prompt."""
+    result = images_mod.list_output_images(_base_url, prompt_id)
+    output(result, f"Output images for {prompt_id}:")
+
+
+@images.command("download")
+@click.option("--filename", required=True, help="Image filename (e.g., ComfyUI_00001_.png)")
+@click.option("--output", "output_path", required=True,
+              type=click.Path(), help="Local path to save the image")
+@click.option("--subfolder", default="", help="Subfolder in ComfyUI output dir")
+@click.option("--type", "image_type", default="output",
+              type=click.Choice(["output", "input", "temp"]),
+              help="Image type")
+@click.option("--overwrite", is_flag=True, help="Overwrite existing file")
+@handle_error
+def images_download(filename, output_path, subfolder, image_type, overwrite):
+    """Download a single output image from ComfyUI."""
+    result = images_mod.download_image(
+        base_url=_base_url,
+        filename=filename,
+        output_path=output_path,
+        subfolder=subfolder,
+        image_type=image_type,
+        overwrite=overwrite,
+    )
+    output(result, f"Downloaded: {output_path}")
+
+
+@images.command("download-all")
+@click.option("--prompt-id", required=True, help="Prompt ID to download images for")
+@click.option("--output-dir", required=True,
+              type=click.Path(), help="Directory to save images into")
+@click.option("--overwrite", is_flag=True, help="Overwrite existing files")
+@handle_error
+def images_download_all(prompt_id, output_dir, overwrite):
+    """Download all output images for a prompt to a directory."""
+    result = images_mod.download_prompt_images(
+        base_url=_base_url,
+        prompt_id=prompt_id,
+        output_dir=output_dir,
+        overwrite=overwrite,
+    )
+    output(result, f"Downloaded {len(result)} image(s) to {output_dir}")
+
+
+# ── System Commands ─────────────────────────────────────────────
+@cli.group()
+def system():
+    """System information commands."""
+    pass
+
+
+@system.command("stats")
+@handle_error
+def system_stats():
+    """Show GPU/memory system stats."""
+    result = api_get(_base_url, "/system_stats")
+    output(result, "System stats:")
+
+
+@system.command("info")
+@handle_error
+def system_info():
+    """Show ComfyUI server information."""
+    result = api_get(_base_url, "/")
+    output(result, "Server info:")
+
+
+# ── REPL ────────────────────────────────────────────────────────
+@cli.command()
+@handle_error
+def repl():
+    """Start interactive REPL session."""
+    click.echo("ComfyUI CLI REPL — type 'help' for commands, 'quit' to exit")
+    click.echo(f"Server: {_base_url}")
+
+    try:
+        api_get(_base_url, "/system_stats")
+        click.echo("Connected to ComfyUI server.")
+    except Exception as e:
+        click.echo(f"Warning: Could not connect to ComfyUI: {e}", err=True)
+
+    repl_commands = {
+        "workflow": "list|load|validate",
+        "queue": "prompt|status|clear|history|interrupt",
+        "models": "checkpoints|loras|vaes|controlnets|node-info|list-nodes",
+        "images": "list|download|download-all",
+        "system": "stats|info",
+        "help": "Show this help",
+        "quit": "Exit REPL",
+    }
+
+    while True:
+        try:
+            line = click.prompt("comfyui", prompt_suffix="> ", default="", show_default=False)
+            line = line.strip()
+            if not line:
+                continue
+            if line.lower() in ("quit", "exit", "q"):
+                click.echo("Goodbye.")
+                break
+            if line.lower() == "help":
+                for cmd, subs in repl_commands.items():
+                    click.echo(f"  {cmd:<12} {subs}")
+                continue
+
+            try:
+                args = shlex.split(line)
+            except ValueError:
+                args = line.split()
+            try:
+                cli.main(args, standalone_mode=False)
+            except SystemExit:
+                pass
+            except click.exceptions.UsageError as e:
+                click.echo(f"Usage error: {e}", err=True)
+            except Exception as e:
+                click.echo(f"Error: {e}", err=True)
+
+        except (EOFError, KeyboardInterrupt):
+            click.echo("\nGoodbye.")
+            break
+
+
+# ── Entry Point ─────────────────────────────────────────────────
+def main():
+    cli()
+
+
+if __name__ == "__main__":
+    main()
diff --git a/comfyui/agent-harness/cli_anything/comfyui/core/images.py b/comfyui/agent-harness/cli_anything/comfyui/core/images.py
index 45d07939cc..3366eb9503 100644
--- a/comfyui/agent-harness/cli_anything/comfyui/core/images.py
+++ b/comfyui/agent-harness/cli_anything/comfyui/core/images.py
@@ -1,137 +1,137 @@
-"""Image output management — download and list generated images.
-
-Covers:
-- Listing output images from a prompt's history
-- Downloading images from ComfyUI's /view endpoint
-- Saving images to local disk
-"""
-
-from pathlib import Path
-
-from cli_anything.comfyui.utils.comfyui_backend import api_get_raw
-from cli_anything.comfyui.core.queue import get_prompt_history
-
-
-def list_output_images(base_url: str, prompt_id: str) -> list[dict]:
-    """List all output images for a completed prompt.
-
-    Args:
-        base_url: ComfyUI server base URL.
-        prompt_id: The prompt ID returned from queue_prompt().
-
-    Returns:
-        List of image dicts with 'filename', 'subfolder', 'type', and 'node_id'.
-
-    Raises:
-        RuntimeError: If the prompt is not found or not yet completed.
-    """
-    history = get_prompt_history(base_url, prompt_id)
-
-    outputs = history.get("outputs", [])
-    if not outputs:
-        status = history.get("status", "unknown")
-        if not history.get("completed", False):
-            raise RuntimeError(
-                f"Prompt {prompt_id} has not completed yet (status: {status}). "
-                "Wait for generation to finish before listing images."
-            )
-        return []
-
-    return outputs
-
-
-def download_image(
-    base_url: str,
-    filename: str,
-    output_path: str,
-    subfolder: str = "",
-    image_type: str = "output",
-    overwrite: bool = False,
-) -> dict:
-    """Download a single output image from ComfyUI.
-
-    Args:
-        base_url: ComfyUI server base URL.
-        filename: Image filename (e.g., 'ComfyUI_00001_.png').
-        output_path: Local path to save the image.
-        subfolder: Subfolder within ComfyUI's output directory (usually empty).
-        image_type: Image type — 'output', 'input', or 'temp'.
-        overwrite: If False, raise RuntimeError if output_path already exists.
-
-    Returns:
-        Dict with 'status', 'path', and 'size_bytes'.
-
-    Raises:
-        RuntimeError: If the file exists and overwrite is False, or download fails.
-    """
-    dest = Path(output_path)
-
-    if dest.exists() and not overwrite:
-        raise RuntimeError(
-            f"Output file already exists: {output_path}. "
-            "Use overwrite=True to replace it."
-        )
-
-    params = {
-        "filename": filename,
-        "type": image_type,
-    }
-    if subfolder:
-        params["subfolder"] = subfolder
-
-    image_bytes = api_get_raw(base_url, "/view", params=params)
-
-    dest.parent.mkdir(parents=True, exist_ok=True)
-    dest.write_bytes(image_bytes)
-
-    return {
-        "status": "downloaded",
-        "path": str(dest.resolve()),
-        "filename": filename,
-        "size_bytes": len(image_bytes),
-    }
-
-
-def download_prompt_images(
-    base_url: str,
-    prompt_id: str,
-    output_dir: str,
-    overwrite: bool = False,
-) -> list[dict]:
-    """Download all output images for a completed prompt to a directory.
-
-    Args:
-        base_url: ComfyUI server base URL.
-        prompt_id: The prompt ID returned from queue_prompt().
-        output_dir: Local directory to save images into.
-        overwrite: If True, overwrite existing files.
-
-    Returns:
-        List of result dicts from download_image(), one per image.
-
-    Raises:
-        RuntimeError: If the prompt is not found or no images are available.
-    """
-    images = list_output_images(base_url, prompt_id)
-
-    if not images:
-        raise RuntimeError(f"No output images found for prompt: {prompt_id}")
-
-    output_d = Path(output_dir)
-    output_d.mkdir(parents=True, exist_ok=True)
-
-    results = []
-    for img in images:
-        filename = img["filename"]
-        dest = str(output_d / filename)
-        result = download_image(
-            base_url=base_url,
-            filename=filename,
-            output_path=dest,
-            subfolder=img.get("subfolder", ""),
-            image_type=img.get("type", "output"),
-            overwrite=overwrite,
-        )
-        results.append(result)
-
-    return results
+"""Image output management — download and list generated images.
+
+Covers:
+- Listing output images from a prompt's history
+- Downloading images from ComfyUI's /view endpoint
+- Saving images to local disk
+"""
+
+from pathlib import Path
+
+from cli_anything.comfyui.utils.comfyui_backend import api_get_raw
+from cli_anything.comfyui.core.queue import get_prompt_history
+
+
+def list_output_images(base_url: str, prompt_id: str) -> list[dict]:
+    """List all output images for a completed prompt.
+
+    Args:
+        base_url: ComfyUI server base URL.
+        prompt_id: The prompt ID returned from queue_prompt().
+
+    Returns:
+        List of image dicts with 'filename', 'subfolder', 'type', and 'node_id'.
+
+    Raises:
+        RuntimeError: If the prompt is not found or not yet completed.
+    """
+    history = get_prompt_history(base_url, prompt_id)
+
+    outputs = history.get("outputs", [])
+    if not outputs:
+        status = history.get("status", "unknown")
+        if not history.get("completed", False):
+            raise RuntimeError(
+                f"Prompt {prompt_id} has not completed yet (status: {status}). "
+                "Wait for generation to finish before listing images."
+            )
+        return []
+
+    return outputs
+
+
+def download_image(
+    base_url: str,
+    filename: str,
+    output_path: str,
+    subfolder: str = "",
+    image_type: str = "output",
+    overwrite: bool = False,
+) -> dict:
+    """Download a single output image from ComfyUI.
+
+    Args:
+        base_url: ComfyUI server base URL.
+        filename: Image filename (e.g., 'ComfyUI_00001_.png').
+        output_path: Local path to save the image.
+        subfolder: Subfolder within ComfyUI's output directory (usually empty).
+        image_type: Image type — 'output', 'input', or 'temp'.
+        overwrite: If False, raise RuntimeError if output_path already exists.
+
+    Returns:
+        Dict with 'status', 'path', and 'size_bytes'.
+
+    Raises:
+        RuntimeError: If the file exists and overwrite is False, or download fails.
+    """
+    dest = Path(output_path)
+
+    if dest.exists() and not overwrite:
+        raise RuntimeError(
+            f"Output file already exists: {output_path}. "
+            "Use overwrite=True to replace it."
+        )
+
+    params = {
+        "filename": filename,
+        "type": image_type,
+    }
+    if subfolder:
+        params["subfolder"] = subfolder
+
+    image_bytes = api_get_raw(base_url, "/view", params=params)
+
+    dest.parent.mkdir(parents=True, exist_ok=True)
+    dest.write_bytes(image_bytes)
+
+    return {
+        "status": "downloaded",
+        "path": str(dest.resolve()),
+        "filename": filename,
+        "size_bytes": len(image_bytes),
+    }
+
+
+def download_prompt_images(
+    base_url: str,
+    prompt_id: str,
+    output_dir: str,
+    overwrite: bool = False,
+) -> list[dict]:
+    """Download all output images for a completed prompt to a directory.
+
+    Args:
+        base_url: ComfyUI server base URL.
+        prompt_id: The prompt ID returned from queue_prompt().
+        output_dir: Local directory to save images into.
+        overwrite: If True, overwrite existing files.
+
+    Returns:
+        List of result dicts from download_image(), one per image.
+
+    Raises:
+        RuntimeError: If the prompt is not found or no images are available.
+    """
+    images = list_output_images(base_url, prompt_id)
+
+    if not images:
+        raise RuntimeError(f"No output images found for prompt: {prompt_id}")
+
+    output_d = Path(output_dir)
+    output_d.mkdir(parents=True, exist_ok=True)
+
+    results = []
+    for img in images:
+        filename = img["filename"]
+        dest = str(output_d / filename)
+        result = download_image(
+            base_url=base_url,
+            filename=filename,
+            output_path=dest,
+            subfolder=img.get("subfolder", ""),
+            image_type=img.get("type", "output"),
+            overwrite=overwrite,
+        )
+        results.append(result)
+
+    return results
diff --git a/comfyui/agent-harness/cli_anything/comfyui/core/models.py b/comfyui/agent-harness/cli_anything/comfyui/core/models.py
index 318a41f733..43c3a902f9 100644
--- a/comfyui/agent-harness/cli_anything/comfyui/core/models.py
+++ b/comfyui/agent-harness/cli_anything/comfyui/core/models.py
@@ -1,169 +1,169 @@
-"""Model discovery — list checkpoints, LoRAs, VAEs, and ControlNet models.
-
-Uses ComfyUI's /object_info endpoint to enumerate available models.
-No file system access is required — all model lists come from the running server.
-"""
-
-from cli_anything.comfyui.utils.comfyui_backend import api_get
-
-
-def list_checkpoints(base_url: str) -> list[str]:
-    """List all available checkpoint models.
-
-    Queries CheckpointLoaderSimple to find installed checkpoint files.
-
-    Args:
-        base_url: ComfyUI server base URL.
-
-    Returns:
-        Sorted list of checkpoint filenames/paths.
-
-    Raises:
-        RuntimeError: If the server is unreachable or returns unexpected data.
-    """
-    result = api_get(base_url, "/object_info/CheckpointLoaderSimple")
-
-    try:
-        ckpt_input = result["CheckpointLoaderSimple"]["input"]["required"]["ckpt_name"]
-        models = ckpt_input[0]
-        if not isinstance(models, list):
-            raise ValueError("Expected list of checkpoint names")
-    except (KeyError, IndexError, TypeError) as e:
-        raise RuntimeError(
-            f"Could not parse checkpoint list from ComfyUI response: {e}"
-        ) from e
-
-    return sorted(models)
-
-
-def list_loras(base_url: str) -> list[str]:
-    """List all available LoRA models.
-
-    Queries LoraLoader to find installed LoRA files.
-
-    Args:
-        base_url: ComfyUI server base URL.
-
-    Returns:
-        Sorted list of LoRA filenames/paths.
-
-    Raises:
-        RuntimeError: If the server is unreachable or returns unexpected data.
-    """
-    result = api_get(base_url, "/object_info/LoraLoader")
-
-    try:
-        lora_input = result["LoraLoader"]["input"]["required"]["lora_name"]
-        models = lora_input[0]
-        if not isinstance(models, list):
-            raise ValueError("Expected list of LoRA names")
-    except (KeyError, IndexError, TypeError) as e:
-        raise RuntimeError(
-            f"Could not parse LoRA list from ComfyUI response: {e}"
-        ) from e
-
-    return sorted(models)
-
-
-def list_vaes(base_url: str) -> list[str]:
-    """List all available VAE models.
-
-    Queries VAELoader to find installed VAE files.
-
-    Args:
-        base_url: ComfyUI server base URL.
-
-    Returns:
-        Sorted list of VAE filenames/paths.
-
-    Raises:
-        RuntimeError: If the server is unreachable or returns unexpected data.
-    """
-    result = api_get(base_url, "/object_info/VAELoader")
-
-    try:
-        vae_input = result["VAELoader"]["input"]["required"]["vae_name"]
-        models = vae_input[0]
-        if not isinstance(models, list):
-            raise ValueError("Expected list of VAE names")
-    except (KeyError, IndexError, TypeError) as e:
-        raise RuntimeError(
-            f"Could not parse VAE list from ComfyUI response: {e}"
-        ) from e
-
-    return sorted(models)
-
-
-def list_controlnets(base_url: str) -> list[str]:
-    """List all available ControlNet models.
-
-    Queries ControlNetLoader to find installed ControlNet files.
-
-    Args:
-        base_url: ComfyUI server base URL.
-
-    Returns:
-        Sorted list of ControlNet filenames/paths. Empty list if none installed.
-
-    Raises:
-        RuntimeError: If the server is unreachable or returns unexpected data.
-    """
-    result = api_get(base_url, "/object_info/ControlNetLoader")
-
-    try:
-        cn_input = result["ControlNetLoader"]["input"]["required"]["control_net_name"]
-        models = cn_input[0]
-        if not isinstance(models, list):
-            raise ValueError("Expected list of ControlNet names")
-    except (KeyError, IndexError, TypeError) as e:
-        raise RuntimeError(
-            f"Could not parse ControlNet list from ComfyUI response: {e}"
-        ) from e
-
-    return sorted(models)
-
-
-def get_node_info(base_url: str, node_class: str) -> dict:
-    """Get detailed input/output info for a specific node class.
-
-    Args:
-        base_url: ComfyUI server base URL.
-        node_class: ComfyUI node class name (e.g., 'KSampler', 'CLIPTextEncode').
-
-    Returns:
-        Dict with node input/output schema.
-
-    Raises:
-        RuntimeError: If the node class is not found.
-    """
-    result = api_get(base_url, f"/object_info/{node_class}")
-
-    if node_class not in result:
-        raise RuntimeError(
-            f"Node class '{node_class}' not found. "
-            "Check spelling or use 'models list-nodes' to see all classes."
-        )
-
-    node = result[node_class]
-    return {
-        "class_type": node_class,
-        "display_name": node.get("display_name", node_class),
-        "description": node.get("description", ""),
-        "category": node.get("category", ""),
-        "input": node.get("input", {}),
-        "output": node.get("output", []),
-        "output_name": node.get("output_name", []),
-    }
-
-
-def list_all_node_classes(base_url: str) -> list[str]:
-    """List all available node class names.
-
-    Args:
-        base_url: ComfyUI server base URL.
-
-    Returns:
-        Sorted list of all node class names.
-    """
-    result = api_get(base_url, "/object_info")
-    return sorted(result.keys())
+"""Model discovery — list checkpoints, LoRAs, VAEs, and ControlNet models.
+
+Uses ComfyUI's /object_info endpoint to enumerate available models.
+No file system access is required — all model lists come from the running server.
+"""
+
+from cli_anything.comfyui.utils.comfyui_backend import api_get
+
+
+def list_checkpoints(base_url: str) -> list[str]:
+    """List all available checkpoint models.
+
+    Queries CheckpointLoaderSimple to find installed checkpoint files.
+
+    Args:
+        base_url: ComfyUI server base URL.
+
+    Returns:
+        Sorted list of checkpoint filenames/paths.
+
+    Raises:
+        RuntimeError: If the server is unreachable or returns unexpected data.
+    """
+    result = api_get(base_url, "/object_info/CheckpointLoaderSimple")
+
+    try:
+        ckpt_input = result["CheckpointLoaderSimple"]["input"]["required"]["ckpt_name"]
+        models = ckpt_input[0]
+        if not isinstance(models, list):
+            raise ValueError("Expected list of checkpoint names")
+    except (KeyError, IndexError, TypeError) as e:
+        raise RuntimeError(
+            f"Could not parse checkpoint list from ComfyUI response: {e}"
+        ) from e
+
+    return sorted(models)
+
+
+def list_loras(base_url: str) -> list[str]:
+    """List all available LoRA models.
+
+    Queries LoraLoader to find installed LoRA files.
+
+    Args:
+        base_url: ComfyUI server base URL.
+
+    Returns:
+        Sorted list of LoRA filenames/paths.
+
+    Raises:
+        RuntimeError: If the server is unreachable or returns unexpected data.
+    """
+    result = api_get(base_url, "/object_info/LoraLoader")
+
+    try:
+        lora_input = result["LoraLoader"]["input"]["required"]["lora_name"]
+        models = lora_input[0]
+        if not isinstance(models, list):
+            raise ValueError("Expected list of LoRA names")
+    except (KeyError, IndexError, TypeError) as e:
+        raise RuntimeError(
+            f"Could not parse LoRA list from ComfyUI response: {e}"
+        ) from e
+
+    return sorted(models)
+
+
+def list_vaes(base_url: str) -> list[str]:
+    """List all available VAE models.
+
+    Queries VAELoader to find installed VAE files.
+
+    Args:
+        base_url: ComfyUI server base URL.
+
+    Returns:
+        Sorted list of VAE filenames/paths.
+
+    Raises:
+        RuntimeError: If the server is unreachable or returns unexpected data.
+    """
+    result = api_get(base_url, "/object_info/VAELoader")
+
+    try:
+        vae_input = result["VAELoader"]["input"]["required"]["vae_name"]
+        models = vae_input[0]
+        if not isinstance(models, list):
+            raise ValueError("Expected list of VAE names")
+    except (KeyError, IndexError, TypeError) as e:
+        raise RuntimeError(
+            f"Could not parse VAE list from ComfyUI response: {e}"
+        ) from e
+
+    return sorted(models)
+
+
+def list_controlnets(base_url: str) -> list[str]:
+    """List all available ControlNet models.
+
+    Queries ControlNetLoader to find installed ControlNet files.
+
+    Args:
+        base_url: ComfyUI server base URL.
+
+    Returns:
+        Sorted list of ControlNet filenames/paths. Empty list if none installed.
+
+    Raises:
+        RuntimeError: If the server is unreachable or returns unexpected data.
+    """
+    result = api_get(base_url, "/object_info/ControlNetLoader")
+
+    try:
+        cn_input = result["ControlNetLoader"]["input"]["required"]["control_net_name"]
+        models = cn_input[0]
+        if not isinstance(models, list):
+            raise ValueError("Expected list of ControlNet names")
+    except (KeyError, IndexError, TypeError) as e:
+        raise RuntimeError(
+            f"Could not parse ControlNet list from ComfyUI response: {e}"
+        ) from e
+
+    return sorted(models)
+
+
+def get_node_info(base_url: str, node_class: str) -> dict:
+    """Get detailed input/output info for a specific node class.
+
+    Args:
+        base_url: ComfyUI server base URL.
+        node_class: ComfyUI node class name (e.g., 'KSampler', 'CLIPTextEncode').
+
+    Returns:
+        Dict with node input/output schema.
+
+    Raises:
+        RuntimeError: If the node class is not found.
+    """
+    result = api_get(base_url, f"/object_info/{node_class}")
+
+    if node_class not in result:
+        raise RuntimeError(
+            f"Node class '{node_class}' not found. "
+            "Check spelling or use 'models list-nodes' to see all classes."
+        )
+
+    node = result[node_class]
+    return {
+        "class_type": node_class,
+        "display_name": node.get("display_name", node_class),
+        "description": node.get("description", ""),
+        "category": node.get("category", ""),
+        "input": node.get("input", {}),
+        "output": node.get("output", []),
+        "output_name": node.get("output_name", []),
+    }
+
+
+def list_all_node_classes(base_url: str) -> list[str]:
+    """List all available node class names.
+
+    Args:
+        base_url: ComfyUI server base URL.
+
+    Returns:
+        Sorted list of all node class names.
+    """
+    result = api_get(base_url, "/object_info")
+    return sorted(result.keys())
diff --git a/comfyui/agent-harness/cli_anything/comfyui/core/queue.py b/comfyui/agent-harness/cli_anything/comfyui/core/queue.py
index 9fe6b80e62..0507109b0b 100644
--- a/comfyui/agent-harness/cli_anything/comfyui/core/queue.py
+++ b/comfyui/agent-harness/cli_anything/comfyui/core/queue.py
@@ -1,194 +1,194 @@
-"""Queue management — submit prompts, check status, clear queue, get history.
-
-Covers the ComfyUI prompt queue lifecycle:
-- POST /prompt — submit a workflow for generation
-- GET /queue — inspect pending and running items
-- DELETE /queue — clear the queue
-- GET /history — completed prompt history
-- POST /interrupt — stop the current generation
-"""
-
-import uuid
-
-from cli_anything.comfyui.utils.comfyui_backend import api_get, api_post, api_delete
-
-
-def queue_prompt(
-    base_url: str,
-    workflow: dict,
-    client_id: str | None = None,
-) -> dict:
-    """Submit a workflow to the ComfyUI generation queue.
-
-    Args:
-        base_url: ComfyUI server base URL.
-        workflow: Workflow node graph dict (API format).
-        client_id: Optional client identifier for tracking. Auto-generated if None.
-
-    Returns:
-        Dict with 'prompt_id', 'number' (queue position), and 'node_errors'.
-
-    Raises:
-        RuntimeError: If the workflow is empty or the server rejects it.
-    """
-    if not workflow:
-        raise RuntimeError("Cannot queue an empty workflow.")
-
-    if client_id is None:
-        client_id = str(uuid.uuid4())
-
-    body = {
-        "prompt": workflow,
-        "client_id": client_id,
-    }
-
-    result = api_post(base_url, "/prompt", body)
-
-    if "error" in result:
-        detail = result.get("error", {})
-        msg = detail.get("message", str(detail)) if isinstance(detail, dict) else str(detail)
-        raise RuntimeError(f"ComfyUI rejected the workflow: {msg}")
-
-    return {
-        "prompt_id": result.get("prompt_id", ""),
-        "number": result.get("number", 0),
-        "node_errors": result.get("node_errors", {}),
-        "client_id": client_id,
-    }
-
-
-def get_queue_status(base_url: str) -> dict:
-    """Get the current queue status (pending and running items).
-
-    Args:
-        base_url: ComfyUI server base URL.
-
-    Returns:
-        Dict with 'queue_running' (list) and 'queue_pending' (list),
-        plus 'running_count' and 'pending_count' summaries.
-    """
-    result = api_get(base_url, "/queue")
-
-    running = result.get("queue_running", [])
-    pending = result.get("queue_pending", [])
-
-    return {
-        "queue_running": running,
-        "queue_pending": pending,
-        "running_count": len(running),
-        "pending_count": len(pending),
-    }
-
-
-def clear_queue(base_url: str) -> dict:
-    """Clear all pending items from the queue.
-
-    Note: This does not stop the currently running generation.
-    Use interrupt() to stop the active generation.
-
-    Args:
-        base_url: ComfyUI server base URL.
-
-    Returns:
-        Dict with status confirmation.
-    """
-    api_delete(base_url, "/queue", data={"clear": True})
-    return {"status": "cleared"}
-
-
-def get_history(base_url: str, max_items: int | None = None) -> dict:
-    """Get the history of completed prompts.
-
-    Args:
-        base_url: ComfyUI server base URL.
-        max_items: Maximum number of history entries to return (most recent first).
-            None returns all available history.
-
-    Returns:
-        Dict mapping prompt_id to output info, plus a 'total' count.
-    """
-    params = {}
-    if max_items is not None:
-        params["max_items"] = max_items
-
-    result = api_get(base_url, "/history", params=params if params else None)
-
-    formatted = {}
-    for prompt_id, entry in result.items():
-        outputs = entry.get("outputs", {})
-        status = entry.get("status", {})
-        formatted[prompt_id] = {
-            "prompt_id": prompt_id,
-            "status": status.get("status_str", "unknown"),
-            "completed": status.get("completed", False),
-            "outputs": _format_outputs(outputs),
-        }
-
-    return {
-        "history": formatted,
-        "total": len(formatted),
-    }
-
-
-def get_prompt_history(base_url: str, prompt_id: str) -> dict:
-    """Get the history and output files for a specific prompt.
-
-    Args:
-        base_url: ComfyUI server base URL.
-        prompt_id: The prompt ID returned from queue_prompt().
-
-    Returns:
-        Dict with prompt status, outputs, and image file references.
-
-    Raises:
-        RuntimeError: If the prompt ID is not found in history.
-    """
-    result = api_get(base_url, f"/history/{prompt_id}")
-
-    if not result:
-        raise RuntimeError(f"Prompt ID not found in history: {prompt_id}")
-
-    entry = result.get(prompt_id, result)
-    outputs = entry.get("outputs", {})
-    status = entry.get("status", {})
-
-    return {
-        "prompt_id": prompt_id,
-        "status": status.get("status_str", "unknown"),
-        "completed": status.get("completed", False),
-        "outputs": _format_outputs(outputs),
-    }
-
-
-def interrupt(base_url: str) -> dict:
-    """Interrupt (stop) the currently running generation.
-
-    Args:
-        base_url: ComfyUI server base URL.
-
-    Returns:
-        Dict with status confirmation.
-    """
-    api_post(base_url, "/interrupt")
-    return {"status": "interrupted"}
-
-
-def _format_outputs(outputs: dict) -> list[dict]:
-    """Extract image file references from prompt outputs.
-
-    Args:
-        outputs: Raw outputs dict from ComfyUI history response.
-
-    Returns:
-        List of image file dicts with filename, subfolder, and type.
- """ - images = [] - for node_id, node_output in outputs.items(): - for img in node_output.get("images", []): - images.append({ - "node_id": node_id, - "filename": img.get("filename", ""), - "subfolder": img.get("subfolder", ""), - "type": img.get("type", "output"), - }) - return images +"""Queue management โ€” submit prompts, check status, clear queue, get history. + +Covers the ComfyUI prompt queue lifecycle: +- POST /prompt โ€” submit a workflow for generation +- GET /queue โ€” inspect pending and running items +- DELETE /queue โ€” clear the queue +- GET /history โ€” completed prompt history +- POST /interrupt โ€” stop the current generation +""" + +import uuid + +from cli_anything.comfyui.utils.comfyui_backend import api_get, api_post, api_delete + + +def queue_prompt( + base_url: str, + workflow: dict, + client_id: str | None = None, +) -> dict: + """Submit a workflow to the ComfyUI generation queue. + + Args: + base_url: ComfyUI server base URL. + workflow: Workflow node graph dict (API format). + client_id: Optional client identifier for tracking. Auto-generated if None. + + Returns: + Dict with 'prompt_id', 'number' (queue position), and 'node_errors'. + + Raises: + RuntimeError: If the workflow is empty or the server rejects it. 
+ """ + if not workflow: + raise RuntimeError("Cannot queue an empty workflow.") + + if client_id is None: + client_id = str(uuid.uuid4()) + + body = { + "prompt": workflow, + "client_id": client_id, + } + + result = api_post(base_url, "/prompt", body) + + if "error" in result: + detail = result.get("error", {}) + msg = detail.get("message", str(detail)) if isinstance(detail, dict) else str(detail) + raise RuntimeError(f"ComfyUI rejected the workflow: {msg}") + + return { + "prompt_id": result.get("prompt_id", ""), + "number": result.get("number", 0), + "node_errors": result.get("node_errors", {}), + "client_id": client_id, + } + + +def get_queue_status(base_url: str) -> dict: + """Get the current queue status (pending and running items). + + Args: + base_url: ComfyUI server base URL. + + Returns: + Dict with 'queue_running' (list) and 'queue_pending' (list), + plus 'running_count' and 'pending_count' summaries. + """ + result = api_get(base_url, "/queue") + + running = result.get("queue_running", []) + pending = result.get("queue_pending", []) + + return { + "queue_running": running, + "queue_pending": pending, + "running_count": len(running), + "pending_count": len(pending), + } + + +def clear_queue(base_url: str) -> dict: + """Clear all pending items from the queue. + + Note: This does not stop the currently running generation. + Use interrupt() to stop the active generation. + + Args: + base_url: ComfyUI server base URL. + + Returns: + Dict with status confirmation. + """ + api_delete(base_url, "/queue", data={"clear": True}) + return {"status": "cleared"} + + +def get_history(base_url: str, max_items: int | None = None) -> dict: + """Get the history of completed prompts. + + Args: + base_url: ComfyUI server base URL. + max_items: Maximum number of history entries to return (most recent first). + None returns all available history. + + Returns: + Dict mapping prompt_id to output info, plus a 'total' count. 
+ """ + params = {} + if max_items is not None: + params["max_items"] = max_items + + result = api_get(base_url, "/history", params=params if params else None) + + formatted = {} + for prompt_id, entry in result.items(): + outputs = entry.get("outputs", {}) + status = entry.get("status", {}) + formatted[prompt_id] = { + "prompt_id": prompt_id, + "status": status.get("status_str", "unknown"), + "completed": status.get("completed", False), + "outputs": _format_outputs(outputs), + } + + return { + "history": formatted, + "total": len(formatted), + } + + +def get_prompt_history(base_url: str, prompt_id: str) -> dict: + """Get the history and output files for a specific prompt. + + Args: + base_url: ComfyUI server base URL. + prompt_id: The prompt ID returned from queue_prompt(). + + Returns: + Dict with prompt status, outputs, and image file references. + + Raises: + RuntimeError: If the prompt ID is not found in history. + """ + result = api_get(base_url, f"/history/{prompt_id}") + + if not result: + raise RuntimeError(f"Prompt ID not found in history: {prompt_id}") + + entry = result.get(prompt_id, result) + outputs = entry.get("outputs", {}) + status = entry.get("status", {}) + + return { + "prompt_id": prompt_id, + "status": status.get("status_str", "unknown"), + "completed": status.get("completed", False), + "outputs": _format_outputs(outputs), + } + + +def interrupt(base_url: str) -> dict: + """Interrupt (stop) the currently running generation. + + Args: + base_url: ComfyUI server base URL. + + Returns: + Dict with status confirmation. + """ + api_post(base_url, "/interrupt") + return {"status": "interrupted"} + + +def _format_outputs(outputs: dict) -> list[dict]: + """Extract image file references from prompt outputs. + + Args: + outputs: Raw outputs dict from ComfyUI history response. + + Returns: + List of image file dicts with filename, subfolder, and type. 
+ """ + images = [] + for node_id, node_output in outputs.items(): + for img in node_output.get("images", []): + images.append({ + "node_id": node_id, + "filename": img.get("filename", ""), + "subfolder": img.get("subfolder", ""), + "type": img.get("type", "output"), + }) + return images diff --git a/comfyui/agent-harness/cli_anything/comfyui/core/workflows.py b/comfyui/agent-harness/cli_anything/comfyui/core/workflows.py index 236633b362..9710594d44 100644 --- a/comfyui/agent-harness/cli_anything/comfyui/core/workflows.py +++ b/comfyui/agent-harness/cli_anything/comfyui/core/workflows.py @@ -1,158 +1,158 @@ -"""Workflow management โ€” load, save, list, and validate ComfyUI workflow JSON files. - -ComfyUI workflows are node graphs stored as JSON. Each node has a class_type -and an inputs dict. This module handles the file-level operations for workflows. -""" - -import json -from pathlib import Path - - -def load_workflow(path: str) -> dict: - """Load a ComfyUI workflow from a JSON file. - - Args: - path: Path to the workflow JSON file. - - Returns: - Workflow dict (node graph). - - Raises: - RuntimeError: If the file does not exist or is not valid JSON. - """ - p = Path(path) - if not p.exists(): - raise RuntimeError(f"Workflow file not found: {path}") - if not p.suffix.lower() == ".json": - raise RuntimeError(f"Workflow file must be a .json file, got: {path}") - try: - with open(p, "r", encoding="utf-8") as f: - data = json.load(f) - except json.JSONDecodeError as e: - raise RuntimeError(f"Invalid JSON in workflow file {path}: {e}") from e - - if not isinstance(data, dict): - raise RuntimeError( - f"Workflow file must contain a JSON object (node graph), got: {type(data).__name__}" - ) - - return data - - -def save_workflow(workflow: dict, path: str) -> dict: - """Save a ComfyUI workflow to a JSON file. - - Args: - workflow: Workflow dict (node graph). - path: Destination path for the JSON file. - - Returns: - Dict with status and saved path. 
- - Raises: - RuntimeError: If the workflow is not a dict or write fails. - """ - if not isinstance(workflow, dict): - raise RuntimeError( - f"Workflow must be a dict, got: {type(workflow).__name__}" - ) - - p = Path(path) - p.parent.mkdir(parents=True, exist_ok=True) - - try: - with open(p, "w", encoding="utf-8") as f: - json.dump(workflow, f, indent=2) - except OSError as e: - raise RuntimeError(f"Failed to write workflow to {path}: {e}") from e - - return {"status": "saved", "path": str(p.resolve()), "node_count": len(workflow)} - - -def list_workflows(directory: str) -> list[dict]: - """List all workflow JSON files in a directory. - - Args: - directory: Directory to search for workflow files. - - Returns: - List of dicts with filename, path, and node_count for each workflow. - - Raises: - RuntimeError: If the directory does not exist. - """ - d = Path(directory) - if not d.exists(): - raise RuntimeError(f"Workflow directory not found: {directory}") - if not d.is_dir(): - raise RuntimeError(f"Not a directory: {directory}") - - results = [] - for p in sorted(d.glob("*.json")): - try: - with open(p, "r", encoding="utf-8") as f: - data = json.load(f) - node_count = len(data) if isinstance(data, dict) else 0 - valid = isinstance(data, dict) - except Exception: - node_count = 0 - valid = False - - results.append({ - "filename": p.name, - "path": str(p.resolve()), - "node_count": node_count, - "valid": valid, - }) - - return results - - -def validate_workflow(workflow: dict) -> dict: - """Validate a workflow's structure. - - Checks that the workflow is a dict of nodes, and each node has - a 'class_type' and 'inputs' field. - - Args: - workflow: Workflow dict to validate. - - Returns: - Dict with 'valid' bool, 'node_count', 'errors' list, and 'warnings' list. 
- """ - errors = [] - warnings = [] - - if not isinstance(workflow, dict): - return { - "valid": False, - "node_count": 0, - "errors": [f"Workflow must be a dict, got: {type(workflow).__name__}"], - "warnings": [], - } - - if len(workflow) == 0: - warnings.append("Workflow is empty (no nodes)") - - for node_id, node in workflow.items(): - if not isinstance(node, dict): - errors.append(f"Node '{node_id}': must be a dict, got {type(node).__name__}") - continue - - if "class_type" not in node: - errors.append(f"Node '{node_id}': missing 'class_type' field") - - if "inputs" not in node: - warnings.append(f"Node '{node_id}': missing 'inputs' field") - elif not isinstance(node["inputs"], dict): - errors.append( - f"Node '{node_id}': 'inputs' must be a dict, " - f"got {type(node['inputs']).__name__}" - ) - - return { - "valid": len(errors) == 0, - "node_count": len(workflow), - "errors": errors, - "warnings": warnings, - } +"""Workflow management โ€” load, save, list, and validate ComfyUI workflow JSON files. + +ComfyUI workflows are node graphs stored as JSON. Each node has a class_type +and an inputs dict. This module handles the file-level operations for workflows. +""" + +import json +from pathlib import Path + + +def load_workflow(path: str) -> dict: + """Load a ComfyUI workflow from a JSON file. + + Args: + path: Path to the workflow JSON file. + + Returns: + Workflow dict (node graph). + + Raises: + RuntimeError: If the file does not exist or is not valid JSON. 
+ """ + p = Path(path) + if not p.exists(): + raise RuntimeError(f"Workflow file not found: {path}") + if not p.suffix.lower() == ".json": + raise RuntimeError(f"Workflow file must be a .json file, got: {path}") + try: + with open(p, "r", encoding="utf-8") as f: + data = json.load(f) + except json.JSONDecodeError as e: + raise RuntimeError(f"Invalid JSON in workflow file {path}: {e}") from e + + if not isinstance(data, dict): + raise RuntimeError( + f"Workflow file must contain a JSON object (node graph), got: {type(data).__name__}" + ) + + return data + + +def save_workflow(workflow: dict, path: str) -> dict: + """Save a ComfyUI workflow to a JSON file. + + Args: + workflow: Workflow dict (node graph). + path: Destination path for the JSON file. + + Returns: + Dict with status and saved path. + + Raises: + RuntimeError: If the workflow is not a dict or write fails. + """ + if not isinstance(workflow, dict): + raise RuntimeError( + f"Workflow must be a dict, got: {type(workflow).__name__}" + ) + + p = Path(path) + p.parent.mkdir(parents=True, exist_ok=True) + + try: + with open(p, "w", encoding="utf-8") as f: + json.dump(workflow, f, indent=2) + except OSError as e: + raise RuntimeError(f"Failed to write workflow to {path}: {e}") from e + + return {"status": "saved", "path": str(p.resolve()), "node_count": len(workflow)} + + +def list_workflows(directory: str) -> list[dict]: + """List all workflow JSON files in a directory. + + Args: + directory: Directory to search for workflow files. + + Returns: + List of dicts with filename, path, and node_count for each workflow. + + Raises: + RuntimeError: If the directory does not exist. 
+ """ + d = Path(directory) + if not d.exists(): + raise RuntimeError(f"Workflow directory not found: {directory}") + if not d.is_dir(): + raise RuntimeError(f"Not a directory: {directory}") + + results = [] + for p in sorted(d.glob("*.json")): + try: + with open(p, "r", encoding="utf-8") as f: + data = json.load(f) + node_count = len(data) if isinstance(data, dict) else 0 + valid = isinstance(data, dict) + except Exception: + node_count = 0 + valid = False + + results.append({ + "filename": p.name, + "path": str(p.resolve()), + "node_count": node_count, + "valid": valid, + }) + + return results + + +def validate_workflow(workflow: dict) -> dict: + """Validate a workflow's structure. + + Checks that the workflow is a dict of nodes, and each node has + a 'class_type' and 'inputs' field. + + Args: + workflow: Workflow dict to validate. + + Returns: + Dict with 'valid' bool, 'node_count', 'errors' list, and 'warnings' list. + """ + errors = [] + warnings = [] + + if not isinstance(workflow, dict): + return { + "valid": False, + "node_count": 0, + "errors": [f"Workflow must be a dict, got: {type(workflow).__name__}"], + "warnings": [], + } + + if len(workflow) == 0: + warnings.append("Workflow is empty (no nodes)") + + for node_id, node in workflow.items(): + if not isinstance(node, dict): + errors.append(f"Node '{node_id}': must be a dict, got {type(node).__name__}") + continue + + if "class_type" not in node: + errors.append(f"Node '{node_id}': missing 'class_type' field") + + if "inputs" not in node: + warnings.append(f"Node '{node_id}': missing 'inputs' field") + elif not isinstance(node["inputs"], dict): + errors.append( + f"Node '{node_id}': 'inputs' must be a dict, " + f"got {type(node['inputs']).__name__}" + ) + + return { + "valid": len(errors) == 0, + "node_count": len(workflow), + "errors": errors, + "warnings": warnings, + } diff --git a/comfyui/agent-harness/cli_anything/comfyui/tests/TEST.md b/comfyui/agent-harness/cli_anything/comfyui/tests/TEST.md index 
9c803ef2be..4726c01d8d 100644 --- a/comfyui/agent-harness/cli_anything/comfyui/tests/TEST.md +++ b/comfyui/agent-harness/cli_anything/comfyui/tests/TEST.md @@ -1,66 +1,66 @@ -# ComfyUI Harness Test Guide - -## Requirements - -No ComfyUI installation required โ€” all tests use mocked HTTP responses. - -```bash -pip install pytest pytest-cov -# or install the harness with dev extras: -pip install -e ".[dev]" -``` - -## Running Tests - -```bash -# From the agent-harness directory: -python -m pytest cli_anything/comfyui/tests/ -v - -# Unit tests only: -python -m pytest cli_anything/comfyui/tests/test_core.py -v - -# E2E simulation tests: -python -m pytest cli_anything/comfyui/tests/test_full_e2e.py -v - -# With coverage: -python -m pytest cli_anything/comfyui/tests/ --cov=cli_anything.comfyui --cov-report=term-missing -``` - -## Test Structure - -| File | Coverage | -|---|---| -| `test_core.py` | Unit tests for all core modules + backend + CLI commands | -| `test_full_e2e.py` | Simulated end-to-end generation workflows | - -## What Is Tested - -- **Workflow:** load, save, list, validate (valid + invalid cases) -- **Queue:** submit prompt, check status, clear, history, interrupt -- **Models:** checkpoints, LoRAs, VAEs, ControlNets, node info, all node classes -- **Images:** list outputs, download single image, download all for prompt -- **Backend:** GET/POST/DELETE/raw byte wrappers, connection errors, timeouts -- **CLI:** all command groups in both human and `--json` output modes -- **Errors:** connection refused, server rejects workflow, file not found, overwrite protection - -## Mock Patterns - -All tests use `unittest.mock.patch` to intercept HTTP calls at the backend layer: - -```python -from unittest.mock import patch - -with patch("cli_anything.comfyui.core.queue.api_post", return_value={"prompt_id": "abc"}): - result = queue_mod.queue_prompt("http://localhost:8188", workflow) -``` - -For CLI tests, use Click's `CliRunner`: - -```python -from click.testing import 
CliRunner -from cli_anything.comfyui.comfyui_cli import cli - -runner = CliRunner() -result = runner.invoke(cli, ["--json", "queue", "status"]) -assert result.exit_code == 0 -``` +# ComfyUI Harness Test Guide + +## Requirements + +No ComfyUI installation required โ€” all tests use mocked HTTP responses. + +```bash +pip install pytest pytest-cov +# or install the harness with dev extras: +pip install -e ".[dev]" +``` + +## Running Tests + +```bash +# From the agent-harness directory: +python -m pytest cli_anything/comfyui/tests/ -v + +# Unit tests only: +python -m pytest cli_anything/comfyui/tests/test_core.py -v + +# E2E simulation tests: +python -m pytest cli_anything/comfyui/tests/test_full_e2e.py -v + +# With coverage: +python -m pytest cli_anything/comfyui/tests/ --cov=cli_anything.comfyui --cov-report=term-missing +``` + +## Test Structure + +| File | Coverage | +|---|---| +| `test_core.py` | Unit tests for all core modules + backend + CLI commands | +| `test_full_e2e.py` | Simulated end-to-end generation workflows | + +## What Is Tested + +- **Workflow:** load, save, list, validate (valid + invalid cases) +- **Queue:** submit prompt, check status, clear, history, interrupt +- **Models:** checkpoints, LoRAs, VAEs, ControlNets, node info, all node classes +- **Images:** list outputs, download single image, download all for prompt +- **Backend:** GET/POST/DELETE/raw byte wrappers, connection errors, timeouts +- **CLI:** all command groups in both human and `--json` output modes +- **Errors:** connection refused, server rejects workflow, file not found, overwrite protection + +## Mock Patterns + +All tests use `unittest.mock.patch` to intercept HTTP calls at the backend layer: + +```python +from unittest.mock import patch + +with patch("cli_anything.comfyui.core.queue.api_post", return_value={"prompt_id": "abc"}): + result = queue_mod.queue_prompt("http://localhost:8188", workflow) +``` + +For CLI tests, use Click's `CliRunner`: + +```python +from click.testing 
import CliRunner +from cli_anything.comfyui.comfyui_cli import cli + +runner = CliRunner() +result = runner.invoke(cli, ["--json", "queue", "status"]) +assert result.exit_code == 0 +``` diff --git a/comfyui/agent-harness/cli_anything/comfyui/tests/test_core.py b/comfyui/agent-harness/cli_anything/comfyui/tests/test_core.py index 3f55565c96..ced84127aa 100644 --- a/comfyui/agent-harness/cli_anything/comfyui/tests/test_core.py +++ b/comfyui/agent-harness/cli_anything/comfyui/tests/test_core.py @@ -1,789 +1,789 @@ -"""Unit tests for ComfyUI CLI harness โ€” no ComfyUI installation required. - -Tests cover: -- Workflow load/save/list/validate -- Queue operations (prompt, status, clear, history, interrupt) -- Model listing (checkpoints, LoRAs, VAEs, ControlNets, node info) -- Image listing and downloading -- CLI command parsing and output -- Error handling and edge cases - -Run with: - python -m pytest comfyui/tests/test_core.py - python -m pytest comfyui/tests/test_core.py -v -""" - -import json -from pathlib import Path -from unittest.mock import patch, MagicMock -import pytest -from click.testing import CliRunner - -from cli_anything.comfyui.comfyui_cli import cli -from cli_anything.comfyui.core import workflows as workflow_mod -from cli_anything.comfyui.core import queue as queue_mod -from cli_anything.comfyui.core import models as models_mod -from cli_anything.comfyui.core import images as images_mod - - -# โ”€โ”€ Fixtures โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ - -@pytest.fixture -def runner(): - """Click CLI test runner.""" - return CliRunner() - - -@pytest.fixture -def sample_workflow(): - """Minimal valid ComfyUI workflow (API format).""" - return { - "4": { - "class_type": "CheckpointLoaderSimple", - "inputs": {"ckpt_name": "v1-5-pruned-emaonly.ckpt"} - }, - "6": { - "class_type": "CLIPTextEncode", - "inputs": {"text": "a photo of a cat", "clip": 
["4", 1]} - }, - "7": { - "class_type": "CLIPTextEncode", - "inputs": {"text": "bad quality", "clip": ["4", 1]} - }, - "5": { - "class_type": "EmptyLatentImage", - "inputs": {"batch_size": 1, "height": 512, "width": 512} - }, - "3": { - "class_type": "KSampler", - "inputs": { - "cfg": 7, "denoise": 1, "model": ["4", 0], - "negative": ["7", 0], "positive": ["6", 0], - "latent_image": ["5", 0], "sampler_name": "euler", - "scheduler": "normal", "seed": 42, "steps": 20 - } - }, - "8": { - "class_type": "VAEDecode", - "inputs": {"samples": ["3", 0], "vae": ["4", 2]} - }, - "9": { - "class_type": "SaveImage", - "inputs": {"filename_prefix": "ComfyUI", "images": ["8", 0]} - } - } - - -@pytest.fixture -def workflow_file(tmp_path, sample_workflow): - """Write sample workflow to a temp file and return the path.""" - p = tmp_path / "test_workflow.json" - p.write_text(json.dumps(sample_workflow)) - return str(p) - - -# โ”€โ”€ Workflow Tests โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ - -class TestWorkflowLoad: - """Test workflow file loading.""" - - def test_load_valid_workflow(self, workflow_file, sample_workflow): - """Should load a valid workflow JSON file.""" - result = workflow_mod.load_workflow(workflow_file) - assert result == sample_workflow - assert "3" in result - - def test_load_nonexistent_file(self): - """Should raise RuntimeError for missing file.""" - with pytest.raises(RuntimeError, match="not found"): - workflow_mod.load_workflow("/nonexistent/path/workflow.json") - - def test_load_non_json_extension(self, tmp_path): - """Should raise RuntimeError for non-.json file.""" - p = tmp_path / "workflow.txt" - p.write_text("{}") - with pytest.raises(RuntimeError, match=".json"): - workflow_mod.load_workflow(str(p)) - - def test_load_invalid_json(self, tmp_path): - """Should raise RuntimeError for malformed JSON.""" - p = tmp_path / "bad.json" - p.write_text("{not valid 
json") - with pytest.raises(RuntimeError, match="Invalid JSON"): - workflow_mod.load_workflow(str(p)) - - def test_load_non_dict_json(self, tmp_path): - """Should raise RuntimeError if JSON root is not a dict.""" - p = tmp_path / "list.json" - p.write_text("[1, 2, 3]") - with pytest.raises(RuntimeError, match="JSON object"): - workflow_mod.load_workflow(str(p)) - - -class TestWorkflowSave: - """Test workflow file saving.""" - - def test_save_workflow(self, tmp_path, sample_workflow): - """Should save workflow to JSON file.""" - dest = str(tmp_path / "saved.json") - result = workflow_mod.save_workflow(sample_workflow, dest) - assert result["status"] == "saved" - assert result["node_count"] == len(sample_workflow) - assert Path(dest).exists() - loaded = json.loads(Path(dest).read_text()) - assert loaded == sample_workflow - - def test_save_creates_parent_dirs(self, tmp_path, sample_workflow): - """Should create parent directories if they don't exist.""" - dest = str(tmp_path / "nested" / "deep" / "workflow.json") - result = workflow_mod.save_workflow(sample_workflow, dest) - assert result["status"] == "saved" - assert Path(dest).exists() - - def test_save_non_dict_raises(self): - """Should raise RuntimeError if workflow is not a dict.""" - with pytest.raises(RuntimeError, match="must be a dict"): - workflow_mod.save_workflow([1, 2, 3], "/tmp/test.json") - - -class TestWorkflowList: - """Test listing workflow files in a directory.""" - - def test_list_workflows(self, tmp_path, sample_workflow): - """Should list all JSON files in directory.""" - (tmp_path / "workflow1.json").write_text(json.dumps(sample_workflow)) - (tmp_path / "workflow2.json").write_text(json.dumps({"1": {"class_type": "SaveImage", "inputs": {}}})) - (tmp_path / "not_json.txt").write_text("ignored") - - result = workflow_mod.list_workflows(str(tmp_path)) - assert len(result) == 2 - filenames = [r["filename"] for r in result] - assert "workflow1.json" in filenames - assert "workflow2.json" in 
filenames - - def test_list_empty_directory(self, tmp_path): - """Should return empty list for directory with no JSON files.""" - result = workflow_mod.list_workflows(str(tmp_path)) - assert result == [] - - def test_list_nonexistent_directory(self): - """Should raise RuntimeError for nonexistent directory.""" - with pytest.raises(RuntimeError, match="not found"): - workflow_mod.list_workflows("/nonexistent/dir/xyz") - - -class TestWorkflowValidate: - """Test workflow validation.""" - - def test_valid_workflow(self, sample_workflow): - """Should pass validation for a well-formed workflow.""" - result = workflow_mod.validate_workflow(sample_workflow) - assert result["valid"] is True - assert result["node_count"] == len(sample_workflow) - assert result["errors"] == [] - - def test_empty_workflow(self): - """Should warn about empty workflow but not fail.""" - result = workflow_mod.validate_workflow({}) - assert result["valid"] is True - assert any("empty" in w.lower() for w in result["warnings"]) - - def test_missing_class_type(self): - """Should error on node missing class_type.""" - wf = {"1": {"inputs": {"text": "hello"}}} - result = workflow_mod.validate_workflow(wf) - assert result["valid"] is False - assert any("class_type" in e for e in result["errors"]) - - def test_non_dict_inputs(self): - """Should error when inputs is not a dict.""" - wf = {"1": {"class_type": "CLIPTextEncode", "inputs": ["bad"]}} - result = workflow_mod.validate_workflow(wf) - assert result["valid"] is False - - def test_non_dict_workflow(self): - """Should fail validation if workflow is not a dict.""" - result = workflow_mod.validate_workflow("not a dict") - assert result["valid"] is False - assert result["node_count"] == 0 - - -# โ”€โ”€ Queue Tests โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ - -class TestQueuePrompt: - """Test submitting prompts to the queue.""" - - def 
test_queue_prompt_success(self, sample_workflow):
-        """Should return prompt_id and queue position."""
-        mock_response = {
-            "prompt_id": "abc-123-def",
-            "number": 0,
-            "node_errors": {},
-        }
-        with patch("cli_anything.comfyui.core.queue.api_post", return_value=mock_response):
-            result = queue_mod.queue_prompt("http://localhost:8188", sample_workflow)
-
-        assert result["prompt_id"] == "abc-123-def"
-        assert result["number"] == 0
-        assert result["node_errors"] == {}
-        assert "client_id" in result
-
-    def test_queue_prompt_with_client_id(self, sample_workflow):
-        """Should use provided client_id."""
-        mock_response = {"prompt_id": "xyz", "number": 1, "node_errors": {}}
-        with patch("cli_anything.comfyui.core.queue.api_post", return_value=mock_response) as mock_post:
-            result = queue_mod.queue_prompt("http://localhost:8188", sample_workflow, client_id="my-client")
-
-        assert result["client_id"] == "my-client"
-        call_args = mock_post.call_args
-        assert call_args[0][2]["client_id"] == "my-client"
-
-    def test_queue_empty_workflow_raises(self):
-        """Should raise RuntimeError for empty workflow."""
-        with pytest.raises(RuntimeError, match="empty"):
-            queue_mod.queue_prompt("http://localhost:8188", {})
-
-    def test_queue_prompt_server_error_raises(self, sample_workflow):
-        """Should raise RuntimeError when server returns error."""
-        mock_response = {
-            "error": {"message": "Invalid prompt", "type": "value_error"}
-        }
-        with patch("cli_anything.comfyui.core.queue.api_post", return_value=mock_response):
-            with pytest.raises(RuntimeError, match="rejected"):
-                queue_mod.queue_prompt("http://localhost:8188", sample_workflow)
-
-
-class TestQueueStatus:
-    """Test queue status retrieval."""
-
-    def test_get_queue_status(self):
-        """Should return running and pending counts."""
-        mock_response = {
-            "queue_running": [["abc", {}, {}, {}]],
-            "queue_pending": [["def", {}, {}, {}], ["ghi", {}, {}, {}]],
-        }
-        with patch("cli_anything.comfyui.core.queue.api_get", return_value=mock_response):
-            result = queue_mod.get_queue_status("http://localhost:8188")
-
-        assert result["running_count"] == 1
-        assert result["pending_count"] == 2
-
-    def test_get_queue_status_empty(self):
-        """Should handle empty queue."""
-        mock_response = {"queue_running": [], "queue_pending": []}
-        with patch("cli_anything.comfyui.core.queue.api_get", return_value=mock_response):
-            result = queue_mod.get_queue_status("http://localhost:8188")
-
-        assert result["running_count"] == 0
-        assert result["pending_count"] == 0
-
-
-class TestQueueClear:
-    """Test queue clearing."""
-
-    def test_clear_queue(self):
-        """Should return cleared status."""
-        with patch("cli_anything.comfyui.core.queue.api_delete", return_value={"status": "ok"}):
-            result = queue_mod.clear_queue("http://localhost:8188")
-
-        assert result["status"] == "cleared"
-
-    def test_clear_queue_passes_clear_flag(self):
-        """Should pass clear=True to the API."""
-        with patch("cli_anything.comfyui.core.queue.api_delete", return_value={}) as mock_del:
-            queue_mod.clear_queue("http://localhost:8188")
-
-        call_args = mock_del.call_args
-        # data kwarg or positional arg should contain {"clear": True}
-        data_arg = call_args[1].get("data") or (call_args[0][2] if len(call_args[0]) > 2 else None)
-        assert data_arg == {"clear": True}
-
-
-class TestQueueHistory:
-    """Test prompt history retrieval."""
-
-    def test_get_history(self):
-        """Should format history entries with outputs."""
-        mock_response = {
-            "abc-123": {
-                "outputs": {
-                    "9": {
-                        "images": [
-                            {"filename": "ComfyUI_00001_.png", "subfolder": "", "type": "output"}
-                        ]
-                    }
-                },
-                "status": {"status_str": "success", "completed": True}
-            }
-        }
-        with patch("cli_anything.comfyui.core.queue.api_get", return_value=mock_response):
-            result = queue_mod.get_history("http://localhost:8188")
-
-        assert result["total"] == 1
-        assert "abc-123" in result["history"]
-        entry = result["history"]["abc-123"]
-        assert entry["completed"] is True
-        assert len(entry["outputs"]) == 1
-        assert entry["outputs"][0]["filename"] == "ComfyUI_00001_.png"
-
-    def test_get_prompt_history_not_found(self):
-        """Should raise RuntimeError when prompt ID not in history."""
-        with patch("cli_anything.comfyui.core.queue.api_get", return_value={}):
-            with pytest.raises(RuntimeError, match="not found"):
-                queue_mod.get_prompt_history("http://localhost:8188", "nonexistent-id")
-
-    def test_interrupt(self):
-        """Should call interrupt endpoint and return status."""
-        with patch("cli_anything.comfyui.core.queue.api_post", return_value={}):
-            result = queue_mod.interrupt("http://localhost:8188")
-        assert result["status"] == "interrupted"
-
-
-# ── Models Tests ─────────────────────────────────────────
-
-class TestModels:
-    """Test model listing functions."""
-
-    def _make_checkpoint_response(self, names):
-        return {"CheckpointLoaderSimple": {"input": {"required": {"ckpt_name": [names, {}]}}}}
-
-    def _make_lora_response(self, names):
-        return {"LoraLoader": {"input": {"required": {"lora_name": [names, {}]}}}}
-
-    def _make_vae_response(self, names):
-        return {"VAELoader": {"input": {"required": {"vae_name": [names, {}]}}}}
-
-    def _make_controlnet_response(self, names):
-        return {"ControlNetLoader": {"input": {"required": {"control_net_name": [names, {}]}}}}
-
-    def test_list_checkpoints(self):
-        """Should return sorted list of checkpoint names."""
-        mock_resp = self._make_checkpoint_response([
-            "sd_xl_base_1.0.safetensors", "v1-5-pruned-emaonly.ckpt", "deliberate_v2.safetensors",
-        ])
-        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
-            result = models_mod.list_checkpoints("http://localhost:8188")
-
-        assert isinstance(result, list)
-        assert len(result) == 3
-        assert result == sorted(result)
-
-    def test_list_loras(self):
-        """Should return sorted list of LoRA names."""
-        mock_resp = self._make_lora_response(["lora_b.safetensors", "lora_a.safetensors"])
-        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
-            result = models_mod.list_loras("http://localhost:8188")
-
-        assert result == ["lora_a.safetensors", "lora_b.safetensors"]
-
-    def test_list_vaes(self):
-        """Should return sorted list of VAE names."""
-        mock_resp = self._make_vae_response(["vae-ft-mse-840000-ema-pruned.ckpt"])
-        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
-            result = models_mod.list_vaes("http://localhost:8188")
-
-        assert "vae-ft-mse-840000-ema-pruned.ckpt" in result
-
-    def test_list_controlnets(self):
-        """Should return sorted list of ControlNet names."""
-        mock_resp = self._make_controlnet_response(["control_v11p_sd15_canny.pth"])
-        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
-            result = models_mod.list_controlnets("http://localhost:8188")
-
-        assert "control_v11p_sd15_canny.pth" in result
-
-    def test_list_checkpoints_bad_response_raises(self):
-        """Should raise RuntimeError on unexpected API response."""
-        with patch("cli_anything.comfyui.core.models.api_get", return_value={}):
-            with pytest.raises(RuntimeError, match="checkpoint"):
-                models_mod.list_checkpoints("http://localhost:8188")
-
-    def test_get_node_info(self):
-        """Should return formatted node schema."""
-        mock_resp = {
-            "KSampler": {
-                "display_name": "KSampler",
-                "description": "Samples latents",
-                "category": "sampling",
-                "input": {"required": {"steps": [["INT"], {"default": 20}]}},
-                "output": ["LATENT"],
-                "output_name": ["LATENT"],
-            }
-        }
-        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
-            result = models_mod.get_node_info("http://localhost:8188", "KSampler")
-
-        assert result["class_type"] == "KSampler"
-        assert result["category"] == "sampling"
-
-    def test_get_node_info_not_found_raises(self):
-        """Should raise RuntimeError when node class not in response."""
-        with patch("cli_anything.comfyui.core.models.api_get", return_value={}):
-            with pytest.raises(RuntimeError, match="not found"):
-                models_mod.get_node_info("http://localhost:8188", "NonExistentNode")
-
-    def test_list_all_node_classes(self):
-        """Should return sorted list of all node class names."""
-        mock_resp = {"KSampler": {}, "CLIPTextEncode": {}, "SaveImage": {}}
-        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
-            result = models_mod.list_all_node_classes("http://localhost:8188")
-
-        assert result == ["CLIPTextEncode", "KSampler", "SaveImage"]
-
-
-# ── Images Tests ─────────────────────────────────────────
-
-class TestImages:
-    """Test image listing and downloading."""
-
-    def test_list_output_images(self):
-        """Should return list of image file refs for a prompt."""
-        mock_history = {
-            "prompt_id": "abc-123",
-            "status": "success",
-            "completed": True,
-            "outputs": [
-                {"node_id": "9", "filename": "ComfyUI_00001_.png",
-                 "subfolder": "", "type": "output"}
-            ]
-        }
-        with patch("cli_anything.comfyui.core.images.get_prompt_history",
-                   return_value=mock_history):
-            result = images_mod.list_output_images("http://localhost:8188", "abc-123")
-
-        assert len(result) == 1
-        assert result[0]["filename"] == "ComfyUI_00001_.png"
-
-    def test_list_output_images_incomplete_raises(self):
-        """Should raise RuntimeError when prompt not yet complete."""
-        mock_history = {
-            "prompt_id": "abc-123",
-            "status": "running",
-            "completed": False,
-            "outputs": []
-        }
-        with patch("cli_anything.comfyui.core.images.get_prompt_history",
-                   return_value=mock_history):
-            with pytest.raises(RuntimeError, match="not completed"):
-                images_mod.list_output_images("http://localhost:8188", "abc-123")
-
-    def test_download_image(self, tmp_path):
-        """Should download image bytes and write to disk."""
-        fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 100
-        dest = str(tmp_path / "output.png")
-
-        with patch("cli_anything.comfyui.core.images.api_get_raw", return_value=fake_png):
-            result = images_mod.download_image(
-                base_url="http://localhost:8188",
-                filename="ComfyUI_00001_.png",
-                output_path=dest,
-            )
-
-        assert result["status"] == "downloaded"
-        assert result["size_bytes"] == len(fake_png)
-        assert Path(dest).read_bytes() == fake_png
-
-    def test_download_image_no_overwrite_raises(self, tmp_path):
-        """Should raise RuntimeError when output file exists and overwrite=False."""
-        dest = tmp_path / "existing.png"
-        dest.write_bytes(b"existing content")
-
-        with pytest.raises(RuntimeError, match="already exists"):
-            images_mod.download_image(
-                base_url="http://localhost:8188",
-                filename="ComfyUI_00001_.png",
-                output_path=str(dest),
-                overwrite=False,
-            )
-
-    def test_download_image_overwrite(self, tmp_path):
-        """Should overwrite existing file when overwrite=True."""
-        dest = tmp_path / "existing.png"
-        dest.write_bytes(b"old content")
-        fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 50
-
-        with patch("cli_anything.comfyui.core.images.api_get_raw", return_value=fake_png):
-            images_mod.download_image(
-                base_url="http://localhost:8188",
-                filename="ComfyUI_00001_.png",
-                output_path=str(dest),
-                overwrite=True,
-            )
-
-        assert dest.read_bytes() == fake_png
-
-    def test_download_prompt_images(self, tmp_path):
-        """Should download all images for a prompt to a directory."""
-        mock_history = {
-            "prompt_id": "abc-123",
-            "status": "success",
-            "completed": True,
-            "outputs": [
-                {"node_id": "9", "filename": "ComfyUI_00001_.png", "subfolder": "", "type": "output"},
-                {"node_id": "9", "filename": "ComfyUI_00002_.png", "subfolder": "", "type": "output"},
-            ]
-        }
-        fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 20
-
-        with patch("cli_anything.comfyui.core.images.get_prompt_history", return_value=mock_history), \
-             patch("cli_anything.comfyui.core.images.api_get_raw", return_value=fake_png):
-            results = images_mod.download_prompt_images(
-                base_url="http://localhost:8188",
-                prompt_id="abc-123",
-                output_dir=str(tmp_path),
-            )
-
-        assert len(results) == 2
-        assert all(r["status"] == "downloaded" for r in results)
-
-
-# ── CLI Integration Tests ────────────────────────────────
-
-class TestCLIWorkflow:
-    """Test CLI workflow commands."""
-
-    def test_workflow_list(self, runner, tmp_path, sample_workflow):
-        """workflow list should display JSON files."""
-        (tmp_path / "my_wf.json").write_text(json.dumps(sample_workflow))
-
-        result = runner.invoke(cli, ["workflow", "list", str(tmp_path)])
-        assert result.exit_code == 0
-        assert "my_wf.json" in result.output
-
-    def test_workflow_validate_valid(self, runner, workflow_file):
-        """workflow validate should pass for a valid workflow."""
-        result = runner.invoke(cli, ["workflow", "validate", workflow_file])
-        assert result.exit_code == 0
-        assert "valid" in result.output.lower()
-
-    def test_workflow_validate_json_output(self, runner, workflow_file):
-        """--json flag should produce valid JSON output."""
-        result = runner.invoke(cli, ["--json", "workflow", "validate", workflow_file])
-        assert result.exit_code == 0
-        data = json.loads(result.output)
-        assert "valid" in data
-        assert "node_count" in data
-
-
-class TestCLIQueue:
-    """Test CLI queue commands."""
-
-    def test_queue_prompt(self, runner, workflow_file):
-        """queue prompt should queue a workflow and show prompt_id."""
-        mock_response = {"prompt_id": "test-id-999", "number": 0, "node_errors": {}}
-        with patch("cli_anything.comfyui.core.queue.api_post", return_value=mock_response):
-            result = runner.invoke(cli, ["queue", "prompt", "--workflow", workflow_file])
-
-        assert result.exit_code == 0
-        assert "test-id-999" in result.output
-
-    def test_queue_status(self, runner):
-        """queue status should show running and pending counts."""
-        mock_response = {"queue_running": [], "queue_pending": [["id1", {}, {}, {}]]}
-        with patch("cli_anything.comfyui.core.queue.api_get", return_value=mock_response):
-            result = runner.invoke(cli, ["queue", "status"])
-
-        assert result.exit_code == 0
-        assert "1" in result.output
-
-    def test_queue_clear_with_confirm(self, runner):
-        """queue clear --confirm should skip prompt and clear."""
-        with patch("cli_anything.comfyui.core.queue.api_delete", return_value={}):
-            result = runner.invoke(cli, ["queue", "clear", "--confirm"])
-
-        assert result.exit_code == 0
-        assert "cleared" in result.output
-
-    def test_queue_history_json(self, runner):
-        """queue history --json should return valid JSON."""
-        mock_response = {
-            "abc": {"outputs": {}, "status": {"status_str": "success", "completed": True}}
-        }
-        with patch("cli_anything.comfyui.core.queue.api_get", return_value=mock_response):
-            result = runner.invoke(cli, ["--json", "queue", "history"])
-
-        assert result.exit_code == 0
-        data = json.loads(result.output)
-        assert "history" in data
-        assert "total" in data
-
-
-class TestCLIModels:
-    """Test CLI models commands."""
-
-    def test_models_checkpoints(self, runner):
-        """models checkpoints should list checkpoint names."""
-        mock_resp = {
-            "CheckpointLoaderSimple": {
-                "input": {"required": {"ckpt_name": [["v1-5-pruned-emaonly.ckpt", "sd_xl_base_1.0.safetensors"], {}]}}
-            }
-        }
-        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
-            result = runner.invoke(cli, ["models", "checkpoints"])
-
-        assert result.exit_code == 0
-        assert "v1-5-pruned-emaonly.ckpt" in result.output
-
-    def test_models_checkpoints_json(self, runner):
-        """models checkpoints --json should return a JSON array."""
-        mock_resp = {
-            "CheckpointLoaderSimple": {
-                "input": {"required": {"ckpt_name": [["model_a.safetensors"], {}]}}
-            }
-        }
-        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
-            result = runner.invoke(cli, ["--json", "models", "checkpoints"])
-
-        assert result.exit_code == 0
-        data = json.loads(result.output)
-        assert isinstance(data, list)
-        assert "model_a.safetensors" in data
-
-
-class TestCLIImages:
-    """Test CLI images commands."""
-
-    def test_images_list(self, runner):
-        """images list should show output filenames."""
-        mock_history = {
-            "prompt_id": "abc-123",
-            "status": "success",
-            "completed": True,
-            "outputs": [{"node_id": "9", "filename": "ComfyUI_00001_.png", "subfolder": "", "type": "output"}]
-        }
-        with patch("cli_anything.comfyui.core.images.get_prompt_history", return_value=mock_history):
-            result = runner.invoke(cli, ["images", "list", "--prompt-id", "abc-123"])
-
-        assert result.exit_code == 0
-        assert "ComfyUI_00001_.png" in result.output
-
-    def test_images_download(self, runner, tmp_path):
-        """images download should save file to disk."""
-        fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 30
-        dest = str(tmp_path / "out.png")
-
-        with patch("cli_anything.comfyui.core.images.api_get_raw", return_value=fake_png):
-            result = runner.invoke(cli, [
-                "images", "download",
-                "--filename", "ComfyUI_00001_.png",
-                "--output", dest,
-            ])
-
-        assert result.exit_code == 0
-        assert "downloaded" in result.output.lower()
-        assert Path(dest).exists()
-
-
-class TestCLISystem:
-    """Test CLI system commands."""
-
-    def test_system_stats(self, runner):
-        """system stats should display server info."""
-        mock_stats = {
-            "system": {"os": "linux", "python_version": "3.11"},
-            "devices": [{"name": "NVIDIA RTX 3060", "vram_total": 12884901888}]
-        }
-        with patch("cli_anything.comfyui.comfyui_cli.api_get", return_value=mock_stats):
-            result = runner.invoke(cli, ["system", "stats"])
-
-        assert result.exit_code == 0
-
-    def test_system_stats_json(self, runner):
-        """system stats --json should return valid JSON."""
-        mock_stats = {"system": {"os": "linux"}, "devices": []}
-        with patch("cli_anything.comfyui.comfyui_cli.api_get", return_value=mock_stats):
-            result = runner.invoke(cli, ["--json", "system", "stats"])
-
-        assert result.exit_code == 0
-        data = json.loads(result.output)
-        assert "system" in data
-
-
-# ── Backend Tests ────────────────────────────────────────
-
-class TestBackend:
-    """Test comfyui_backend HTTP wrappers."""
-
-    def test_api_get_success(self):
-        """api_get should return parsed JSON on success."""
-        from cli_anything.comfyui.utils.comfyui_backend import api_get
-
-        mock_resp = MagicMock()
-        mock_resp.status_code = 200
-        mock_resp.content = b'{"result": "ok"}'
-        mock_resp.json.return_value = {"result": "ok"}
-        mock_resp.raise_for_status = MagicMock()
-
-        with patch("cli_anything.comfyui.utils.comfyui_backend.requests.get",
-                   return_value=mock_resp):
-            result = api_get("http://localhost:8188", "/queue")
-
-        assert result == {"result": "ok"}
-
-    def test_api_get_connection_error(self):
-        """api_get should raise RuntimeError on connection failure."""
-        import requests as req
-        from cli_anything.comfyui.utils.comfyui_backend import api_get
-
-        with patch("cli_anything.comfyui.utils.comfyui_backend.requests.get",
-                   side_effect=req.exceptions.ConnectionError("refused")):
-            with pytest.raises(RuntimeError, match="Cannot connect"):
-                api_get("http://localhost:8188", "/queue")
-
-    def test_api_post_success(self):
-        """api_post should return parsed JSON on success."""
-        from cli_anything.comfyui.utils.comfyui_backend import api_post
-
-        mock_resp = MagicMock()
-        mock_resp.status_code = 200
-        mock_resp.content = b'{"prompt_id": "abc"}'
-        mock_resp.json.return_value = {"prompt_id": "abc"}
-        mock_resp.raise_for_status = MagicMock()
-
-        with patch("cli_anything.comfyui.utils.comfyui_backend.requests.post",
-                   return_value=mock_resp):
-            result = api_post("http://localhost:8188", "/prompt", {"prompt": {}})
-
-        assert result["prompt_id"] == "abc"
-
-    def test_api_delete_success(self):
-        """api_delete should return ok status on 204."""
-        from cli_anything.comfyui.utils.comfyui_backend import api_delete
-
-        mock_resp = MagicMock()
-        mock_resp.status_code = 204
-        mock_resp.content = b""
-        mock_resp.raise_for_status = MagicMock()
-
-        with patch("cli_anything.comfyui.utils.comfyui_backend.requests.delete",
-                   return_value=mock_resp):
-            result = api_delete("http://localhost:8188", "/queue")
-
-        assert result == {"status": "ok"}
-
-    def test_api_get_raw_returns_bytes(self):
-        """api_get_raw should return raw bytes."""
-        from cli_anything.comfyui.utils.comfyui_backend import api_get_raw
-
-        fake_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 50
-        mock_resp = MagicMock()
-        mock_resp.status_code = 200
-        mock_resp.content = fake_bytes
-        mock_resp.raise_for_status = MagicMock()
-
-        with patch("cli_anything.comfyui.utils.comfyui_backend.requests.get",
-                   return_value=mock_resp):
-            result = api_get_raw("http://localhost:8188", "/view",
-                                 params={"filename": "ComfyUI_00001_.png", "type": "output"})
-
-        assert result == fake_bytes
-
-    def test_api_get_timeout_raises(self):
-        """api_get should raise RuntimeError on timeout."""
-        import requests as req
-        from cli_anything.comfyui.utils.comfyui_backend import api_get
-
-        with patch("cli_anything.comfyui.utils.comfyui_backend.requests.get",
-                   side_effect=req.exceptions.Timeout()):
-            with pytest.raises(RuntimeError, match="timed out"):
-                api_get("http://localhost:8188", "/queue")
+"""Unit tests for ComfyUI CLI harness — no ComfyUI installation required.
+
+Tests cover:
+- Workflow load/save/list/validate
+- Queue operations (prompt, status, clear, history, interrupt)
+- Model listing (checkpoints, LoRAs, VAEs, ControlNets, node info)
+- Image listing and downloading
+- CLI command parsing and output
+- Error handling and edge cases
+
+Run with:
+    python -m pytest comfyui/tests/test_core.py
+    python -m pytest comfyui/tests/test_core.py -v
+"""
+
+import json
+from pathlib import Path
+from unittest.mock import patch, MagicMock
+import pytest
+from click.testing import CliRunner
+
+from cli_anything.comfyui.comfyui_cli import cli
+from cli_anything.comfyui.core import workflows as workflow_mod
+from cli_anything.comfyui.core import queue as queue_mod
+from cli_anything.comfyui.core import models as models_mod
+from cli_anything.comfyui.core import images as images_mod
+
+
+# ── Fixtures ─────────────────────────────────────────────
+
+@pytest.fixture
+def runner():
+    """Click CLI test runner."""
+    return CliRunner()
+
+
+@pytest.fixture
+def sample_workflow():
+    """Minimal valid ComfyUI workflow (API format)."""
+    return {
+        "4": {
+            "class_type": "CheckpointLoaderSimple",
+            "inputs": {"ckpt_name": "v1-5-pruned-emaonly.ckpt"}
+        },
+        "6": {
+            "class_type": "CLIPTextEncode",
+            "inputs": {"text": "a photo of a cat", "clip": ["4", 1]}
+        },
+        "7": {
+            "class_type": "CLIPTextEncode",
+            "inputs": {"text": "bad quality", "clip": ["4", 1]}
+        },
+        "5": {
+            "class_type": "EmptyLatentImage",
+            "inputs": {"batch_size": 1, "height": 512, "width": 512}
+        },
+        "3": {
+            "class_type": "KSampler",
+            "inputs": {
+                "cfg": 7, "denoise": 1, "model": ["4", 0],
+                "negative": ["7", 0], "positive": ["6", 0],
+                "latent_image": ["5", 0], "sampler_name": "euler",
+                "scheduler": "normal", "seed": 42, "steps": 20
+            }
+        },
+        "8": {
+            "class_type": "VAEDecode",
+            "inputs": {"samples": ["3", 0], "vae": ["4", 2]}
+        },
+        "9": {
+            "class_type": "SaveImage",
+            "inputs": {"filename_prefix": "ComfyUI", "images": ["8", 0]}
+        }
+    }
+
+
+@pytest.fixture
+def workflow_file(tmp_path, sample_workflow):
+    """Write sample workflow to a temp file and return the path."""
+    p = tmp_path / "test_workflow.json"
+    p.write_text(json.dumps(sample_workflow))
+    return str(p)
+
+
+# ── Workflow Tests ───────────────────────────────────────
+
+class TestWorkflowLoad:
+    """Test workflow file loading."""
+
+    def test_load_valid_workflow(self, workflow_file, sample_workflow):
+        """Should load a valid workflow JSON file."""
+        result = workflow_mod.load_workflow(workflow_file)
+        assert result == sample_workflow
+        assert "3" in result
+
+    def test_load_nonexistent_file(self):
+        """Should raise RuntimeError for missing file."""
+        with pytest.raises(RuntimeError, match="not found"):
+            workflow_mod.load_workflow("/nonexistent/path/workflow.json")
+
+    def test_load_non_json_extension(self, tmp_path):
+        """Should raise RuntimeError for non-.json file."""
+        p = tmp_path / "workflow.txt"
+        p.write_text("{}")
+        with pytest.raises(RuntimeError, match=".json"):
+            workflow_mod.load_workflow(str(p))
+
+    def test_load_invalid_json(self, tmp_path):
+        """Should raise RuntimeError for malformed JSON."""
+        p = tmp_path / "bad.json"
+        p.write_text("{not valid json")
+        with pytest.raises(RuntimeError, match="Invalid JSON"):
+            workflow_mod.load_workflow(str(p))
+
+    def test_load_non_dict_json(self, tmp_path):
+        """Should raise RuntimeError if JSON root is not a dict."""
+        p = tmp_path / "list.json"
+        p.write_text("[1, 2, 3]")
+        with pytest.raises(RuntimeError, match="JSON object"):
+            workflow_mod.load_workflow(str(p))
+
+
+class TestWorkflowSave:
+    """Test workflow file saving."""
+
+    def test_save_workflow(self, tmp_path, sample_workflow):
+        """Should save workflow to JSON file."""
+        dest = str(tmp_path / "saved.json")
+        result = workflow_mod.save_workflow(sample_workflow, dest)
+        assert result["status"] == "saved"
+        assert result["node_count"] == len(sample_workflow)
+        assert Path(dest).exists()
+        loaded = json.loads(Path(dest).read_text())
+        assert loaded == sample_workflow
+
+    def test_save_creates_parent_dirs(self, tmp_path, sample_workflow):
+        """Should create parent directories if they don't exist."""
+        dest = str(tmp_path / "nested" / "deep" / "workflow.json")
+        result = workflow_mod.save_workflow(sample_workflow, dest)
+        assert result["status"] == "saved"
+        assert Path(dest).exists()
+
+    def test_save_non_dict_raises(self):
+        """Should raise RuntimeError if workflow is not a dict."""
+        with pytest.raises(RuntimeError, match="must be a dict"):
+            workflow_mod.save_workflow([1, 2, 3], "/tmp/test.json")
+
+
+class TestWorkflowList:
+    """Test listing workflow files in a directory."""
+
+    def test_list_workflows(self, tmp_path, sample_workflow):
+        """Should list all JSON files in directory."""
+        (tmp_path / "workflow1.json").write_text(json.dumps(sample_workflow))
+        (tmp_path / "workflow2.json").write_text(json.dumps({"1": {"class_type": "SaveImage", "inputs": {}}}))
+        (tmp_path / "not_json.txt").write_text("ignored")
+
+        result = workflow_mod.list_workflows(str(tmp_path))
+        assert len(result) == 2
+        filenames = [r["filename"] for r in result]
+        assert "workflow1.json" in filenames
+        assert "workflow2.json" in filenames
+
+    def test_list_empty_directory(self, tmp_path):
+        """Should return empty list for directory with no JSON files."""
+        result = workflow_mod.list_workflows(str(tmp_path))
+        assert result == []
+
+    def test_list_nonexistent_directory(self):
+        """Should raise RuntimeError for nonexistent directory."""
+        with pytest.raises(RuntimeError, match="not found"):
+            workflow_mod.list_workflows("/nonexistent/dir/xyz")
+
+
+class TestWorkflowValidate:
+    """Test workflow validation."""
+
+    def test_valid_workflow(self, sample_workflow):
+        """Should pass validation for a well-formed workflow."""
+        result = workflow_mod.validate_workflow(sample_workflow)
+        assert result["valid"] is True
+        assert result["node_count"] == len(sample_workflow)
+        assert result["errors"] == []
+
+    def test_empty_workflow(self):
+        """Should warn about empty workflow but not fail."""
+        result = workflow_mod.validate_workflow({})
+        assert result["valid"] is True
+        assert any("empty" in w.lower() for w in result["warnings"])
+
+    def test_missing_class_type(self):
+        """Should error on node missing class_type."""
+        wf = {"1": {"inputs": {"text": "hello"}}}
+        result = workflow_mod.validate_workflow(wf)
+        assert result["valid"] is False
+        assert any("class_type" in e for e in result["errors"])
+
+    def test_non_dict_inputs(self):
+        """Should error when inputs is not a dict."""
+        wf = {"1": {"class_type": "CLIPTextEncode", "inputs": ["bad"]}}
+        result = workflow_mod.validate_workflow(wf)
+        assert result["valid"] is False
+
+    def test_non_dict_workflow(self):
+        """Should fail validation if workflow is not a dict."""
+        result = workflow_mod.validate_workflow("not a dict")
+        assert result["valid"] is False
+        assert result["node_count"] == 0
+
+
+# ── Queue Tests ──────────────────────────────────────────
+
+class TestQueuePrompt:
+    """Test submitting prompts to the queue."""
+
+    def test_queue_prompt_success(self, sample_workflow):
+        """Should return prompt_id and queue position."""
+        mock_response = {
+            "prompt_id": "abc-123-def",
+            "number": 0,
+            "node_errors": {},
+        }
+        with patch("cli_anything.comfyui.core.queue.api_post", return_value=mock_response):
+            result = queue_mod.queue_prompt("http://localhost:8188", sample_workflow)
+
+        assert result["prompt_id"] == "abc-123-def"
+        assert result["number"] == 0
+        assert result["node_errors"] == {}
+        assert "client_id" in result
+
+    def test_queue_prompt_with_client_id(self, sample_workflow):
+        """Should use provided client_id."""
+        mock_response = {"prompt_id": "xyz", "number": 1, "node_errors": {}}
+        with patch("cli_anything.comfyui.core.queue.api_post", return_value=mock_response) as mock_post:
+            result = queue_mod.queue_prompt("http://localhost:8188", sample_workflow, client_id="my-client")
+
+        assert result["client_id"] == "my-client"
+        call_args = mock_post.call_args
+        assert call_args[0][2]["client_id"] == "my-client"
+
+    def test_queue_empty_workflow_raises(self):
+        """Should raise RuntimeError for empty workflow."""
+        with pytest.raises(RuntimeError, match="empty"):
+            queue_mod.queue_prompt("http://localhost:8188", {})
+
+    def test_queue_prompt_server_error_raises(self, sample_workflow):
+        """Should raise RuntimeError when server returns error."""
+        mock_response = {
+            "error": {"message": "Invalid prompt", "type": "value_error"}
+        }
+        with patch("cli_anything.comfyui.core.queue.api_post", return_value=mock_response):
+            with pytest.raises(RuntimeError, match="rejected"):
+                queue_mod.queue_prompt("http://localhost:8188", sample_workflow)
+
+
+class TestQueueStatus:
+    """Test queue status retrieval."""
+
+    def test_get_queue_status(self):
+        """Should return running and pending counts."""
+        mock_response = {
+            "queue_running": [["abc", {}, {}, {}]],
+            "queue_pending": [["def", {}, {}, {}], ["ghi", {}, {}, {}]],
+        }
+        with patch("cli_anything.comfyui.core.queue.api_get", return_value=mock_response):
+            result = queue_mod.get_queue_status("http://localhost:8188")
+
+        assert result["running_count"] == 1
+        assert result["pending_count"] == 2
+
+    def test_get_queue_status_empty(self):
+        """Should handle empty queue."""
+        mock_response = {"queue_running": [], "queue_pending": []}
+        with patch("cli_anything.comfyui.core.queue.api_get", return_value=mock_response):
+            result = queue_mod.get_queue_status("http://localhost:8188")
+
+        assert result["running_count"] == 0
+        assert result["pending_count"] == 0
+
+
+class TestQueueClear:
+    """Test queue clearing."""
+
+    def test_clear_queue(self):
+        """Should return cleared status."""
+        with patch("cli_anything.comfyui.core.queue.api_delete", return_value={"status": "ok"}):
+            result = queue_mod.clear_queue("http://localhost:8188")
+
+        assert result["status"] == "cleared"
+
+    def test_clear_queue_passes_clear_flag(self):
+        """Should pass clear=True to the API."""
+        with patch("cli_anything.comfyui.core.queue.api_delete", return_value={}) as mock_del:
+            queue_mod.clear_queue("http://localhost:8188")
+
+        call_args = mock_del.call_args
+        # data kwarg or positional arg should contain {"clear": True}
+        data_arg = call_args[1].get("data") or (call_args[0][2] if len(call_args[0]) > 2 else None)
+        assert data_arg == {"clear": True}
+
+
+class TestQueueHistory:
+    """Test prompt history retrieval."""
+
+    def test_get_history(self):
+        """Should format history entries with outputs."""
+        mock_response = {
+            "abc-123": {
+                "outputs": {
+                    "9": {
+                        "images": [
+                            {"filename": "ComfyUI_00001_.png", "subfolder": "", "type": "output"}
+                        ]
+                    }
+                },
+                "status": {"status_str": "success", "completed": True}
+            }
+        }
+        with patch("cli_anything.comfyui.core.queue.api_get", return_value=mock_response):
+            result = queue_mod.get_history("http://localhost:8188")
+
+        assert result["total"] == 1
+        assert "abc-123" in result["history"]
+        entry = result["history"]["abc-123"]
+        assert entry["completed"] is True
+        assert len(entry["outputs"]) == 1
+        assert entry["outputs"][0]["filename"] == "ComfyUI_00001_.png"
+
+    def test_get_prompt_history_not_found(self):
+        """Should raise RuntimeError when prompt ID not in history."""
+        with patch("cli_anything.comfyui.core.queue.api_get", return_value={}):
+            with pytest.raises(RuntimeError, match="not found"):
+                queue_mod.get_prompt_history("http://localhost:8188", "nonexistent-id")
+
+    def test_interrupt(self):
+        """Should call interrupt endpoint and return status."""
+        with patch("cli_anything.comfyui.core.queue.api_post", return_value={}):
+            result = queue_mod.interrupt("http://localhost:8188")
+        assert result["status"] == "interrupted"
+
+
+# ── Models Tests ─────────────────────────────────────────
+
+class TestModels:
+    """Test model listing functions."""
+
+    def _make_checkpoint_response(self, names):
+        return {"CheckpointLoaderSimple": {"input": {"required": {"ckpt_name": [names, {}]}}}}
+
+    def _make_lora_response(self, names):
+        return {"LoraLoader": {"input": {"required": {"lora_name": [names, {}]}}}}
+
+    def _make_vae_response(self, names):
+        return {"VAELoader": {"input": {"required": {"vae_name": [names, {}]}}}}
+
+    def _make_controlnet_response(self, names):
+        return {"ControlNetLoader": {"input": {"required": {"control_net_name": [names, {}]}}}}
+
+    def test_list_checkpoints(self):
+        """Should return sorted list of checkpoint names."""
+        mock_resp = self._make_checkpoint_response([
+            "sd_xl_base_1.0.safetensors", "v1-5-pruned-emaonly.ckpt", "deliberate_v2.safetensors",
+        ])
+        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
+            result = models_mod.list_checkpoints("http://localhost:8188")
+
+        assert isinstance(result, list)
+        assert len(result) == 3
+        assert result == sorted(result)
+
+    def test_list_loras(self):
+        """Should return sorted list of LoRA names."""
+        mock_resp = self._make_lora_response(["lora_b.safetensors", "lora_a.safetensors"])
+        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
+            result = models_mod.list_loras("http://localhost:8188")
+
+        assert result == ["lora_a.safetensors", "lora_b.safetensors"]
+
+    def test_list_vaes(self):
+        """Should return sorted list of VAE names."""
+        mock_resp = self._make_vae_response(["vae-ft-mse-840000-ema-pruned.ckpt"])
+        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
+            result = models_mod.list_vaes("http://localhost:8188")
+
+        assert "vae-ft-mse-840000-ema-pruned.ckpt" in result
+
+    def test_list_controlnets(self):
+        """Should return sorted list of ControlNet names."""
+        mock_resp = self._make_controlnet_response(["control_v11p_sd15_canny.pth"])
+        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
+            result = models_mod.list_controlnets("http://localhost:8188")
+
+        assert "control_v11p_sd15_canny.pth" in result
+
+    def test_list_checkpoints_bad_response_raises(self):
+        """Should raise RuntimeError on unexpected API response."""
+        with patch("cli_anything.comfyui.core.models.api_get", return_value={}):
+            with pytest.raises(RuntimeError, match="checkpoint"):
+                models_mod.list_checkpoints("http://localhost:8188")
+
+    def test_get_node_info(self):
+        """Should return formatted node schema."""
+        mock_resp = {
+            "KSampler": {
+                "display_name": "KSampler",
+                "description": "Samples latents",
+                "category": "sampling",
+                "input": {"required": {"steps": [["INT"], {"default": 20}]}},
+                "output": ["LATENT"],
+                "output_name": ["LATENT"],
+            }
+        }
+        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
+            result = models_mod.get_node_info("http://localhost:8188", "KSampler")
+
+        assert result["class_type"] == "KSampler"
+        assert result["category"] == "sampling"
+
+    def test_get_node_info_not_found_raises(self):
+        """Should raise RuntimeError when node class not in response."""
+        with patch("cli_anything.comfyui.core.models.api_get", return_value={}):
+            with pytest.raises(RuntimeError, match="not found"):
+                models_mod.get_node_info("http://localhost:8188", "NonExistentNode")
+
+    def test_list_all_node_classes(self):
+        """Should return sorted list of all node class names."""
+        mock_resp = {"KSampler": {}, "CLIPTextEncode": {}, "SaveImage": {}}
+        with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp):
+            result = models_mod.list_all_node_classes("http://localhost:8188")
+
+        assert result == ["CLIPTextEncode", "KSampler", "SaveImage"]
+
+
+# โ”€โ”€ Images Tests โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +class TestImages: + """Test image listing and downloading.""" + + def test_list_output_images(self): + """Should return list of image file refs for a prompt.""" + mock_history = { + "prompt_id": "abc-123", + "status": "success", + "completed": True, + "outputs": [ + {"node_id": "9", "filename": "ComfyUI_00001_.png", + "subfolder": "", "type": "output"} + ] + } + with patch("cli_anything.comfyui.core.images.get_prompt_history", + return_value=mock_history): + result = images_mod.list_output_images("http://localhost:8188", "abc-123") + + assert len(result) == 1 + assert result[0]["filename"] == "ComfyUI_00001_.png" + + def test_list_output_images_incomplete_raises(self): + """Should raise RuntimeError when prompt not yet complete.""" + mock_history = { + "prompt_id": "abc-123", + "status": "running", + "completed": False, + "outputs": [] + } + with patch("cli_anything.comfyui.core.images.get_prompt_history", + return_value=mock_history): + with pytest.raises(RuntimeError, match="not completed"): + images_mod.list_output_images("http://localhost:8188", "abc-123") + + def test_download_image(self, tmp_path): + """Should download image bytes and write to disk.""" + fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 100 + dest = str(tmp_path / "output.png") + + with patch("cli_anything.comfyui.core.images.api_get_raw", return_value=fake_png): + result = images_mod.download_image( + base_url="http://localhost:8188", + filename="ComfyUI_00001_.png", + output_path=dest, + ) + + assert result["status"] == "downloaded" + assert result["size_bytes"] == len(fake_png) + assert Path(dest).read_bytes() == fake_png + + def test_download_image_no_overwrite_raises(self, tmp_path): + """Should raise RuntimeError when output file exists and overwrite=False.""" + dest = tmp_path / "existing.png" + 
dest.write_bytes(b"existing content") + + with pytest.raises(RuntimeError, match="already exists"): + images_mod.download_image( + base_url="http://localhost:8188", + filename="ComfyUI_00001_.png", + output_path=str(dest), + overwrite=False, + ) + + def test_download_image_overwrite(self, tmp_path): + """Should overwrite existing file when overwrite=True.""" + dest = tmp_path / "existing.png" + dest.write_bytes(b"old content") + fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 50 + + with patch("cli_anything.comfyui.core.images.api_get_raw", return_value=fake_png): + images_mod.download_image( + base_url="http://localhost:8188", + filename="ComfyUI_00001_.png", + output_path=str(dest), + overwrite=True, + ) + + assert dest.read_bytes() == fake_png + + def test_download_prompt_images(self, tmp_path): + """Should download all images for a prompt to a directory.""" + mock_history = { + "prompt_id": "abc-123", + "status": "success", + "completed": True, + "outputs": [ + {"node_id": "9", "filename": "ComfyUI_00001_.png", "subfolder": "", "type": "output"}, + {"node_id": "9", "filename": "ComfyUI_00002_.png", "subfolder": "", "type": "output"}, + ] + } + fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 20 + + with patch("cli_anything.comfyui.core.images.get_prompt_history", return_value=mock_history), \ + patch("cli_anything.comfyui.core.images.api_get_raw", return_value=fake_png): + results = images_mod.download_prompt_images( + base_url="http://localhost:8188", + prompt_id="abc-123", + output_dir=str(tmp_path), + ) + + assert len(results) == 2 + assert all(r["status"] == "downloaded" for r in results) + + +# โ”€โ”€ CLI Integration Tests โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +class TestCLIWorkflow: + """Test CLI workflow commands.""" + + def test_workflow_list(self, runner, tmp_path, sample_workflow): + """workflow list should display JSON files.""" + (tmp_path / 
"my_wf.json").write_text(json.dumps(sample_workflow)) + + result = runner.invoke(cli, ["workflow", "list", str(tmp_path)]) + assert result.exit_code == 0 + assert "my_wf.json" in result.output + + def test_workflow_validate_valid(self, runner, workflow_file): + """workflow validate should pass for a valid workflow.""" + result = runner.invoke(cli, ["workflow", "validate", workflow_file]) + assert result.exit_code == 0 + assert "valid" in result.output.lower() + + def test_workflow_validate_json_output(self, runner, workflow_file): + """--json flag should produce valid JSON output.""" + result = runner.invoke(cli, ["--json", "workflow", "validate", workflow_file]) + assert result.exit_code == 0 + data = json.loads(result.output) + assert "valid" in data + assert "node_count" in data + + +class TestCLIQueue: + """Test CLI queue commands.""" + + def test_queue_prompt(self, runner, workflow_file): + """queue prompt should queue a workflow and show prompt_id.""" + mock_response = {"prompt_id": "test-id-999", "number": 0, "node_errors": {}} + with patch("cli_anything.comfyui.core.queue.api_post", return_value=mock_response): + result = runner.invoke(cli, ["queue", "prompt", "--workflow", workflow_file]) + + assert result.exit_code == 0 + assert "test-id-999" in result.output + + def test_queue_status(self, runner): + """queue status should show running and pending counts.""" + mock_response = {"queue_running": [], "queue_pending": [["id1", {}, {}, {}]]} + with patch("cli_anything.comfyui.core.queue.api_get", return_value=mock_response): + result = runner.invoke(cli, ["queue", "status"]) + + assert result.exit_code == 0 + assert "1" in result.output + + def test_queue_clear_with_confirm(self, runner): + """queue clear --confirm should skip prompt and clear.""" + with patch("cli_anything.comfyui.core.queue.api_delete", return_value={}): + result = runner.invoke(cli, ["queue", "clear", "--confirm"]) + + assert result.exit_code == 0 + assert "cleared" in result.output + + 
def test_queue_history_json(self, runner): + """queue history --json should return valid JSON.""" + mock_response = { + "abc": {"outputs": {}, "status": {"status_str": "success", "completed": True}} + } + with patch("cli_anything.comfyui.core.queue.api_get", return_value=mock_response): + result = runner.invoke(cli, ["--json", "queue", "history"]) + + assert result.exit_code == 0 + data = json.loads(result.output) + assert "history" in data + assert "total" in data + + +class TestCLIModels: + """Test CLI models commands.""" + + def test_models_checkpoints(self, runner): + """models checkpoints should list checkpoint names.""" + mock_resp = { + "CheckpointLoaderSimple": { + "input": {"required": {"ckpt_name": [["v1-5-pruned-emaonly.ckpt", "sd_xl_base_1.0.safetensors"], {}]}} + } + } + with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp): + result = runner.invoke(cli, ["models", "checkpoints"]) + + assert result.exit_code == 0 + assert "v1-5-pruned-emaonly.ckpt" in result.output + + def test_models_checkpoints_json(self, runner): + """models checkpoints --json should return a JSON array.""" + mock_resp = { + "CheckpointLoaderSimple": { + "input": {"required": {"ckpt_name": [["model_a.safetensors"], {}]}} + } + } + with patch("cli_anything.comfyui.core.models.api_get", return_value=mock_resp): + result = runner.invoke(cli, ["--json", "models", "checkpoints"]) + + assert result.exit_code == 0 + data = json.loads(result.output) + assert isinstance(data, list) + assert "model_a.safetensors" in data + + +class TestCLIImages: + """Test CLI images commands.""" + + def test_images_list(self, runner): + """images list should show output filenames.""" + mock_history = { + "prompt_id": "abc-123", + "status": "success", + "completed": True, + "outputs": [{"node_id": "9", "filename": "ComfyUI_00001_.png", "subfolder": "", "type": "output"}] + } + with patch("cli_anything.comfyui.core.images.get_prompt_history", return_value=mock_history): + result = 
runner.invoke(cli, ["images", "list", "--prompt-id", "abc-123"]) + + assert result.exit_code == 0 + assert "ComfyUI_00001_.png" in result.output + + def test_images_download(self, runner, tmp_path): + """images download should save file to disk.""" + fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 30 + dest = str(tmp_path / "out.png") + + with patch("cli_anything.comfyui.core.images.api_get_raw", return_value=fake_png): + result = runner.invoke(cli, [ + "images", "download", + "--filename", "ComfyUI_00001_.png", + "--output", dest, + ]) + + assert result.exit_code == 0 + assert "downloaded" in result.output.lower() + assert Path(dest).exists() + + +class TestCLISystem: + """Test CLI system commands.""" + + def test_system_stats(self, runner): + """system stats should display server info.""" + mock_stats = { + "system": {"os": "linux", "python_version": "3.11"}, + "devices": [{"name": "NVIDIA RTX 3060", "vram_total": 12884901888}] + } + with patch("cli_anything.comfyui.comfyui_cli.api_get", return_value=mock_stats): + result = runner.invoke(cli, ["system", "stats"]) + + assert result.exit_code == 0 + + def test_system_stats_json(self, runner): + """system stats --json should return valid JSON.""" + mock_stats = {"system": {"os": "linux"}, "devices": []} + with patch("cli_anything.comfyui.comfyui_cli.api_get", return_value=mock_stats): + result = runner.invoke(cli, ["--json", "system", "stats"]) + + assert result.exit_code == 0 + data = json.loads(result.output) + assert "system" in data + + +# โ”€โ”€ Backend Tests โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +class TestBackend: + """Test comfyui_backend HTTP wrappers.""" + + def test_api_get_success(self): + """api_get should return parsed JSON on success.""" + from cli_anything.comfyui.utils.comfyui_backend import api_get + + mock_resp = MagicMock() + mock_resp.status_code = 200 + mock_resp.content = b'{"result": 
"ok"}' + mock_resp.json.return_value = {"result": "ok"} + mock_resp.raise_for_status = MagicMock() + + with patch("cli_anything.comfyui.utils.comfyui_backend.requests.get", + return_value=mock_resp): + result = api_get("http://localhost:8188", "/queue") + + assert result == {"result": "ok"} + + def test_api_get_connection_error(self): + """api_get should raise RuntimeError on connection failure.""" + import requests as req + from cli_anything.comfyui.utils.comfyui_backend import api_get + + with patch("cli_anything.comfyui.utils.comfyui_backend.requests.get", + side_effect=req.exceptions.ConnectionError("refused")): + with pytest.raises(RuntimeError, match="Cannot connect"): + api_get("http://localhost:8188", "/queue") + + def test_api_post_success(self): + """api_post should return parsed JSON on success.""" + from cli_anything.comfyui.utils.comfyui_backend import api_post + + mock_resp = MagicMock() + mock_resp.status_code = 200 + mock_resp.content = b'{"prompt_id": "abc"}' + mock_resp.json.return_value = {"prompt_id": "abc"} + mock_resp.raise_for_status = MagicMock() + + with patch("cli_anything.comfyui.utils.comfyui_backend.requests.post", + return_value=mock_resp): + result = api_post("http://localhost:8188", "/prompt", {"prompt": {}}) + + assert result["prompt_id"] == "abc" + + def test_api_delete_success(self): + """api_delete should return ok status on 204.""" + from cli_anything.comfyui.utils.comfyui_backend import api_delete + + mock_resp = MagicMock() + mock_resp.status_code = 204 + mock_resp.content = b"" + mock_resp.raise_for_status = MagicMock() + + with patch("cli_anything.comfyui.utils.comfyui_backend.requests.delete", + return_value=mock_resp): + result = api_delete("http://localhost:8188", "/queue") + + assert result == {"status": "ok"} + + def test_api_get_raw_returns_bytes(self): + """api_get_raw should return raw bytes.""" + from cli_anything.comfyui.utils.comfyui_backend import api_get_raw + + fake_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 50 + 
mock_resp = MagicMock() + mock_resp.status_code = 200 + mock_resp.content = fake_bytes + mock_resp.raise_for_status = MagicMock() + + with patch("cli_anything.comfyui.utils.comfyui_backend.requests.get", + return_value=mock_resp): + result = api_get_raw("http://localhost:8188", "/view", + params={"filename": "ComfyUI_00001_.png", "type": "output"}) + + assert result == fake_bytes + + def test_api_get_timeout_raises(self): + """api_get should raise RuntimeError on timeout.""" + import requests as req + from cli_anything.comfyui.utils.comfyui_backend import api_get + + with patch("cli_anything.comfyui.utils.comfyui_backend.requests.get", + side_effect=req.exceptions.Timeout()): + with pytest.raises(RuntimeError, match="timed out"): + api_get("http://localhost:8188", "/queue") diff --git a/comfyui/agent-harness/cli_anything/comfyui/tests/test_full_e2e.py b/comfyui/agent-harness/cli_anything/comfyui/tests/test_full_e2e.py index 93e223e2cc..7f2e63f29a 100644 --- a/comfyui/agent-harness/cli_anything/comfyui/tests/test_full_e2e.py +++ b/comfyui/agent-harness/cli_anything/comfyui/tests/test_full_e2e.py @@ -1,222 +1,222 @@ -"""Full end-to-end tests for ComfyUI CLI harness. - -These tests simulate a complete generation workflow using mocked HTTP responses. -They do NOT require ComfyUI to be installed or running. 
- -Run with: - python -m pytest comfyui/tests/test_full_e2e.py -v -""" - -import json -from pathlib import Path -from unittest.mock import patch -import pytest -from click.testing import CliRunner - -from cli_anything.comfyui.comfyui_cli import cli - - -@pytest.fixture -def runner(): - return CliRunner() - - -@pytest.fixture -def sample_workflow(): - return { - "4": {"class_type": "CheckpointLoaderSimple", - "inputs": {"ckpt_name": "v1-5-pruned-emaonly.ckpt"}}, - "6": {"class_type": "CLIPTextEncode", - "inputs": {"text": "a beautiful landscape", "clip": ["4", 1]}}, - "7": {"class_type": "CLIPTextEncode", - "inputs": {"text": "ugly, bad", "clip": ["4", 1]}}, - "5": {"class_type": "EmptyLatentImage", - "inputs": {"batch_size": 1, "height": 512, "width": 512}}, - "3": {"class_type": "KSampler", - "inputs": {"cfg": 7, "denoise": 1, "model": ["4", 0], - "negative": ["7", 0], "positive": ["6", 0], - "latent_image": ["5", 0], "sampler_name": "euler", - "scheduler": "normal", "seed": 12345, "steps": 20}}, - "8": {"class_type": "VAEDecode", - "inputs": {"samples": ["3", 0], "vae": ["4", 2]}}, - "9": {"class_type": "SaveImage", - "inputs": {"filename_prefix": "ComfyUI", "images": ["8", 0]}}, - } - - -@pytest.fixture -def workflow_file(tmp_path, sample_workflow): - p = tmp_path / "landscape.json" - p.write_text(json.dumps(sample_workflow)) - return str(p) - - -class TestFullGenerationWorkflow: - """Simulate complete generate -> check -> download workflow.""" - - def test_queue_and_check_status(self, runner, workflow_file): - """Full flow: validate workflow -> queue it -> check queue status.""" - prompt_id = "e2e-prompt-001" - - queue_response = {"prompt_id": prompt_id, "number": 0, "node_errors": {}} - status_response = { - "queue_running": [["e2e-prompt-001", {}, {}, {}]], - "queue_pending": [], - } - - with patch("cli_anything.comfyui.core.queue.api_post", return_value=queue_response), \ - patch("cli_anything.comfyui.core.queue.api_get", return_value=status_response): - - # 
Step 1: Validate - result = runner.invoke(cli, ["workflow", "validate", workflow_file]) - assert result.exit_code == 0 - - # Step 2: Queue - result = runner.invoke(cli, ["queue", "prompt", "--workflow", workflow_file]) - assert result.exit_code == 0 - assert prompt_id in result.output - - # Step 3: Check status - result = runner.invoke(cli, ["queue", "status"]) - assert result.exit_code == 0 - - def test_queue_then_download(self, runner, workflow_file, tmp_path): - """Full flow: queue -> list outputs -> download image.""" - prompt_id = "e2e-prompt-002" - img_filename = "ComfyUI_00001_.png" - fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 128 - - history_response = { - prompt_id: { - "outputs": { - "9": { - "images": [ - {"filename": img_filename, "subfolder": "", "type": "output"} - ] - } - }, - "status": {"status_str": "success", "completed": True} - } - } - - dest = str(tmp_path / "downloaded.png") - - with patch("cli_anything.comfyui.core.queue.api_post", - return_value={"prompt_id": prompt_id, "number": 0, "node_errors": {}}), \ - patch("cli_anything.comfyui.core.queue.api_get", return_value=history_response), \ - patch("cli_anything.comfyui.core.images.api_get_raw", return_value=fake_png): - - # Queue - result = runner.invoke(cli, ["queue", "prompt", "--workflow", workflow_file]) - assert result.exit_code == 0 - - # List outputs - result = runner.invoke(cli, ["images", "list", "--prompt-id", prompt_id]) - assert result.exit_code == 0 - assert img_filename in result.output - - # Download - result = runner.invoke(cli, [ - "images", "download", - "--filename", img_filename, - "--output", dest, - ]) - assert result.exit_code == 0 - assert Path(dest).read_bytes() == fake_png - - def test_json_mode_full_flow(self, runner, workflow_file): - """All commands in --json mode should produce valid JSON throughout.""" - prompt_id = "e2e-json-003" - queue_response = {"prompt_id": prompt_id, "number": 0, "node_errors": {}} - - with 
patch("cli_anything.comfyui.core.queue.api_post", return_value=queue_response): - result = runner.invoke(cli, ["--json", "queue", "prompt", "--workflow", workflow_file]) - assert result.exit_code == 0 - data = json.loads(result.output) - assert data["prompt_id"] == prompt_id - - def test_interrupt_generation(self, runner): - """queue interrupt should stop current generation.""" - with patch("cli_anything.comfyui.core.queue.api_post", return_value={}): - result = runner.invoke(cli, ["queue", "interrupt"]) - assert result.exit_code == 0 - assert "interrupted" in result.output - - def test_clear_queue_and_verify(self, runner): - """Clear queue then verify it is empty.""" - empty_status = {"queue_running": [], "queue_pending": []} - - with patch("cli_anything.comfyui.core.queue.api_delete", return_value={}), \ - patch("cli_anything.comfyui.core.queue.api_get", return_value=empty_status): - - result = runner.invoke(cli, ["queue", "clear", "--confirm"]) - assert result.exit_code == 0 - assert "cleared" in result.output - - result = runner.invoke(cli, ["--json", "queue", "status"]) - assert result.exit_code == 0 - data = json.loads(result.output) - assert data["running_count"] == 0 - assert data["pending_count"] == 0 - - -class TestModelDiscovery: - """Test model listing as part of setup workflow.""" - - def test_discover_all_model_types(self, runner): - """Should list all four model types without error.""" - ckpt_resp = {"CheckpointLoaderSimple": {"input": {"required": {"ckpt_name": [["model_a.ckpt"], {}]}}}} - lora_resp = {"LoraLoader": {"input": {"required": {"lora_name": [["lora_style.safetensors"], {}]}}}} - vae_resp = {"VAELoader": {"input": {"required": {"vae_name": [["vae.ckpt"], {}]}}}} - cn_resp = {"ControlNetLoader": {"input": {"required": {"control_net_name": [["canny.pth"], {}]}}}} - - with patch("cli_anything.comfyui.core.models.api_get") as mock_api: - mock_api.side_effect = [ckpt_resp, lora_resp, vae_resp, cn_resp] - - for cmd in [ - ["models", 
"checkpoints"], - ["models", "loras"], - ["models", "vaes"], - ["models", "controlnets"], - ]: - result = runner.invoke(cli, cmd) - assert result.exit_code == 0, f"Failed on: {cmd} โ€” {result.output}" - - -class TestErrorHandling: - """Test error scenarios are handled gracefully.""" - - def test_connection_refused_shows_error(self, runner, workflow_file): - """Should show friendly error when ComfyUI is not running.""" - with patch("cli_anything.comfyui.core.queue.api_post", - side_effect=RuntimeError("Cannot connect to ComfyUI at http://localhost:8188. Is ComfyUI running?")): - result = runner.invoke(cli, ["queue", "prompt", "--workflow", workflow_file]) - - assert result.exit_code != 0 - assert "Cannot connect" in result.output or "Error" in result.output - - def test_server_rejects_workflow_shows_error(self, runner, workflow_file): - """Should show error message when server rejects the workflow.""" - with patch("cli_anything.comfyui.core.queue.api_post", - return_value={"error": {"message": "Node not found: BadNode", "type": "value_error"}}): - result = runner.invoke(cli, ["queue", "prompt", "--workflow", workflow_file]) - - assert result.exit_code != 0 - assert "Error" in result.output or "rejected" in result.output - - def test_nonexistent_workflow_shows_error(self, runner): - """Should error when workflow file does not exist.""" - result = runner.invoke(cli, ["queue", "prompt", "--workflow", "/nonexistent.json"]) - assert result.exit_code != 0 - - def test_download_missing_image_shows_error(self, runner, tmp_path): - """Should error when trying to download non-existent image.""" - with patch("cli_anything.comfyui.core.images.api_get_raw", - side_effect=RuntimeError("ComfyUI API error 404")): - result = runner.invoke(cli, [ - "images", "download", - "--filename", "nonexistent.png", - "--output", str(tmp_path / "out.png"), - ]) - - assert result.exit_code != 0 +"""Full end-to-end tests for ComfyUI CLI harness. 
+ +These tests simulate a complete generation workflow using mocked HTTP responses. +They do NOT require ComfyUI to be installed or running. + +Run with: + python -m pytest comfyui/tests/test_full_e2e.py -v +""" + +import json +from pathlib import Path +from unittest.mock import patch +import pytest +from click.testing import CliRunner + +from cli_anything.comfyui.comfyui_cli import cli + + +@pytest.fixture +def runner(): + return CliRunner() + + +@pytest.fixture +def sample_workflow(): + return { + "4": {"class_type": "CheckpointLoaderSimple", + "inputs": {"ckpt_name": "v1-5-pruned-emaonly.ckpt"}}, + "6": {"class_type": "CLIPTextEncode", + "inputs": {"text": "a beautiful landscape", "clip": ["4", 1]}}, + "7": {"class_type": "CLIPTextEncode", + "inputs": {"text": "ugly, bad", "clip": ["4", 1]}}, + "5": {"class_type": "EmptyLatentImage", + "inputs": {"batch_size": 1, "height": 512, "width": 512}}, + "3": {"class_type": "KSampler", + "inputs": {"cfg": 7, "denoise": 1, "model": ["4", 0], + "negative": ["7", 0], "positive": ["6", 0], + "latent_image": ["5", 0], "sampler_name": "euler", + "scheduler": "normal", "seed": 12345, "steps": 20}}, + "8": {"class_type": "VAEDecode", + "inputs": {"samples": ["3", 0], "vae": ["4", 2]}}, + "9": {"class_type": "SaveImage", + "inputs": {"filename_prefix": "ComfyUI", "images": ["8", 0]}}, + } + + +@pytest.fixture +def workflow_file(tmp_path, sample_workflow): + p = tmp_path / "landscape.json" + p.write_text(json.dumps(sample_workflow)) + return str(p) + + +class TestFullGenerationWorkflow: + """Simulate complete generate -> check -> download workflow.""" + + def test_queue_and_check_status(self, runner, workflow_file): + """Full flow: validate workflow -> queue it -> check queue status.""" + prompt_id = "e2e-prompt-001" + + queue_response = {"prompt_id": prompt_id, "number": 0, "node_errors": {}} + status_response = { + "queue_running": [["e2e-prompt-001", {}, {}, {}]], + "queue_pending": [], + } + + with 
patch("cli_anything.comfyui.core.queue.api_post", return_value=queue_response), \ + patch("cli_anything.comfyui.core.queue.api_get", return_value=status_response): + + # Step 1: Validate + result = runner.invoke(cli, ["workflow", "validate", workflow_file]) + assert result.exit_code == 0 + + # Step 2: Queue + result = runner.invoke(cli, ["queue", "prompt", "--workflow", workflow_file]) + assert result.exit_code == 0 + assert prompt_id in result.output + + # Step 3: Check status + result = runner.invoke(cli, ["queue", "status"]) + assert result.exit_code == 0 + + def test_queue_then_download(self, runner, workflow_file, tmp_path): + """Full flow: queue -> list outputs -> download image.""" + prompt_id = "e2e-prompt-002" + img_filename = "ComfyUI_00001_.png" + fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 128 + + history_response = { + prompt_id: { + "outputs": { + "9": { + "images": [ + {"filename": img_filename, "subfolder": "", "type": "output"} + ] + } + }, + "status": {"status_str": "success", "completed": True} + } + } + + dest = str(tmp_path / "downloaded.png") + + with patch("cli_anything.comfyui.core.queue.api_post", + return_value={"prompt_id": prompt_id, "number": 0, "node_errors": {}}), \ + patch("cli_anything.comfyui.core.queue.api_get", return_value=history_response), \ + patch("cli_anything.comfyui.core.images.api_get_raw", return_value=fake_png): + + # Queue + result = runner.invoke(cli, ["queue", "prompt", "--workflow", workflow_file]) + assert result.exit_code == 0 + + # List outputs + result = runner.invoke(cli, ["images", "list", "--prompt-id", prompt_id]) + assert result.exit_code == 0 + assert img_filename in result.output + + # Download + result = runner.invoke(cli, [ + "images", "download", + "--filename", img_filename, + "--output", dest, + ]) + assert result.exit_code == 0 + assert Path(dest).read_bytes() == fake_png + + def test_json_mode_full_flow(self, runner, workflow_file): + """All commands in --json mode should produce valid JSON 
throughout.""" + prompt_id = "e2e-json-003" + queue_response = {"prompt_id": prompt_id, "number": 0, "node_errors": {}} + + with patch("cli_anything.comfyui.core.queue.api_post", return_value=queue_response): + result = runner.invoke(cli, ["--json", "queue", "prompt", "--workflow", workflow_file]) + assert result.exit_code == 0 + data = json.loads(result.output) + assert data["prompt_id"] == prompt_id + + def test_interrupt_generation(self, runner): + """queue interrupt should stop current generation.""" + with patch("cli_anything.comfyui.core.queue.api_post", return_value={}): + result = runner.invoke(cli, ["queue", "interrupt"]) + assert result.exit_code == 0 + assert "interrupted" in result.output + + def test_clear_queue_and_verify(self, runner): + """Clear queue then verify it is empty.""" + empty_status = {"queue_running": [], "queue_pending": []} + + with patch("cli_anything.comfyui.core.queue.api_delete", return_value={}), \ + patch("cli_anything.comfyui.core.queue.api_get", return_value=empty_status): + + result = runner.invoke(cli, ["queue", "clear", "--confirm"]) + assert result.exit_code == 0 + assert "cleared" in result.output + + result = runner.invoke(cli, ["--json", "queue", "status"]) + assert result.exit_code == 0 + data = json.loads(result.output) + assert data["running_count"] == 0 + assert data["pending_count"] == 0 + + +class TestModelDiscovery: + """Test model listing as part of setup workflow.""" + + def test_discover_all_model_types(self, runner): + """Should list all four model types without error.""" + ckpt_resp = {"CheckpointLoaderSimple": {"input": {"required": {"ckpt_name": [["model_a.ckpt"], {}]}}}} + lora_resp = {"LoraLoader": {"input": {"required": {"lora_name": [["lora_style.safetensors"], {}]}}}} + vae_resp = {"VAELoader": {"input": {"required": {"vae_name": [["vae.ckpt"], {}]}}}} + cn_resp = {"ControlNetLoader": {"input": {"required": {"control_net_name": [["canny.pth"], {}]}}}} + + with 
patch("cli_anything.comfyui.core.models.api_get") as mock_api: + mock_api.side_effect = [ckpt_resp, lora_resp, vae_resp, cn_resp] + + for cmd in [ + ["models", "checkpoints"], + ["models", "loras"], + ["models", "vaes"], + ["models", "controlnets"], + ]: + result = runner.invoke(cli, cmd) + assert result.exit_code == 0, f"Failed on: {cmd} โ€” {result.output}" + + +class TestErrorHandling: + """Test error scenarios are handled gracefully.""" + + def test_connection_refused_shows_error(self, runner, workflow_file): + """Should show friendly error when ComfyUI is not running.""" + with patch("cli_anything.comfyui.core.queue.api_post", + side_effect=RuntimeError("Cannot connect to ComfyUI at http://localhost:8188. Is ComfyUI running?")): + result = runner.invoke(cli, ["queue", "prompt", "--workflow", workflow_file]) + + assert result.exit_code != 0 + assert "Cannot connect" in result.output or "Error" in result.output + + def test_server_rejects_workflow_shows_error(self, runner, workflow_file): + """Should show error message when server rejects the workflow.""" + with patch("cli_anything.comfyui.core.queue.api_post", + return_value={"error": {"message": "Node not found: BadNode", "type": "value_error"}}): + result = runner.invoke(cli, ["queue", "prompt", "--workflow", workflow_file]) + + assert result.exit_code != 0 + assert "Error" in result.output or "rejected" in result.output + + def test_nonexistent_workflow_shows_error(self, runner): + """Should error when workflow file does not exist.""" + result = runner.invoke(cli, ["queue", "prompt", "--workflow", "/nonexistent.json"]) + assert result.exit_code != 0 + + def test_download_missing_image_shows_error(self, runner, tmp_path): + """Should error when trying to download non-existent image.""" + with patch("cli_anything.comfyui.core.images.api_get_raw", + side_effect=RuntimeError("ComfyUI API error 404")): + result = runner.invoke(cli, [ + "images", "download", + "--filename", "nonexistent.png", + "--output", 
str(tmp_path / "out.png"), + ]) + + assert result.exit_code != 0 diff --git a/comfyui/agent-harness/cli_anything/comfyui/utils/comfyui_backend.py b/comfyui/agent-harness/cli_anything/comfyui/utils/comfyui_backend.py index 49988a6705..ac3a5ef8a2 100644 --- a/comfyui/agent-harness/cli_anything/comfyui/utils/comfyui_backend.py +++ b/comfyui/agent-harness/cli_anything/comfyui/utils/comfyui_backend.py @@ -1,156 +1,156 @@ -"""ComfyUI API backend โ€” wraps ComfyUI REST API HTTP calls. - -This module handles all HTTP communication with the ComfyUI server. -It is the only module that makes network requests. - -ComfyUI runs a local HTTP server (default: http://localhost:8188). -No authentication is required by default. -""" - -import requests -from typing import Any - -# Default ComfyUI server URL -DEFAULT_BASE_URL = "http://localhost:8188" - - -def api_get(base_url: str, endpoint: str, params: dict | None = None) -> Any: - """Perform a GET request against the ComfyUI API. - - Args: - base_url: ComfyUI server base URL (e.g., 'http://localhost:8188'). - endpoint: API endpoint path (e.g., '/queue'). - params: Optional query parameters. - - Returns: - Parsed JSON response as a dict or list. - - Raises: - RuntimeError: On HTTP error or connection failure. - """ - url = f"{base_url.rstrip('/')}{endpoint}" - try: - resp = requests.get(url, params=params, timeout=30) - resp.raise_for_status() - if resp.status_code == 204 or not resp.content: - return {"status": "ok"} - return resp.json() - except requests.exceptions.ConnectionError as e: - raise RuntimeError( - f"Cannot connect to ComfyUI at {base_url}. " - "Is ComfyUI running? 
Start it with: python main.py" - ) from e - except requests.exceptions.HTTPError as e: - raise RuntimeError( - f"ComfyUI API error {resp.status_code} on GET {endpoint}: {resp.text}" - ) from e - except requests.exceptions.Timeout as e: - raise RuntimeError( - f"Request to ComfyUI timed out: GET {endpoint}" - ) from e - - -def api_post(base_url: str, endpoint: str, data: dict | None = None) -> Any: - """Perform a POST request against the ComfyUI API. - - Args: - base_url: ComfyUI server base URL. - endpoint: API endpoint path. - data: JSON request body. - - Returns: - Parsed JSON response. - - Raises: - RuntimeError: On HTTP error or connection failure. - """ - url = f"{base_url.rstrip('/')}{endpoint}" - try: - resp = requests.post(url, json=data, timeout=30) - resp.raise_for_status() - if resp.status_code == 204 or not resp.content: - return {"status": "ok"} - return resp.json() - except requests.exceptions.ConnectionError as e: - raise RuntimeError( - f"Cannot connect to ComfyUI at {base_url}. " - "Is ComfyUI running? Start it with: python main.py" - ) from e - except requests.exceptions.HTTPError as e: - raise RuntimeError( - f"ComfyUI API error {resp.status_code} on POST {endpoint}: {resp.text}" - ) from e - except requests.exceptions.Timeout as e: - raise RuntimeError( - f"Request to ComfyUI timed out: POST {endpoint}" - ) from e - - -def api_delete(base_url: str, endpoint: str, data: dict | None = None) -> Any: - """Perform a DELETE request against the ComfyUI API. - - Args: - base_url: ComfyUI server base URL. - endpoint: API endpoint path. - data: Optional JSON request body. - - Returns: - Parsed JSON response or status dict. - - Raises: - RuntimeError: On HTTP error or connection failure. 
- """ - url = f"{base_url.rstrip('/')}{endpoint}" - try: - resp = requests.delete(url, json=data, timeout=30) - resp.raise_for_status() - if resp.status_code == 204 or not resp.content: - return {"status": "ok"} - return resp.json() - except requests.exceptions.ConnectionError as e: - raise RuntimeError( - f"Cannot connect to ComfyUI at {base_url}. " - "Is ComfyUI running? Start it with: python main.py" - ) from e - except requests.exceptions.HTTPError as e: - raise RuntimeError( - f"ComfyUI API error {resp.status_code} on DELETE {endpoint}: {resp.text}" - ) from e - except requests.exceptions.Timeout as e: - raise RuntimeError( - f"Request to ComfyUI timed out: DELETE {endpoint}" - ) from e - - -def api_get_raw(base_url: str, endpoint: str, params: dict | None = None) -> bytes: - """Perform a GET request and return raw bytes (for image downloads). - - Args: - base_url: ComfyUI server base URL. - endpoint: API endpoint path. - params: Optional query parameters. - - Returns: - Raw response bytes. - - Raises: - RuntimeError: On HTTP error or connection failure. - """ - url = f"{base_url.rstrip('/')}{endpoint}" - try: - resp = requests.get(url, params=params, timeout=60) - resp.raise_for_status() - return resp.content - except requests.exceptions.ConnectionError as e: - raise RuntimeError( - f"Cannot connect to ComfyUI at {base_url}. " - "Is ComfyUI running? Start it with: python main.py" - ) from e - except requests.exceptions.HTTPError as e: - raise RuntimeError( - f"ComfyUI API error {resp.status_code} on GET {endpoint}: {resp.text}" - ) from e - except requests.exceptions.Timeout as e: - raise RuntimeError( - f"Request to ComfyUI timed out: GET {endpoint}" - ) from e +"""ComfyUI API backend โ€” wraps ComfyUI REST API HTTP calls. + +This module handles all HTTP communication with the ComfyUI server. +It is the only module that makes network requests. + +ComfyUI runs a local HTTP server (default: http://localhost:8188). +No authentication is required by default. 
+""" + +import requests +from typing import Any + +# Default ComfyUI server URL +DEFAULT_BASE_URL = "http://localhost:8188" + + +def api_get(base_url: str, endpoint: str, params: dict | None = None) -> Any: + """Perform a GET request against the ComfyUI API. + + Args: + base_url: ComfyUI server base URL (e.g., 'http://localhost:8188'). + endpoint: API endpoint path (e.g., '/queue'). + params: Optional query parameters. + + Returns: + Parsed JSON response as a dict or list. + + Raises: + RuntimeError: On HTTP error or connection failure. + """ + url = f"{base_url.rstrip('/')}{endpoint}" + try: + resp = requests.get(url, params=params, timeout=30) + resp.raise_for_status() + if resp.status_code == 204 or not resp.content: + return {"status": "ok"} + return resp.json() + except requests.exceptions.ConnectionError as e: + raise RuntimeError( + f"Cannot connect to ComfyUI at {base_url}. " + "Is ComfyUI running? Start it with: python main.py" + ) from e + except requests.exceptions.HTTPError as e: + raise RuntimeError( + f"ComfyUI API error {resp.status_code} on GET {endpoint}: {resp.text}" + ) from e + except requests.exceptions.Timeout as e: + raise RuntimeError( + f"Request to ComfyUI timed out: GET {endpoint}" + ) from e + + +def api_post(base_url: str, endpoint: str, data: dict | None = None) -> Any: + """Perform a POST request against the ComfyUI API. + + Args: + base_url: ComfyUI server base URL. + endpoint: API endpoint path. + data: JSON request body. + + Returns: + Parsed JSON response. + + Raises: + RuntimeError: On HTTP error or connection failure. + """ + url = f"{base_url.rstrip('/')}{endpoint}" + try: + resp = requests.post(url, json=data, timeout=30) + resp.raise_for_status() + if resp.status_code == 204 or not resp.content: + return {"status": "ok"} + return resp.json() + except requests.exceptions.ConnectionError as e: + raise RuntimeError( + f"Cannot connect to ComfyUI at {base_url}. " + "Is ComfyUI running? 
Start it with: python main.py" + ) from e + except requests.exceptions.HTTPError as e: + raise RuntimeError( + f"ComfyUI API error {resp.status_code} on POST {endpoint}: {resp.text}" + ) from e + except requests.exceptions.Timeout as e: + raise RuntimeError( + f"Request to ComfyUI timed out: POST {endpoint}" + ) from e + + +def api_delete(base_url: str, endpoint: str, data: dict | None = None) -> Any: + """Perform a DELETE request against the ComfyUI API. + + Args: + base_url: ComfyUI server base URL. + endpoint: API endpoint path. + data: Optional JSON request body. + + Returns: + Parsed JSON response or status dict. + + Raises: + RuntimeError: On HTTP error or connection failure. + """ + url = f"{base_url.rstrip('/')}{endpoint}" + try: + resp = requests.delete(url, json=data, timeout=30) + resp.raise_for_status() + if resp.status_code == 204 or not resp.content: + return {"status": "ok"} + return resp.json() + except requests.exceptions.ConnectionError as e: + raise RuntimeError( + f"Cannot connect to ComfyUI at {base_url}. " + "Is ComfyUI running? Start it with: python main.py" + ) from e + except requests.exceptions.HTTPError as e: + raise RuntimeError( + f"ComfyUI API error {resp.status_code} on DELETE {endpoint}: {resp.text}" + ) from e + except requests.exceptions.Timeout as e: + raise RuntimeError( + f"Request to ComfyUI timed out: DELETE {endpoint}" + ) from e + + +def api_get_raw(base_url: str, endpoint: str, params: dict | None = None) -> bytes: + """Perform a GET request and return raw bytes (for image downloads). + + Args: + base_url: ComfyUI server base URL. + endpoint: API endpoint path. + params: Optional query parameters. + + Returns: + Raw response bytes. + + Raises: + RuntimeError: On HTTP error or connection failure. 
+ """ + url = f"{base_url.rstrip('/')}{endpoint}" + try: + resp = requests.get(url, params=params, timeout=60) + resp.raise_for_status() + return resp.content + except requests.exceptions.ConnectionError as e: + raise RuntimeError( + f"Cannot connect to ComfyUI at {base_url}. " + "Is ComfyUI running? Start it with: python main.py" + ) from e + except requests.exceptions.HTTPError as e: + raise RuntimeError( + f"ComfyUI API error {resp.status_code} on GET {endpoint}: {resp.text}" + ) from e + except requests.exceptions.Timeout as e: + raise RuntimeError( + f"Request to ComfyUI timed out: GET {endpoint}" + ) from e diff --git a/qoder-plugin/setup-qodercli.sh b/qoder-plugin/setup-qodercli.sh old mode 100755 new mode 100644 diff --git a/registry.json b/registry.json index 53a7b6107c..f13920bb4f 100644 --- a/registry.json +++ b/registry.json @@ -783,6 +783,20 @@ } ] }, + { + "name": "calibre", + "display_name": "calibre", + "version": "1.0.0", + "description": "Ebook library management, metadata editing, export, and format conversion via calibredb/ebook-meta/ebook-convert", + "requires": "calibre installed (calibredb, ebook-convert, ebook-meta on PATH)", + "homepage": "https://calibre-ebook.com", + "install_cmd": "pip install git+https://github.com/HKUDS/CLI-Anything.git#subdirectory=calibre/agent-harness", + "entry_point": "cli-anything-calibre", + "skill_md": "calibre/agent-harness/cli_anything/calibre/skills/SKILL.md", + "category": "office", + "contributor": "CLI-Anything-Team", + "contributor_url": "https://github.com/HKUDS/CLI-Anything" + }, { "name": "cloudcompare", "display_name": "CloudCompare", diff --git a/shotcut/agent-harness/cli_anything/shotcut/shotcut_cli.py b/shotcut/agent-harness/cli_anything/shotcut/shotcut_cli.py old mode 100755 new mode 100644 diff --git a/shotcut/agent-harness/examples/workflow_basic.sh b/shotcut/agent-harness/examples/workflow_basic.sh old mode 100755 new mode 100644 diff --git a/sketch/agent-harness/src/cli.js 
b/sketch/agent-harness/src/cli.js
old mode 100755
new mode 100644
diff --git a/skills/cli-anything-calibre/SKILL.md b/skills/cli-anything-calibre/SKILL.md
new file mode 100644
index 0000000000..f8866cd9a3
--- /dev/null
+++ b/skills/cli-anything-calibre/SKILL.md
@@ -0,0 +1,155 @@
+---
+name: "cli-anything-calibre"
+description: "Agent-facing calibre command line: manage the library, edit metadata, export books, and convert formats (built on calibredb / ebook-meta / ebook-convert)."
+---
+
+# cli-anything-calibre
+
+Stateful CLI harness for calibre.
+
+## Installation
+
+This CLI is installed as part of the cli-anything-calibre package:
+
+```bash
+pip install cli-anything-calibre
+```
+
+**Prerequisites:**
+- Python 3.10+
+- Calibre must be installed on your system
+
+## Usage
+
+### Basic Commands
+
+```bash
+# Show help
+cli-anything-calibre --help
+
+# Start interactive REPL mode
+cli-anything-calibre
+```
+
+### JSON mode (for agents)
+
+Use `--json` to get machine-readable output for all commands.
+
+```bash
+cli-anything-calibre --json --library "D:/Books/Calibre Library" library info
+cli-anything-calibre --json --library "D:/Books/Calibre Library" book list --search "title:Python" --limit 5
+```
+
+## Command Groups
+
+### Library
+
+Library management commands.
+
+Common subcommands:
+- `library open <path>`
+- `library info`
+- `library list-fields`
+- `library stats`
+
+### Book
+
+Book management commands.
+
+Common subcommands:
+- `book add <file> [--title ...] [--authors ...] [--tags ...] [--series ...] [--duplicate]`
+- `book list [--search ...] [--limit ...] [--sort-by ...] [--ascending]`
+- `book get <id>`
+- `book search <query> [--limit ...]`
+- `book set-field <id> [--title ...] [--authors ...] [--tags ...]`
+- `book remove <id> [--permanent]`
+
+### Meta
+
+Standalone ebook metadata commands.
+
+Common subcommands:
+- `meta show <file>`
+- `meta set <file> [--title ...] [--authors ...] [--tags ...] [--comments ...] [--language ...] [--publisher ...] [--cover ...]`
+- `meta set-cover <file> <cover>`
+- `meta clear <file> [--comments] [--tags]`
+
+### Convert
+
+Format conversion commands.
+
+Common subcommands:
+- `convert formats`
+- `convert presets`
+- `convert run <input> <output> [--preset kindle|tablet|generic-epub] [--extra-arg ...]`
+
+### Export
+
+Export and backup commands.
+
+Common subcommands:
+- `export book <id> --to-dir <dir> [--single-dir] [--formats ...]`
+- `export catalog <output> [--search ...]`
+- `export backup <dir> [--all]`
+
+### Session
+
+Session management commands.
+
+Common subcommands:
+- `session status`
+- `session undo`
+- `session redo`
+- `session history`
+- `session save`
+
+## Examples
+
+### Open a library and inspect
+
+```bash
+cli-anything-calibre library open "D:/Books/Calibre Library"
+cli-anything-calibre --json library stats
+cli-anything-calibre --json book list --limit 5
+```
+
+### Interactive REPL Session
+
+Start an interactive session with undo/redo support.
+
+```bash
+cli-anything-calibre
+# Enter commands interactively
+# Use 'help' to see available commands
+# Use 'undo' and 'redo' for history navigation
+```
+
+### Ingest → search → export → convert (workflow)
+
+```bash
+# Add a book file into the library
+cli-anything-calibre --json --library "D:/Books/Calibre Library" book add "D:/tmp/book.epub" --title "My Book" --authors "Me"
+
+# Search and pick a book id
+cli-anything-calibre --json --library "D:/Books/Calibre Library" book search "title:My Book" --limit 5
+
+# Export the book files
+cli-anything-calibre --json --library "D:/Books/Calibre Library" export book 1 --to-dir "D:/tmp/exported" --single-dir
+
+# Convert EPUB to MOBI
+cli-anything-calibre --json convert run "D:/tmp/exported/My Book.epub" "D:/tmp/converted/My Book.mobi" --preset kindle
+```
+
+## For AI Agents
+
+When using this CLI programmatically:
+
+1. **Always use `--json` flag** for parseable output
+2. **Check return codes** - 0 for success, non-zero for errors
+3. **Parse stderr** for error messages on failure
+4. **Use absolute paths** for all file operations (recommended on Windows)
+5. **Verify outputs exist** after export operations
+
+## Version
+
+1.0.0
\ No newline at end of file
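
The agent checklist in the SKILL.md above (use `--json`, check return codes, parse stderr on failure) can be sketched as a minimal subprocess wrapper. The stand-in command below is hypothetical — it echoes a fixed JSON payload so the sketch runs without Calibre installed; an agent would substitute the real `cli-anything-calibre --json ...` argument list.

```python
import json
import subprocess
import sys

def run_json_cli(args):
    """Run a --json-mode CLI and return the parsed payload.

    Mirrors the agent checklist: a non-zero return code is surfaced
    as an error carrying stderr; stdout is parsed as JSON.
    """
    proc = subprocess.run(args, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(f"CLI failed ({proc.returncode}): {proc.stderr.strip()}")
    return json.loads(proc.stdout)

# Hypothetical stand-in for `cli-anything-calibre --json library stats`,
# so this sketch is runnable anywhere; swap in the real CLI in practice.
stand_in = [sys.executable, "-c", 'print(\'{"book_count": 12}\')']
stats = run_json_cli(stand_in)
print(stats["book_count"])
```

The same wrapper covers every command group, since all of them honor the global `--json` flag.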