This document defines the user-facing contract for the binlogviz root command, binlogviz analyze, binlogviz compare, binlogviz trend, binlogviz snapshot, binlogviz workflow run, binlogviz workflow resume, binlogviz workflow status, binlogviz workflow clean, binlogviz workflow export, binlogviz workflow validate, and binlogviz workflow describe.
If you want the fastest operator path instead of the full contract, start with Quickstart or Analyze Local Binlogs.
binlogviz --version
binlogviz --lang zh-CN analyze <binlog files...>
binlogviz analyze <binlog files...>
binlogviz analyze --from-dir DIR --prefix PREFIX
binlogviz analyze --from-dir DIR --prefix PREFIX --format json --snapshot-name NAME
binlogviz compare <current.json> <baseline.json>
binlogviz compare --current-snapshot CURRENT --baseline-snapshot BASELINE
binlogviz trend <snapshot...>
binlogviz trend --from-snapshots 'incident-*'
binlogviz snapshot save <report.json> --name NAME
binlogviz snapshot list
binlogviz snapshot show <name>
binlogviz workflow run <plan.yaml>
binlogviz workflow run <plan.yaml> --output-dir ./artifacts
binlogviz workflow resume <output_dir>
binlogviz workflow resume <output_dir> --rerun analyze:week2
binlogviz workflow status <output_dir>
binlogviz workflow status <output_dir> --format json
binlogviz workflow clean <output_dir>
binlogviz workflow clean <output_dir> --format json
binlogviz workflow clean <output_dir> --apply --include-snapshots
binlogviz workflow export <output_dir>
binlogviz workflow export <output_dir> --output ./incident.zip
binlogviz workflow export <output_dir> --include-snapshots --format json
binlogviz workflow validate <plan.yaml>
binlogviz workflow validate <plan.yaml> --format json
binlogviz workflow describe <plan.yaml>
binlogviz workflow describe <plan.yaml> --format json

These flags are available on the root command and apply before subcommand execution:
| Flag | Default | Description |
|---|---|---|
| `--lang` | env-detected | Runtime output language, for example `en` or `zh-CN`. |
| `--version`, `-v` | false | Print the version string and exit. |
analyze accepts exactly one input mode per invocation:
- Positional file mode: pass one or more local binlog file paths as positional arguments.
- Discovery mode: pass `--from-dir` and `--prefix` together so BinlogViz resolves matching files from a directory.
Use positional arguments when you already know the exact files to analyze.
binlogviz analyze mysql-bin.000123
binlogviz analyze mysql-bin.000123 mysql-bin.000124

Use discovery mode when you want BinlogViz to resolve an ordered file set for you.

binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin.

- Positional file arguments and discovery flags are mutually exclusive.
- `--from-dir` and `--prefix` must be provided together.
- If neither positional files nor a complete discovery pair is provided, the command fails.
For the exact discovery matching, ordering, resolved-file reporting, and invalid-combination contract, see Input Discovery Reference.
| Flag | Default | Description |
|---|---|---|
| `--start` | none | Start time, inclusive, in RFC3339 format. |
| `--end` | none | End time, inclusive, in RFC3339 format. |
| `--from-dir` | none | Discover binlog files from this directory. Must be used with `--prefix`. |
| `--prefix` | none | Filename prefix used with `--from-dir`. Must be used with `--from-dir`. |
| `--format` | text | Report output format: `text`, `json`, `markdown` (alias `md`), or `html`. |
| `--snapshot-name` | none | Save the JSON analyze output as `<name>.json`. Requires `--format json`. |
| `--snapshot-dir` | home-based default | Directory used when saving a snapshot. Default: `~/.binlogviz/snapshots`. |
| `--sql-context` | summary | SQL context presentation mode: `summary`, `off`, or `full`. |
| `--top-tables` | 10 | Number of top tables to include in the report. |
| `--top-transactions` | 10 | Number of top transactions to include in the report. |
| `--detect-spikes` | false | Enable write spike detection. |
| `--large-trx-rows` | 1000 | Row threshold for large transaction alerts. |
| `--large-trx-duration` | 30s | Duration threshold for large transaction alerts. |
| `--top-minutes` | 60 | Number of top active minutes to include in the report. |
| `--spike-window` | 5 | Rolling baseline window in minutes for spike detection. |
| `--spike-factor` | 3.0 | Multiplier over baseline to trigger a spike alert. |
| `--spike-min-rows` | 100 | Minimum row count for a minute to be considered a spike candidate. |
| `--include-schema` | none | Comma-separated list of schemas to analyze (all others excluded). |
| `--exclude-schema` | none | Comma-separated list of schemas to skip. |
| `--include-table` | none | Comma-separated list of tables to analyze (all others excluded). |
| `--exclude-table` | none | Comma-separated list of tables to skip. |
analyze can optionally persist the exact JSON payload it writes to stdout.
binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin. \
--format json \
  --snapshot-name incident_current

Rules:

- `--snapshot-name` requires `--format json`
- the snapshot name must be a single file stem containing only letters, digits, `-`, or `_`
- `--snapshot-dir` overrides the default home-based snapshot store
- when `--snapshot-name` is present, the report still goes to `stdout`
- the save confirmation is written to `stderr`
If --snapshot-dir is omitted, BinlogViz saves to ~/.binlogviz/snapshots/<name>.json.
binlogviz compare <current.json> <baseline.json>
binlogviz compare <current.json> <baseline.json> --format text
binlogviz compare <current.json> <baseline.json> --format json
binlogviz compare <current.json> <baseline.json> --format html
binlogviz compare --current-snapshot current --baseline-snapshot baseline
binlogviz compare --current-snapshot current --baseline-snapshot baseline --snapshot-dir /tmp/binlogviz-snapshots

compare supports two input modes per invocation:
- File mode: exactly two positional JSON reports
- Snapshot mode: `--current-snapshot` plus `--baseline-snapshot`
compare accepts exactly two positional arguments in file mode:
- `current.json`: the current BinlogViz analysis report
- `baseline.json`: the baseline BinlogViz analysis report
Snapshot mode loads previously saved analyze JSON reports by name:
- `--current-snapshot`: snapshot name used as the current report
- `--baseline-snapshot`: snapshot name used as the baseline report
- `--snapshot-dir`: optional snapshot directory override; default is `~/.binlogviz/snapshots`
The command does not support discovery mode, binlog files, Markdown output, or mixing file mode with snapshot mode.
compare only accepts JSON reports generated by binlogviz analyze --format json, whether they are loaded from explicit files or from the snapshot store.
Validation rules:
- file mode requires both positional arguments
- snapshot mode requires both `--current-snapshot` and `--baseline-snapshot`
- file mode and snapshot mode cannot be combined
- each input must be readable local JSON
- each input must match the BinlogViz analyze JSON report shape
- the command rejects non-BinlogViz JSON and malformed JSON before rendering begins
Accepted flags:

| Flag | Default | Description |
|---|---|---|
| `--current-snapshot` | none | Snapshot name used as the current report in snapshot mode. |
| `--baseline-snapshot` | none | Snapshot name used as the baseline report in snapshot mode. |
| `--snapshot-dir` | home-based default | Snapshot directory used in snapshot mode. Default: `~/.binlogviz/snapshots`. |
| `--format` | text | Compare report output format: `text`, `json`, or `html`. |
Representative usage:
binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin. --format json --snapshot-name current > current.json
binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin. --format json --snapshot-name baseline > baseline.json
binlogviz compare --current-snapshot current --baseline-snapshot baseline
binlogviz compare --current-snapshot current --baseline-snapshot baseline --format json > compare.json
binlogviz compare --current-snapshot current --baseline-snapshot baseline --format html > compare.html
# Legacy file mode remains supported
binlogviz compare current.json baseline.json
binlogviz compare current.json baseline.json --format json > compare.json
binlogviz compare current.json baseline.json --format html > compare.html

binlogviz trend <snapshot-a> <snapshot-b> [<snapshot-c> ...]
binlogviz trend <snapshot-a> <snapshot-b> --baseline-snapshot baseline
binlogviz trend --from-snapshots 'incident-*'
binlogviz trend --from-snapshots 'incident-*' --baseline-snapshot baseline --format html > trend.html

trend is snapshot-oriented and supports two mutually exclusive input modes per invocation:
- Explicit snapshot mode: two or more snapshot names as positional arguments
- Pattern mode: `--from-snapshots <pattern>` selects snapshot names from the snapshot store
Rules:

- explicit snapshot mode and pattern mode cannot be combined
- the resolved trend set must contain at least two snapshots
- trend points are always ordered by effective window start time ascending
- trend uses `snapshot.window.start_time` when present and falls back to `summary.start_time` for older snapshots
- `--baseline-snapshot` is optional and does not automatically become a trend point unless it was selected separately
- all trend formats include pattern trends; `text` and `json` expose `Top Pattern Trends` / `pattern_trends`, and `html` adds an interactive `Pattern Trends` section
- HTML trend output defaults to the `share of rows` view and lets you switch to absolute `rows`
Accepted flags:
| Flag | Default | Description |
|---|---|---|
| `--format` | text | Trend report output format: `text`, `json`, or `html`. |
| `--from-snapshots` | none | Pattern used to select snapshots by name from the snapshot store. |
| `--baseline-snapshot` | none | Optional snapshot used for per-point delta calculations. |
| `--snapshot-dir` | home-based default | Directory used when loading snapshots. Default: `~/.binlogviz/snapshots`. |
| `--top-tables` | 10 | Number of top-table trend series to include in trend output. |
binlogviz snapshot save <report.json> --name NAME
binlogviz snapshot save <report.json> --name NAME --snapshot-dir /tmp/binlogviz-snapshots
binlogviz snapshot list
binlogviz snapshot list --format json
binlogviz snapshot show <name>
binlogviz snapshot show <name> --format json
binlogviz snapshot rename <old-name> <new-name>
binlogviz snapshot delete <name>

The snapshot command manages analyze JSON reports stored by name.
snapshot save copies one analyze JSON report into the snapshot store.
Rules:

- `<report.json>` must be a local JSON file that matches the analyze report shape
- `--name` is required
- `--snapshot-dir` overrides the default `~/.binlogviz/snapshots` location
- successful saves print no payload to `stdout`
- successful saves print `Saved snapshot "<name>" to <path>` to `stderr`
Accepted flags:
| Flag | Default | Description |
|---|---|---|
| `--name` | none | Required snapshot name used as the stored snapshot identifier. |
| `--snapshot-dir` | home-based default | Directory used when saving the snapshot. Default: `~/.binlogviz/snapshots`. |
snapshot list supports two output modes:
- text mode (default): prints a human-readable table with `name`, `label`, `created_at`, `input_mode`, and `window`
- JSON mode: prints a machine-readable object with `snapshot_dir` and `snapshots`
Accepted flags:
- `--format text`
- `--format json`
- `--snapshot-dir /path/to/store`
snapshot show <name> supports two output modes:
- text mode (default): prints a small summary to `stdout`, including snapshot name, resolved path, identity metadata, filters, and top-level totals
- JSON mode: prints a machine-readable object containing the normalized snapshot descriptor under `snapshot`
Accepted flags:
- `--format text`
- `--format json`
- `--snapshot-dir /path/to/store`
snapshot rename <old-name> <new-name> renames a stored snapshot in the snapshot store.
Rules:
- both names must pass the same snapshot-name validation as `snapshot save`
- the command keeps the stored snapshot identity aligned with the new file name
- successful renames print `Renamed snapshot "<old>" to "<new>" at <path>` to `stderr`
snapshot delete <name> removes one stored snapshot from the snapshot store.
Rules:
- the name must pass snapshot-name validation
- successful deletes print `Deleted snapshot "<name>" at <path>` to `stderr`
`--start` and `--end` use RFC3339 timestamps.
binlogviz analyze mysql-bin.000123 \
--start "2026-03-15T10:00:00Z" \
--end "2026-03-15T10:30:00Z"Validation rules:
- An invalid `--start` value fails with an `invalid start time format` error.
- An invalid `--end` value fails with an `invalid end time format` error.
- If both are provided, `--end` must not be earlier than `--start`.
Validation happens before analysis starts:
- Missing files fail with `file not found: <path>`.
- Unreadable discovery directories fail with a directory read error.
- Discovery mode with no matches fails with `no matching binlog files found under <dir> with prefix "<prefix>"`.
- Invalid input mode combinations fail fast before parsing.
The command rejects these invalid combinations:
- positional files plus `--from-dir` or `--prefix`
- `--from-dir` without `--prefix`
- `--prefix` without `--from-dir`
- no positional files and no complete discovery pair
- `--snapshot-name` without `--format json`
Representative failure cases:
# Invalid: mixed input modes
binlogviz analyze mysql-bin.000123 --from-dir /var/lib/mysql --prefix mysql-bin.
# Invalid: incomplete discovery mode
binlogviz analyze --from-dir /var/lib/mysql

BinlogViz deliberately separates machine-consumable report output from operator-facing status output.
stdout is reserved for the final analysis report:
- text report by default
- JSON report when `--format json` is set
- Markdown report when `--format markdown` (or `--format md`) is set
- HTML report when `--format html` is set
This allows safe shell redirection and scripting.
binlogviz analyze mysql-bin.000123 --format json > report.json
binlogviz analyze mysql-bin.000123 --format markdown > report.md
binlogviz analyze mysql-bin.000123 --format html > report.html

stderr is used for operator-facing runtime status:
- parse progress output `Finalizing analysis...`
- resolved file listings in discovery mode
- snapshot save confirmations when `--snapshot-name` is used
- command errors
This keeps stdout clean for pipelines and file redirection.
For the exact channel contract and JSON field-level behavior, see Output Format Reference.
compare keeps the same report-to-stdout behavior, but it does not emit analyze-style progress output:
- `stdout` carries the final compare report
- `stderr` carries command errors
compare output by format:
- `text`: terminal-friendly compare summary with deltas
- `json`: machine-readable compare result
- `html`: self-contained visual compare report with charts
Examples:
binlogviz compare current.json baseline.json > compare.txt
binlogviz compare current.json baseline.json --format json > compare.json
binlogviz compare current.json baseline.json --format html > compare.html

For the compare output structure and user-visible report contents, see Output Format Reference.
snapshot subcommands use the channels below:
- `snapshot save`: no report payload on `stdout`; save confirmation on `stderr`
- `snapshot list`: snapshot names on `stdout`
- `snapshot show`: snapshot metadata and summary on `stdout`
- command failures: CLI error path on `stderr`
binlogviz analyze mysql-bin.000123
binlogviz analyze mysql-bin.000123 mysql-bin.000124
binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin.
binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin. --format json > analyze.json

binlogviz analyze mysql-bin.000123 mysql-bin.000124 \
--top-tables 20 \
--top-transactions 20 \
--top-minutes 30 \
--detect-spikes \
--spike-window 10 \
--spike-factor 5.0 \
--large-trx-rows 5000 \
--large-trx-duration 60s

# Only analyze a specific schema
binlogviz analyze mysql-bin.000123 --include-schema mydb
# Exclude system schemas
binlogviz analyze mysql-bin.000123 --exclude-schema mysql,sys,information_schema
# Only analyze specific tables
binlogviz analyze mysql-bin.000123 \
--include-schema mydb \
--include-table orders,payments

binlogviz compare current.json baseline.json
binlogviz compare current.json baseline.json --format json > compare.json
binlogviz compare current.json baseline.json --format html > compare.html

binlogviz workflow run <plan.yaml>
binlogviz workflow run <plan.yaml> --output-dir ./artifacts
binlogviz workflow run <plan.yaml> --snapshot-dir /tmp/snapshots

workflow run executes a declarative YAML plan that describes one or more analysis windows, optional compare jobs, and optional trend jobs. It produces a deterministic artifact tree plus a `manifest.json`. The manifest always includes a normalized `workflow_summary` object with `findings`, `recommendations`, and `warnings` arrays. That summary is rebuilt best-effort from successful compare/trend JSON artifacts only, so summary warnings never change workflow or step status semantics.
The plan file uses YAML with version: 1. The root sections are:
- `version` — required, must be `1`
- `workflow` — workflow name and output directory
- `defaults` — shared input source, analyze options, and snapshot settings
- `windows` — one or more named time windows to analyze
- `compare` — optional compare jobs referencing named windows
- `trend` — optional trend jobs referencing named windows
Example plan:
version: 1
workflow:
  name: incident-investigation
  output_dir: ./artifacts/incident
defaults:
  input:
    from_dir: /var/lib/mysql
    prefix: mysql-bin.
  analyze:
    format: json
    top_tables: 10
  snapshot:
    save: true
windows:
  - name: baseline
    start: 2026-04-09T10:00:00Z
    end: 2026-04-09T10:30:00Z
  - name: incident
    start: 2026-04-09T11:00:00Z
    end: 2026-04-09T11:30:00Z
compare:
  - name: incident_vs_baseline
    current: incident
    baseline: baseline
    formats: [json, html]
trend:
  - name: incident_series
    snapshots: [baseline, incident]
    formats: [json, html]

| Flag | Default | Description |
|---|---|---|
| `--output-dir` | plan-defined | Override the plan-defined output directory. |
| `--snapshot-dir` | home-based default | Override the snapshot storage directory. |
<output_dir>/
  index.html
  manifest.json
  analyze/
    baseline.json
    incident.json
  compare/
    incident_vs_baseline.json
    incident_vs_baseline.html
  trend/
    incident_series.json
    incident_series.html
- Validate and load the plan
- Create the output directory layout
- Run all analyze windows in plan order
- Run compare jobs in plan order
- Run trend jobs in plan order
- Write `manifest.json`
- Write `index.html`
workflow run persists a compact workflow-level rollup into manifest.json:
- `workflow_summary.findings`, `workflow_summary.recommendations`, and `workflow_summary.warnings` are always present as normalized arrays
- only successful `compare` and `trend` steps contribute summary items
- summary extraction reads JSON artifacts only
- findings and recommendations are deterministically deduplicated and capped at 5 items each
- missing required top-level arrays append warnings, while present-but-empty arrays remain valid and warning-free
- `index.html` prefers HTML source links for workflow summary items and falls back to JSON when no HTML artifact exists
- workflow summary evidence links append `#anchor` only for HTML source reports; JSON fallback links omit anchors
- summary rebuild is best-effort: missing, unreadable, or invalid summary sources append warning strings instead of failing the workflow
- summary warnings never change workflow or step status semantics
- Plan validation errors fail before any artifact is written
- Runtime step failures stop at the first failed step
- Already written artifacts remain on disk
- `manifest.json` is always written with `status: failed` and the failed step's error
- `index.html` is always written on both success and failure
- `stdout` is unused in v1
- `stderr` carries progress lines and the final manifest path
- `index.html` is written to `<output_dir>/index.html` as a self-contained workflow landing page showing workflow metadata, step status, and artifact links
binlogviz workflow status <output_dir>
binlogviz workflow status <output_dir> --format text
binlogviz workflow status <output_dir> --format json

workflow status inspects an existing workflow output directory without modifying it. It reads `manifest.json`, checks whether each artifact recorded in the manifest currently exists, carries through the persisted `workflow_summary`, determines whether the workflow root is resumable, and optionally builds a dry-run resume preview when the saved plan can be loaded.
The command is read-only:
- it never executes workflow steps
- it never rewrites `manifest.json`, `index.html`, or any artifact
- it never mutates runtime state on disk
| Flag | Default | Description |
|---|---|---|
| `--format` | text | Status output format: `text` or `json`. |
workflow status reports these top-level runtime facts:
- workflow name, output root, manifest version, mode, attempt, and manifest status
- `runtime_state`, which is `complete` when all recorded artifacts are present and, if the saved plan can be loaded, reusable snapshots needed for resume are intact; otherwise `incomplete`
- `resumable`, which is `true` only when the workflow root passes resume validation
- `resume_error`, which explains why resume is unavailable for legacy manifests, missing plan files, plan hash mismatches, invalid plan loads, or other resume guard failures
- persisted `workflow_summary`, with normalized `findings`, `recommendations`, and `warnings` arrays copied from the manifest without recomputation
- per-step artifact presence, using the recorded artifact paths from the manifest
- `resume_preview`, when the saved plan loads successfully and a dry resume plan can be derived
Legacy manifests remain inspectable. When the manifest is from the pre-v2 format, the command still renders status output but reports `resumable: false` and a non-empty `resume_error`.
workflow status only trusts workflow-local rooted plan references. When `manifest.plan_path` resolves outside the workflow root or escapes via symlinks, the plan is treated as untrusted: the command still succeeds and reports full status, but sets `resumable` to `false` and populates `resume_error` with a trust-boundary explanation. Outside-root and symlink-escaped plan paths are rejected before the file is opened. The trust check is performed by `ValidateWorkflowPlanPath(outputDir, planPath)`.
- supports `text` and `json` only
- fails before rendering when `<output_dir>/manifest.json` cannot be read
- keeps all output on `stdout`
- does not use `stderr` for progress reporting
- omits `resume_preview` when the plan is unavailable or cannot be loaded
binlogviz workflow clean <output_dir>
binlogviz workflow clean <output_dir> --format text
binlogviz workflow clean <output_dir> --format json
binlogviz workflow clean <output_dir> --apply
binlogviz workflow clean <output_dir> --apply --include-snapshots

workflow clean inspects one existing workflow root and reports or deletes orphaned generated files that are no longer referenced by the current `manifest.json`.
| Flag | Default | Description |
|---|---|---|
| `--apply` | false | Delete discovered cleanup candidates instead of only previewing them. |
| `--include-snapshots` | false | Include orphaned snapshot JSON files from `manifest.snapshot_dir`. |
| `--format` | text | Cleanup output format: `text` or `json`. |
workflow clean is intentionally narrow:
- it scans only workflow-generated directories: `analyze/`, `compare/`, and `trend/`
- it considers only known generated extensions in those directories
- it treats `steps[].artifacts` in the current manifest as the live artifact set
- it treats successful analyze `snapshot_name` values as the live snapshot set
- it never deletes `manifest.json`
- it never deletes `index.html`
- it never deletes plan files
- it never deletes unknown files outside the deterministic workflow artifact set
Known generated extensions in scope:
- analyze: `.json`
- compare: `.json`, `.html`
- trend: `.json`, `.html`
workflow clean fails before rendering only when cleanup cannot be evaluated meaningfully:
- `<output_dir>/manifest.json` is missing
- the manifest is unreadable or invalid
- one of the workflow artifact directories is unreadable
Additional rules:
- a missing snapshot directory is not an error; it yields zero snapshot candidates
- per-file delete failures in `--apply` mode do not stop the cleanup pass
- failed deletions are reported in `skipped`
- if any deletion is skipped, the command exits non-zero after writing output
- `text` mode prints a summary block followed by orphan, deleted, and skipped lists
- `json` mode writes a stable machine-readable object with `workflow_name`, `output_dir`, `mode`, `include_snapshots`, `artifact_orphans`, `snapshot_orphans`, `deleted`, `skipped`, and `counts`
- output is written to `stdout`
- command errors continue to use the normal CLI failure path
workflow clean does not:
- repair workflow state
- rewrite manifest contents
- decide what `resume` should do next
- perform global cleanup outside one workflow root
- implement retention windows, TTLs, or age-based pruning
binlogviz workflow export <output_dir>
binlogviz workflow export <output_dir> --output ./incident.zip
binlogviz workflow export <output_dir> --include-snapshots
binlogviz workflow export <output_dir> --format json

workflow export bundles an existing workflow root into a deterministic, read-only zip archive. It reads `manifest.json`, includes `manifest.json` itself, best-effort includes `index.html`, includes only manifest-declared workflow artifacts, and best-effort includes `plan.yaml` from `manifest.plan_path` when present. It never reruns workflow steps, never rebuilds workflow summary, and never mutates files under the workflow root.
| Flag | Default | Description |
|---|---|---|
| `--output` | `<output_dir>.zip` | Archive output path. The default is derived from `filepath.Clean(output_dir) + ".zip"`. |
| `--include-snapshots` | false | Include only snapshot JSON files referenced by the manifest. |
| `--format` | text | Export command result format: `text` or `json`. |
- `manifest.json` is required and is always included in the archive
- `index.html` is included best-effort and becomes a warning if missing
- workflow artifacts are loaded only from `steps[].artifacts` in the manifest
- artifacts outside the workflow root are skipped with warnings
- `plan.yaml` is included as `plan.yaml` only when `manifest.plan_path` is present, resolves inside the workflow root, is readable, still matches `manifest.plan_sha256`, and still parses as the recorded workflow plan metadata; otherwise it becomes a warning
- snapshots are excluded by default
- with `--include-snapshots`, only referenced snapshot JSON files are considered, and an empty `manifest.snapshot_dir` becomes a warning instead of reading from the current working directory
- missing manifest artifacts, missing snapshots, and missing plan/index inputs become warnings rather than fatal errors
- the archive output path must be outside the workflow root; paths inside the root are rejected
- zip entry ordering, timestamps, and file modes are normalized so repeated exports are deterministic
workflow export fails before writing a successful result when:
- `<output_dir>/manifest.json` is missing, unreadable, or invalid
- an included artifact cannot be read for reasons other than file absence in optional best-effort paths
- archive creation or writing fails
- the archive output path resolves inside the workflow root
- all result output is written to `stdout`
- the command does not use `stderr` for progress output
- `text` mode writes a compact operator summary with an optional `Warnings` section
- `json` mode writes a machine-readable object with `workflow_name`, `output_dir`, `archive_path`, `format`, `included_files`, `included_snapshots`, and `warnings`
binlogviz workflow validate <plan.yaml>
binlogviz workflow validate <plan.yaml> --format text
binlogviz workflow validate <plan.yaml> --format json

workflow validate answers whether a workflow plan is statically runnable. It reads only `plan.yaml`, loads it with strict YAML field validation, and applies the same static plan validation used by workflow run before execution begins.
Validation covers workflow metadata, window definitions, named references, duplicate compare/trend job names, and duplicate format entries inside compare/trend jobs. The command does not inspect output_dir, manifest.json, index.html, or any existing runtime artifacts.
| Flag | Default | Description |
|---|---|---|
| `--format` | text | Validation result format: `text` or `json`. |
- exits zero when the plan is valid
- writes a text or JSON summary to `stdout`
- reports workflow name, window count, compare job count, trend job count, and output root
- exits non-zero when the plan is invalid or unreadable
- writes a text or JSON error payload to `stdout`
- also returns the CLI error through the normal command failure path, so default CLI execution may emit an error line on `stderr`
binlogviz workflow describe <plan.yaml>
binlogviz workflow describe <plan.yaml> --format text
binlogviz workflow describe <plan.yaml> --format json

workflow describe answers how a workflow plan would run without executing it. It reads only `plan.yaml`, requires the plan to pass static validation first, and then renders a deterministic preview derived from the plan alone.
The preview includes workflow metadata, analyze windows, compare jobs, trend jobs, declared dependencies, planned artifact paths, and snapshot names when defaults.snapshot.save is enabled. The command does not inspect output_dir, manifest.json, index.html, or any previously generated outputs.
| Flag | Default | Description |
|---|---|---|
| `--format` | text | Description output format: `text` or `json`. |
- supports `text` and `json` only
- does not render HTML
- fails before rendering if the plan is invalid or unreadable
- on failure, writes the error payload to `stdout` and still returns the command error, so default CLI execution may also emit an error line on `stderr`
- preserves plan order for analyze windows, compare jobs, and trend jobs
binlogviz workflow resume <output_dir>
binlogviz workflow resume <output_dir> --snapshot-dir /tmp/snapshots
binlogviz workflow resume <output_dir> --rerun analyze:week2 --rerun compare:incident_vs_baseline

workflow resume continues a previously executed workflow from its output directory. It reads the existing `manifest.json`, reuses successful steps, and reruns failed or missing ones.
| Flag | Default | Description |
|---|---|---|
| `--snapshot-dir` | home-based default | Override the snapshot storage directory. |
| `--rerun` | none | Repeatable explicit step selector. Forces a specific step to rerun regardless of its previous status. |
The --rerun flag accepts step selectors in the form <kind>:<name>:
| Kind | Name matches | Example |
|---|---|---|
| `analyze` | Window name from the plan | `analyze:week2` |
| `compare` | Compare job name from the plan | `compare:incident_vs_baseline` |
| `trend` | Trend job name from the plan | `trend:incident_series` |
Multiple --rerun flags can be combined to force-rerun several steps in one invocation.
- Load the existing `manifest.json` from `<output_dir>`
- Validate the manifest version (must be v2; legacy pre-v2 manifests are rejected)
- Verify the plan file hash matches the original run (refuses if the plan changed)
- For each plan step:
  - If the step succeeded and is not listed in `--rerun`, mark it as reused
  - If the step failed, is missing, or is listed in `--rerun`, execute it again
- Dependency-aware rerun: rerunning an `analyze` step invalidates downstream `compare` and `trend` steps that reference it
- Write an updated `manifest.json` with per-step execution status (`executed` or `reused`)
- Write an updated `index.html` showing mode, attempt number, and per-step execution labels
Resume refuses to proceed when:
- `<output_dir>` does not contain a `manifest.json`
- The manifest was produced by a legacy pre-v2 run (missing `manifest_version` field)
- The plan file SHA-256 does not match the `plan_sha256` recorded in the manifest
- The plan path recorded in the manifest resolves outside the workflow root or escapes via symlinks (trust-boundary hard fail)
`ValidateWorkflowPlanPath(outputDir, planPath)` is called before the plan file is opened. Outside-root and symlink-escaped paths are rejected unconditionally. `ValidateResumableManifest` takes four arguments (`m Manifest, outputDir string, planPath string, planSHA256 string`) to enforce the trust boundary during resume validation.
The output layout is identical to workflow run. Resume overwrites artifacts for rerun steps and leaves reused step artifacts unchanged.
- `stdout` is unused
- `stderr` carries progress lines and the final manifest path
- `index.html` is written to `<output_dir>/index.html` and includes the resume mode, attempt number, and per-step execution labels (`executed` / `reused`)