13 changes: 13 additions & 0 deletions .dockerignore
@@ -0,0 +1,13 @@
.venv/
__pycache__/
*.pyc
.git/
.github/
*.egg-info/
dist/
build/
docs/
.claude/
.env
tests/*.py
tests/__pycache__/
13 changes: 13 additions & 0 deletions CHANGELOG.md
@@ -8,6 +8,19 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/), and this

### Added

- `--from-json` flag for `serve` command — load pre-computed analysis from JSON file
- Dockerfile + docker-compose for public demo deployment (pre-baked LLM summaries)
- Compact mode for script flow diagrams — functions with >8 steps collapse to summary nodes
- Compact/Detailed toggle button on script pages (auto-shown for scripts with >30 steps)
- Importance scoring for scripts — overview sorted by connectivity, entry points, services
- "Key" badge on top scripts in overview
- LLM-generated plain-English summaries via litellm (BYOK — bring your own key)
- `--summarize` CLI flag for `analyze` and `serve` commands
- Per-script summaries from structured AST data (steps, services, triggers)
- Per-project executive summary from script summaries and connections
- Model override via `VISUALPY_MODEL` environment variable (default: `gemini/gemini-2.5-flash`)
- Summary rendering in web UI: overview header, script cards, script headers
- Graceful degradation when litellm not installed or API key missing
- README with badges, personas, quick start, roadmap, and acknowledgments
- CONTRIBUTING.md with non-dev-friendly contribution guide
- CODE_OF_CONDUCT.md (Contributor Covenant v2.1)
19 changes: 19 additions & 0 deletions Dockerfile
@@ -0,0 +1,19 @@
FROM python:3.12-slim
WORKDIR /app

COPY pyproject.toml README.md ./
COPY visualpy/ visualpy/
COPY static/ static/
RUN pip install --no-cache-dir -e ".[llm]"

COPY tests/fixtures/agentic_workflows/ /demo_project/

ARG GEMINI_API_KEY=""
RUN if [ -n "$GEMINI_API_KEY" ]; then \
GEMINI_API_KEY=$GEMINI_API_KEY visualpy analyze /demo_project --summarize -o /demo_data.json; \
else \
visualpy analyze /demo_project -o /demo_data.json; \
fi

EXPOSE 8123
CMD ["visualpy", "serve", "--from-json", "/demo_data.json", "--host", "0.0.0.0", "--port", "8123"]
30 changes: 16 additions & 14 deletions README.md
@@ -8,6 +8,8 @@ Auto-visualise Python automations for non-technical stakeholders.

Drop a folder of Python scripts, get a visual breakdown of what they do, how they connect, and what they need. No execution required, no config needed.

**[Live demo](https://visualpy.lexi-energy.com)** — see it in action on a real 8-script lead generation pipeline.

## Who is this for?

- **Operations teams** who inherited a folder of automation scripts and need to understand what each one does before touching anything.
@@ -23,6 +25,10 @@ pip install -e .

visualpy analyze /path/to/your/scripts # JSON breakdown
visualpy serve /path/to/your/scripts # starts a local web UI

# Optional: add plain-English LLM summaries (needs an API key)
pip install -e ".[llm]"
visualpy serve /path/to/your/scripts --summarize
```

Requires Python 3.12 or later. No config files, no decorators in your code, no setup. Point it at a folder and go.
@@ -42,6 +48,9 @@ The result is a structured project map, viewable as JSON or as an interactive we

- **Project dependency graph** — see how scripts relate to each other at a glance
- **Per-script flow diagrams** — step-by-step visual breakdown of what each file does, grouped by function
- **Compact mode** — functions with many steps auto-collapse to readable summaries; toggle between compact and detailed views
- **LLM summaries** — optional plain-English descriptions powered by any LLM provider via litellm (BYOK). `--summarize` flag on both `analyze` and `serve`
- **Importance scoring** — scripts sorted by connectivity; most important scripts highlighted with a "key" badge
- **Service and secret detection** — instantly see which external services and API keys are in play
- **Entry point detection** — identifies scripts with `if __name__ == "__main__"`, cron triggers, webhooks, and CLI entry points
- **Dark mode** — toggle between light and dark themes, persisted across sessions
@@ -54,29 +63,22 @@ The result is a structured project map, viewable as JSON or as an interactive we
| Sprint | Status | What |
|--------|--------|------|
| 0: Init | Done | Repo skeleton, models, CLI stubs, test fixtures |
| 1: The Engine | Done | Folder-to-JSON analysis pipeline, 59 tests |
| 1: The Engine | Done | Folder-to-JSON analysis pipeline |
| 1.5: Hardening | Done | Transform detection, inputs/outputs enrichment, false positive fixes |
| 2: The Face | Done | Web UI with Mermaid.js graphs, dark mode, HTMX interactivity, 140 tests |
| 2: The Face | Done | Web UI with Mermaid.js graphs, dark mode, HTMX interactivity |
| 3: The Community | Done | FOSS prep, docs, issue templates, CI |
| 4: The Voice | Planned | LLM summaries (litellm, BYOK), per-script descriptions, project executive summary |
| 5: The Feedback Loop | Planned | Annotations, human-in-the-loop corrections |
| 4: The Voice | Done | LLM summaries (litellm, BYOK), per-script and project-level descriptions |
| 5: The Scaling Fix | Done | Compact mode, importance scoring, compact/detailed toggle |
| 5.5: The Demo | Done | Docker deployment, pre-baked summaries, [live demo](https://visualpy.lexi-energy.com) |
| 6: The Translation | Next | LLM-powered step descriptions, business language UI |
| 7: The Export | Planned | Static HTML export, summary caching, markdown export |

## Contributing

We'd love your help — whether it's a bug report, a feature idea, or a question. You don't need to be a developer to contribute.

See [CONTRIBUTING.md](CONTRIBUTING.md) for how to get started. We've written it specifically for people who might be new to open source.

## Acknowledgments

visualpy stands on the shoulders of great open-source projects. We studied their patterns and approaches:

- [pyflowchart](https://github.com/cdfmlr/pyflowchart) (MIT) — function subgraph grouping, AST-to-flowchart patterns
- [code2flow](https://github.com/scottrogowski/code2flow) (MIT) — entry point classification, directory hierarchy, graph organization
- [emerge](https://github.com/glato/emerge) (MIT) — dark mode toggle, data embedding strategy, module separation
- [staticfg](https://github.com/coetaur0/staticfg) (Apache-2.0) — AST visitor pattern for control flow
- [VizTracer](https://github.com/gaogaotiantian/viztracer) (Apache-2.0) — zero-config philosophy (no decorators, no code changes)

## License

[MIT](LICENSE)
15 changes: 15 additions & 0 deletions docker-compose.yml
@@ -0,0 +1,15 @@
services:
visualpy:
build:
context: .
args:
GEMINI_API_KEY: ${GEMINI_API_KEY}
container_name: visualpy
restart: always
networks:
- caddy_net

networks:
caddy_net:
external: true
name: caddy_net
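
The compose file above publishes no ports itself; it relies on an external `caddy_net` proxy network. For local testing without that proxy, a hypothetical override could map the serve port directly (a sketch, assuming the container listens on 8123 as set in the Dockerfile; this override file is not part of the PR):

```yaml
# docker-compose.override.yml — local-only sketch, not part of this PR
services:
  visualpy:
    ports:
      - "8123:8123"   # publish the serve port directly instead of via the caddy proxy
```

With the override in place, `docker compose up` would serve the demo at `http://localhost:8123`.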
123 changes: 123 additions & 0 deletions tests/test_cli.py
@@ -1,9 +1,20 @@
"""Tests for CLI integration."""

import dataclasses
import json
import subprocess
import sys

from visualpy.cli import _project_from_dict
from visualpy.models import (
AnalyzedProject,
AnalyzedScript,
ScriptConnection,
Service,
Step,
Trigger,
)


def test_analyze_hello(hello_script):
result = subprocess.run(
@@ -59,6 +70,20 @@ def test_analyze_nonexistent_path():
assert "Error" in result.stderr


def test_analyze_summarize_flag(hello_script):
"""--summarize flag is accepted and summary fields are present in output."""
result = subprocess.run(
[sys.executable, "-m", "visualpy", "analyze", str(hello_script), "--summarize"],
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0
data = json.loads(result.stdout)
assert "summary" in data
assert "summary" in data["scripts"][0]


def test_version():
result = subprocess.run(
[sys.executable, "-m", "visualpy", "--version"],
@@ -67,3 +92,101 @@ def test_version():
)
assert result.returncode == 0
assert "0.1.0" in result.stdout


# --- --from-json ---


def test_project_from_dict_roundtrip():
"""asdict → _project_from_dict should reconstruct equivalent project."""
project = AnalyzedProject(
path="/tmp/test",
scripts=[
AnalyzedScript(
path="example.py",
is_entry_point=True,
steps=[
Step(
line_number=10,
type="api_call",
description="requests.get()",
function_name="fetch",
service=Service(name="HTTP", library="requests"),
inputs=["url"],
outputs=["response"],
),
],
services=[Service(name="HTTP", library="requests")],
secrets=["API_KEY"],
triggers=[Trigger(type="cli", detail="__main__")],
signature={"url": "str"},
summary="Fetches data.",
),
],
connections=[
ScriptConnection(source="a.py", target="b.py", type="import", detail="a→b"),
],
services=[Service(name="HTTP", library="requests")],
secrets=["API_KEY"],
entry_points=["example.py"],
summary="A test project.",
)
data = dataclasses.asdict(project)
restored = _project_from_dict(data)

assert restored.path == project.path
assert restored.summary == project.summary
assert len(restored.scripts) == 1
assert restored.scripts[0].path == "example.py"
assert restored.scripts[0].is_entry_point is True
assert restored.scripts[0].steps[0].line_number == 10
assert restored.scripts[0].steps[0].service.name == "HTTP"
assert restored.scripts[0].steps[0].inputs == ["url"]
assert restored.scripts[0].triggers[0].type == "cli"
assert restored.scripts[0].summary == "Fetches data."
assert len(restored.connections) == 1
assert restored.connections[0].source == "a.py"
assert restored.entry_points == ["example.py"]


def test_from_json_roundtrip_via_cli(hello_script, tmp_path):
"""analyze → JSON file → serve --from-json should accept the file."""
json_file = tmp_path / "analysis.json"
# Generate JSON
result = subprocess.run(
[sys.executable, "-m", "visualpy", "analyze", str(hello_script), "-o", str(json_file)],
capture_output=True,
text=True,
)
assert result.returncode == 0
assert json_file.exists()

# Load and verify it reconstructs
data = json.loads(json_file.read_text())
project = _project_from_dict(data)
assert len(project.scripts) == 1
assert project.scripts[0].path == "hello.py"


def test_from_json_missing_file():
"""--from-json with nonexistent file should fail cleanly."""
result = subprocess.run(
[sys.executable, "-m", "visualpy", "serve", "--from-json", "/does/not/exist.json"],
capture_output=True,
text=True,
timeout=10,
)
assert result.returncode != 0
assert "Error" in result.stderr


def test_serve_requires_path_or_from_json():
"""serve with neither path nor --from-json should fail."""
result = subprocess.run(
[sys.executable, "-m", "visualpy", "serve"],
capture_output=True,
text=True,
timeout=10,
)
assert result.returncode != 0
assert "Error" in result.stderr or "required" in result.stderr.lower()