ai-workflow-intelligence reconstructs business-readable workflows from agent, tool, human, and policy events. It clusters runs into workflow variants, surfaces bottlenecks and control drag, and generates intervention recommendations grounded in observed execution paths.
Most agent tooling answers questions such as:
- which model was called
- how many tokens were spent
- whether a trace exists
This repository answers a different class of questions:
- what workflow actually happened
- where approvals and policy gates added drag
- which variants are slower, riskier, or more review-heavy
- what operational changes are likely to improve the workflow
It is intentionally not an orchestration runtime and not a policy enforcement plane. It is the analysis layer above execution.
Core capabilities:

- canonical event and run schema for agentic workflows
- workflow graph reconstruction from ordered events or parent linkage
- variant clustering by observed path signature
- detectors for approval bottlenecks, retry loops, evidence gaps, escalation churn, access-scope risk, and policy drag
- intervention recommendations derived from findings rather than static lint rules
- CLI and Python API for analyzing checked-in or generated workflow runs
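As an illustration of the heuristic style these detectors take, a retry-loop check can be sketched as a streak count over consecutive identical tool calls. This is a minimal sketch, not the repository's actual detector; the threshold and event shape are assumptions:

```python
def detect_retry_loop(events: list, threshold: int = 3) -> bool:
    """Flag a run when the same tool is invoked repeatedly in direct succession."""
    streak = 0
    last_name = None
    for event in events:
        if event.get("event_type") == "tool_call" and event.get("name") == last_name:
            streak += 1
        else:
            # Reset the streak; only tool calls participate in retry detection.
            streak = 1
            last_name = event.get("name") if event.get("event_type") == "tool_call" else None
        if streak >= threshold:
            return True
    return False

events = [
    {"event_type": "tool_call", "name": "send_questionnaire"},
    {"event_type": "tool_call", "name": "send_questionnaire"},
    {"event_type": "tool_call", "name": "send_questionnaire"},
]
print(detect_retry_loop(events))  # prints True
```

Real detectors would likely also consider timestamps and outcomes, but the streak-over-ordered-events shape is the essence of the heuristic.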
Editable install:

```shell
pip install -e .[dev]
```

Repo-local execution without install:

```shell
PYTHONPATH=src python3 -m ai_workflow_intelligence.cli --help
```

The checked-in dataset models:
- three runs of a vendor security response workflow
- one governed software change approval workflow
Generated outputs are committed in:

- `examples/output/demo-analysis.txt`
- `examples/output/demo-analysis.json`
Quick run:

```shell
PYTHONPATH=src python3 -m ai_workflow_intelligence.cli analyze \
  --input examples/workflow-runs.json \
  --format text
```

JSON output:

```shell
PYTHONPATH=src python3 -m ai_workflow_intelligence.cli analyze \
  --input examples/workflow-runs.json \
  --format json
```

Excerpt from the text output:
```text
Run: vendor-response-001 (vendor_response)
  Approvals: 1
  Policies: 1
  Retries: 1
  Bottleneck: legal-review
  Findings:
    - high: Approval bottleneck detected
    - medium: Retry loop detected
    - high: Evidence-free action detected
    - medium: Broad tool access detected
```
Python API usage:

```python
from ai_workflow_intelligence import WorkflowIntelligenceEngine
from ai_workflow_intelligence.io import load_runs_from_file

runs = load_runs_from_file("examples/workflow-runs.json")
engine = WorkflowIntelligenceEngine()
portfolio = engine.analyze_runs(runs)

for analysis in portfolio.run_analyses:
    print(analysis.run.run_id, analysis.summary.approval_count, analysis.summary.variant_signature)
```

Each run contains:
- top-level workflow metadata
- ordered `events`
- final `outcome`
Each event includes:

- `event_type`
- `actor` and `actor_type`
- `name`
- `timestamp`
- optional `latency_ms`, `cost_usd`, `evidence_refs`, `resources_touched`
- optional `metadata` such as `scope`, `requires_evidence`, `escalated`, `policy_name`
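Putting the schema together, a single run might look like the following sketch. Only the field names come from the schema described above; the concrete values and the timestamp format are illustrative assumptions:

```python
# Illustrative run payload; not a verbatim excerpt of examples/workflow-runs.json.
run = {
    "run_id": "vendor-response-001",
    "workflow_type": "vendor_response",
    "events": [
        {
            "event_type": "tool_call",
            "actor": "research-agent",
            "actor_type": "agent",
            "name": "fetch_vendor_profile",
            "timestamp": "2024-05-01T09:00:00Z",
            "latency_ms": 420,
            "cost_usd": 0.002,
            "evidence_refs": ["doc://vendor-profile"],
            "resources_touched": ["vendor_db"],
            "metadata": {"scope": "read_only"},
        },
        {
            "event_type": "approval_requested",
            "actor": "legal-review",
            "actor_type": "human",
            "name": "legal_signoff",
            "timestamp": "2024-05-01T09:05:00Z",
            "metadata": {"requires_evidence": True},
        },
    ],
    "outcome": "completed",
}
```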
Primary event types used by the built-in analyzer: `model_call`, `tool_call`, `retrieval`, `approval_requested`, `approval_resolved`, `policy_check`, `human_override`.
For each run the engine emits:

- a reconstructed `WorkflowGraph`
- extracted `ControlPoint` records
- a `WorkflowSummary`
- `Finding` objects
- ranked `InterventionCard` recommendations
Across multiple runs the engine emits VariantCluster summaries keyed by workflow type and path signature.
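A minimal sketch of that keying scheme, assuming the path signature is simply the ordered sequence of event types (the repository's actual signature computation may differ):

```python
from collections import defaultdict

def path_signature(run: dict) -> tuple:
    """A simple path signature: the ordered sequence of event types."""
    return tuple(event["event_type"] for event in run["events"])

def cluster_by_signature(runs: list) -> dict:
    """Group run ids by (workflow_type, path signature)."""
    clusters = defaultdict(list)
    for run in runs:
        clusters[(run["workflow_type"], path_signature(run))].append(run["run_id"])
    return dict(clusters)

runs = [
    {"run_id": "a", "workflow_type": "vendor_response",
     "events": [{"event_type": "tool_call"}, {"event_type": "approval_requested"}]},
    {"run_id": "b", "workflow_type": "vendor_response",
     "events": [{"event_type": "tool_call"}, {"event_type": "approval_requested"}]},
    {"run_id": "c", "workflow_type": "vendor_response",
     "events": [{"event_type": "tool_call"}]},
]

# Runs "a" and "b" share a signature and cluster together; "c" is its own variant.
clusters = cluster_by_signature(runs)
print(clusters)
```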
Representative workflow domains:

- vendor questionnaires and security reviews
- software change approvals and incident coordination
- internal service operations such as access provisioning and procurement
- any agentic workflow where humans, tools, and governance controls mix in the same path
Key source and documentation files:

- `src/ai_workflow_intelligence/models.py`
- `src/ai_workflow_intelligence/reconstruction.py`
- `src/ai_workflow_intelligence/analysis.py`
- `src/ai_workflow_intelligence/detectors.py`
- `src/ai_workflow_intelligence/interventions.py`
- `docs/architecture.md`
- `docs/demo.md`
Run the tests:

```shell
pytest -q
```

Notes:

- the repository ships with in-memory analysis, not a persistent service
- the demo dataset is intentionally small and deterministic
- path reconstruction currently uses ordered events unless parent linkage is provided
- built-in detectors are heuristic and designed as extension points
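The ordered-events fallback in path reconstruction can be sketched as follows, assuming hypothetical `id` and `parent_id` event fields (the repository's linkage fields may be named differently):

```python
def reconstruct_edges(events: list) -> list:
    """Build workflow-graph edges: follow explicit parent linkage when
    present, otherwise chain each event to its predecessor in order."""
    known_ids = {event["id"] for event in events}
    edges = []
    for i, event in enumerate(events):
        parent = event.get("parent_id")
        if parent in known_ids:
            edges.append((parent, event["id"]))     # explicit linkage
        elif i > 0:
            edges.append((events[i - 1]["id"], event["id"]))  # ordered fallback

    return edges

events = [
    {"id": "e1", "name": "fetch_ticket"},
    {"id": "e2", "name": "draft_response"},                    # no parent -> ordered fallback
    {"id": "e3", "name": "legal_signoff", "parent_id": "e1"},  # explicit linkage
]
print(reconstruct_edges(events))  # [('e1', 'e2'), ('e1', 'e3')]
```

With parent linkage the graph can capture fan-out (both `e2` and `e3` branch from `e1`); with ordered events alone the path collapses to a single chain.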