44 changes: 44 additions & 0 deletions Makefile
@@ -0,0 +1,44 @@
############ Minimal Benchmark Makefile ############
# Deprecated legacy targets removed. Use grid_cli.py for generation.
# Override variables on invocation: e.g.
# make bench VARY='product.A1=16,18,20 ht.KC=2.75e-4,3.3e-4,4.0e-4'

.PHONY: help bench analyze

# Defaults
TASK ?= Tsh
SCENARIO ?= baseline
VARY ?= product.A1=16,18,20 ht.KC=2.75e-4,3.3e-4,4.0e-4
METHODS ?= scipy,fd,colloc
N_ELEMENTS ?= 24
N_COLLOCATION ?= 3
OUT ?= benchmarks/results/grid_$(TASK)_$(SCENARIO).jsonl
METRIC ?= ratio.pyomo_over_scipy

help:
	@echo "Targets:"; \
	echo "  bench    Generate grid JSONL via benchmarks/grid_cli.py"; \
	echo "  analyze  Execute analysis notebook (headless) against OUT"; \
	echo "Variables:"; \
	echo "  TASK=$(TASK) SCENARIO=$(SCENARIO)"; \
	echo "  VARY='$(VARY)'"; \
	echo "  METHODS=$(METHODS) N_ELEMENTS=$(N_ELEMENTS) N_COLLOCATION=$(N_COLLOCATION)"; \
	echo "  OUT=$(OUT) METRIC=$(METRIC)"; \
	echo "Examples:"; \
	echo "  make bench VARY='product.A1=16,18,20 ht.KC=2.75e-4,3.3e-4,4.0e-4'"; \
	echo "  make analyze METRIC=pyomo.objective_time_hr"

bench:
	@echo "[bench] Generating $(OUT)"
	@python benchmarks/grid_cli.py generate \
		--task $(TASK) --scenario $(SCENARIO) \
		$(foreach spec,$(VARY),--vary $(spec)) \
		--methods $(METHODS) \
		--n-elements $(N_ELEMENTS) --n-collocation $(N_COLLOCATION) \
		--out $(OUT)

analyze:
	@echo "[analyze] Executing analysis notebook for $(OUT) (METRIC=$(METRIC))"
	@JSONL_PATH=$(OUT) METRIC=$(METRIC) python -m nbconvert --to notebook --execute benchmarks/grid_analysis.ipynb --output benchmarks/results/analysis_executed.ipynb || \
		echo "Notebook execution failed or nbconvert missing; open benchmarks/grid_analysis.ipynb manually with JSONL_PATH=$(OUT) METRIC=$(METRIC)"

23 changes: 23 additions & 0 deletions README.md
@@ -45,6 +45,29 @@ python examples/example_design_space.py

See [`examples/README.md`](examples/README.md) for detailed documentation.

## Benchmarking (Pyomo vs Scipy)

Compare the Pyomo optimizers (finite differences and orthogonal collocation) against the scipy baseline:

```bash
# Generate 3×3 parameter grid with scipy, FD, and collocation
python benchmarks/grid_cli.py generate \
--task Tsh --scenario baseline \
--vary product.A1=16,18,20 \
--vary ht.KC=2.75e-4,3.3e-4,4.0e-4 \
--methods scipy,fd,colloc \
--out benchmarks/results/grid.jsonl

# Analyze results in notebook
JSONL_PATH=benchmarks/results/grid.jsonl jupyter notebook benchmarks/grid_analysis.ipynb

# Or use Makefile shortcuts
make bench VARY='product.A1=16,18,20 ht.KC=2.75e-4,3.3e-4,4.0e-4'
make analyze OUT=benchmarks/results/grid.jsonl
```

See [`benchmarks/README.md`](benchmarks/README.md) for detailed benchmarking documentation.

## Legacy Examples

The repository root contains legacy example scripts for backward compatibility:
100 changes: 100 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,100 @@
# LyoPRONTO Benchmarks

Tools to compare the Pyomo optimizers (finite differences and orthogonal collocation) against the scipy baseline across parameter grids.

## Current Workflow (Jan 2025)

### 1. Generate Grid Data
Use `grid_cli.py` to run Cartesian product benchmarks across N parameters:

```bash
# Generate 3×3 grid: scipy baseline + FD + collocation
python benchmarks/grid_cli.py generate \
--task Tsh --scenario baseline \
--vary product.A1=16,18,20 \
--vary ht.KC=2.75e-4,3.3e-4,4.0e-4 \
--methods scipy,fd,colloc \
--n-elements 24 --n-collocation 3 \
--out benchmarks/results/grid_A1_KC.jsonl
```

**Options:**
- `--vary key=val1,val2,...` (repeatable) — parameter sweeps
- `--methods scipy,fd,colloc` — which solvers to run
- `--n-elements N` — base number of finite elements (used by both FD and collocation)
- `--n-collocation NCP` — collocation points per element
- `--warmstart` — enable staged solve with scipy trajectory (off by default for robustness)
- `--raw-colloc` — disable effective-nfe parity reporting
- `--force` — regenerate even if output exists (reuse-first by default)
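The `--vary` expansion amounts to a Cartesian product over the parsed specs. A minimal sketch of the idea (the actual parsing inside `grid_cli.py` may differ; `parse_vary` and `expand_grid` here are illustrative names):

```python
import itertools

def parse_vary(spec: str):
    # "product.A1=16,18,20" -> ("product.A1", [16.0, 18.0, 20.0])
    path, _, values = spec.partition("=")
    return path, [float(v) for v in values.split(",")]

def expand_grid(specs):
    # One dict per grid point: the Cartesian product over all --vary specs.
    paths, value_lists = zip(*(parse_vary(s) for s in specs))
    return [dict(zip(paths, combo)) for combo in itertools.product(*value_lists)]

points = expand_grid(["product.A1=16,18,20", "ht.KC=2.75e-4,3.3e-4,4.0e-4"])
# two 3-value sweeps -> 9 grid points
```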

**Make shortcut:**
```bash
make bench VARY='product.A1=16,18,20 ht.KC=2.75e-4,3.3e-4,4.0e-4' METHODS=fd,colloc
```

### 2. Analyze Results
Open `benchmarks/grid_analysis.ipynb` (read-only; expects pre-generated JSONL):

```bash
JSONL_PATH=benchmarks/results/grid_A1_KC.jsonl jupyter notebook benchmarks/grid_analysis.ipynb
```

Notebook cells produce:
- CSV pivot tables (objectives, ratios, speedups)
- Heatmaps (objective difference, speedup, parity)
- Scatter plots and histograms
- Summary interpretation
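The pivot tables are the usual pandas reshaping of flattened records; a small sketch of the pattern (the row fields and values here are made up for illustration, not taken from the notebook):

```python
import pandas as pd

# Hypothetical flattened rows, one per grid point.
rows = [
    {"A1": 16, "KC": 2.75e-4, "ratio": 1.02},
    {"A1": 16, "KC": 3.3e-4,  "ratio": 0.98},
    {"A1": 18, "KC": 2.75e-4, "ratio": 1.05},
    {"A1": 18, "KC": 3.3e-4,  "ratio": 1.01},
]
df = pd.DataFrame(rows)
# Rows indexed by one swept parameter, columns by the other.
pivot = df.pivot(index="A1", columns="KC", values="ratio")
```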

**Headless execution (optional):**
```bash
make analyze OUT=benchmarks/results/grid_A1_KC.jsonl METRIC=ratio.pyomo_over_scipy
```


## Core Modules

- **`grid_cli.py`** — CLI for N-dimensional grid generation (replaces legacy `run_grid*.py`)
- **`adapters.py`** — Normalized scipy/Pyomo runners with discretization metadata
- **`scenarios.py`** — Scenario definitions (vial, product, ht, eq_cap, nVial)
- **`schema.py`** — Versioned record serialization (v2: trajectories + hashing)
- **`validate.py`** — Physics checks (mass balance, dryness, constraint violations)
- **`grid_analysis.ipynb`** — Analysis notebook (read-only data loader)
- **`results/`** — Default output directory

## Tasks

- `Tsh` — optimize shelf temperature only (pressure fixed)
- `Pch` — optimize chamber pressure only (shelf fixed)
- `both` — joint optimization (pressure + temperature)

## Output Schema (v2)

Each JSONL record includes:
- `version`: schema version (currently 2)
- `hash.inputs`, `hash.record`: SHA-256 hashes for deduplication
- `environment`: Python, Pyomo, Ipopt versions, OS, hostname, timestamp
- `task`, `scenario`: optimization variant and scenario name
- `grid.param1`, `grid.param2`, ... : swept parameters with paths and values
- `scipy`: `{success, wall_time_s, objective_time_hr, solver, metrics}`
- `pyomo`: same as scipy, plus:
- `discretization`: `{method, n_elements_requested, n_elements_applied, n_collocation, effective_nfe, total_mesh_points}`
- `warmstart_used`: bool
- `failed`: overall failure flag (any solver failed or dryness unmet)
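Given this schema, consuming the JSONL is one `json.loads` per line. A minimal reader sketch (the field names follow the list above; the sample records and the ratio computation are illustrative):

```python
import json

# Two fabricated records standing in for lines of a grid_*.jsonl file.
record_lines = [
    json.dumps({"version": 2, "failed": False,
                "scipy": {"objective_time_hr": 40.2},
                "pyomo": {"objective_time_hr": 40.5}}),
    json.dumps({"version": 2, "failed": True,
                "scipy": {"objective_time_hr": None},
                "pyomo": {"objective_time_hr": None}}),
]

records = [json.loads(line) for line in record_lines]
ok = [r for r in records if not r["failed"]]  # drop failed runs up front
ratios = [r["pyomo"]["objective_time_hr"] / r["scipy"]["objective_time_hr"]
          for r in ok]
```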

## Migration from Legacy Scripts

**Deprecated (removed Jan 2025):**
- `run_single.py`, `run_batch.py`, `aggregate.py`
- `run_grid.py`, `run_grid_3x3.py`, `summarize_grid.py`

**Replacement:**
- Use `grid_cli.py generate` for all grid generation
- Use `grid_analysis.ipynb` for all analysis (no Python CLI needed)
- Makefile simplified to `make bench` and `make analyze`

## Notes

- **Warmstart disabled by default** for robustness testing; enable with `--warmstart`.
- **Effective-nfe true by default** for collocation parity with FD mesh density.
- **Reuse-first**: if JSONL exists, generation skipped unless `--force` supplied.
- **Trajectories embedded** in records (numpy arrays → lists during serialization).
- **Hashing** prevents duplicate runs (schema v2 `hash.inputs` field).
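The `hash.inputs` deduplication idea can be sketched as a SHA-256 of a canonical JSON dump of the inputs, so logically equal inputs hash identically regardless of dict ordering (the exact canonicalization in `schema.py` may differ):

```python
import hashlib
import json

def input_hash(inputs: dict) -> str:
    # Canonical form: sorted keys, no whitespace, so key order cannot
    # change the digest.
    canon = json.dumps(inputs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

h1 = input_hash({"task": "Tsh", "product.A1": 16.0})
h2 = input_hash({"product.A1": 16.0, "task": "Tsh"})
# h1 == h2 despite the different insertion order
```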
201 changes: 201 additions & 0 deletions benchmarks/adapters.py
@@ -0,0 +1,201 @@
"""Adapters normalizing scipy and Pyomo optimizer outputs.

Each adapter returns a dictionary with standardized keys:
- trajectory: np.ndarray (time, Tsub, Tbot, Tsh, Pch_mTorr, flux, frac_dried)
- success: bool
- message: str
- raw: original solver output or model reference
- solver_stats: dict (iterations, evals, etc.)
"""
from __future__ import annotations
import time
from typing import Dict, Any
import numpy as np

from lyopronto import opt_Tsh, opt_Pch, opt_Pch_Tsh, constant
> **Copilot AI** (Nov 24, 2025): Import of `constant` is not used.
>
> Suggested change:
>
>     from lyopronto import opt_Tsh, opt_Pch, opt_Pch_Tsh
from lyopronto.pyomo_models import optimizers as pyomo_opt

DRYNESS_TARGET = 0.989

# Scipy adapters -------------------------------------------------------------

def scipy_adapter(task: str, vial: Dict[str, float], product: Dict[str, float], ht: Dict[str, float],
                  eq_cap: Dict[str, float], nVial: int, scenario: Dict[str, Any], dt: float = 0.01) -> Dict[str, Any]:
    """Run scipy baseline optimizer variant for specified task.

    task 'Tsh': optimize shelf temperature only (pressure schedule fixed)
    task 'Pch': optimize chamber pressure only (shelf temperature schedule fixed)
    task 'both': optimize both pressure and temperature concurrently
    """
    if task == "Tsh":
        # Pressure schedule fixed at 0.1 Torr with a long hold allowing completion
        Pchamber = {"setpt": [0.1], "dt_setpt": [1800.0], "ramp_rate": 0.5}
        # Shelf temperature optimization bounds
        Tshelf = {"min": -45.0, "max": 120.0}
        runner = opt_Tsh.dry
        args = (vial, product, ht, Pchamber, Tshelf, dt, eq_cap, nVial)
    elif task == "Pch":
        # Pressure optimization lower bound
        Pchamber = {"min": 0.05}
        # Shelf multi-step schedule with sufficient time for drying completion
        # NOTE: dt_setpt in MINUTES (opt_Pch expects minutes, converts internally)
        # High-resistance products (A1=20) need ~86 hours; use 100 hours for margin
        Tshelf = {"init": -35.0, "setpt": [-20.0, 120.0], "dt_setpt": [300.0, 5700.0], "ramp_rate": 1.0}
        runner = opt_Pch.dry
        args = (vial, product, ht, Pchamber, Tshelf, dt, eq_cap, nVial)
    elif task == "both":
        Pchamber = {"min": 0.05}
        Tshelf = {"min": -45.0, "max": 120.0}
        runner = opt_Pch_Tsh.dry
        args = (vial, product, ht, Pchamber, Tshelf, dt, eq_cap, nVial)
    else:
        raise ValueError(f"Unknown task '{task}'")

    start = time.perf_counter()
    out = runner(*args)
    wall = time.perf_counter() - start

    traj = out
    success = traj.size > 0
    message = "scipy run completed"
    objective_time_hr = float(traj[-1, 0]) if success else None

    return {
        "trajectory": traj,
        "success": success,
        "message": message,
        "wall_time_s": wall,
        "objective_time_hr": objective_time_hr,
        "solver": {"status": "n/a", "termination_condition": "n/a"},
        "solver_stats": {},
        "raw": out,
    }

# Pyomo adapters -------------------------------------------------------------

def pyomo_adapter(
    task: str,
    vial: Dict[str, float],
    product: Dict[str, float],
    ht: Dict[str, float],
    eq_cap: Dict[str, float],
    nVial: int,
    scenario: Dict[str, Any],
    dt: float = 0.01,
    warmstart: bool = False,
    method: str = "fd",  # 'fd' or 'colloc'
    n_elements: int = 24,
    n_collocation: int = 3,
    effective_nfe: bool = True,
) -> Dict[str, Any]:
    """Run Pyomo optimizer counterpart for specified task with discretization controls.

    Parameters
    ----------
    method : str
        'fd' for finite differences, 'colloc' for orthogonal collocation.
    n_elements : int
        Base number of finite elements requested.
    n_collocation : int
        Number of collocation points per element (only if method == 'colloc').
    effective_nfe : bool
        If True (collocation only), treat n_elements as effective (parity with FD) for reporting.
    warmstart : bool
        Use staged solve with scipy warmstart trajectory. Default False for robustness benchmarking.
    """
    if task == "Tsh":
        Pchamber = {"setpt": [0.1], "dt_setpt": [180.0], "ramp_rate": 0.5}
        Tshelf = {"min": -45.0, "max": 120.0}
        runner = pyomo_opt.optimize_Tsh_pyomo
        args = (vial, product, ht, Pchamber, Tshelf, dt, eq_cap, nVial)
    elif task == "Pch":
        Pchamber = {"min": 0.05}
        # Shelf multi-step schedule with sufficient time for drying completion
        # NOTE: dt_setpt in MINUTES (Pyomo uses the same convention as scipy)
        # High-resistance products (A1=20) need ~86 hours; use 100 hours for margin
        Tshelf = {"init": -35.0, "setpt": [-20.0, 120.0], "dt_setpt": [300.0, 5700.0], "ramp_rate": 1.0}
        runner = pyomo_opt.optimize_Pch_pyomo
        args = (vial, product, ht, Pchamber, Tshelf, dt, eq_cap, nVial)
    elif task == "both":
        Pchamber = {"min": 0.05, "max": 0.5}
        Tshelf = {"min": -45.0, "max": 120.0, "init": -35.0}
        runner = pyomo_opt.optimize_Pch_Tsh_pyomo
        args = (vial, product, ht, Pchamber, Tshelf, dt, eq_cap, nVial)
    else:
        raise ValueError(f"Unknown task '{task}'")

    use_fd = method.lower() == "fd"
    treat_eff = (not use_fd) and bool(effective_nfe)

    start = time.perf_counter()
    meta: Dict[str, Any] = {}
    try:
        res = runner(
            *args,
            n_elements=int(n_elements),
            n_collocation=int(n_collocation),
            use_finite_differences=use_fd,
            treat_n_elements_as_effective=treat_eff,
            warmstart_scipy=warmstart,
            return_metadata=True,
            tee=False,
        )
        if isinstance(res, dict) and "output" in res:
            traj = res["output"]
            meta = res.get("metadata", {})
        else:
            traj = res
        success = True
        message = "pyomo run completed"
    except Exception as e:
        traj = np.empty((0, 7))
        success = False
        message = f"pyomo failure: {e.__class__.__name__}: {e}"[:300]
    wall = time.perf_counter() - start

    if success and isinstance(traj, np.ndarray) and traj.size:
        default_t = float(traj[-1, 0])
    else:
        default_t = None
    # Guard against a None metadata value (or None fallback) before float()
    obj = meta.get("objective_time_hr", default_t) if success else None
    objective_time_hr = float(obj) if obj is not None else None

    solver_info = {
        "status": meta.get("status"),
        "termination_condition": meta.get("termination_condition"),
        "ipopt_iterations": meta.get("ipopt_iterations"),
        "n_points": meta.get("n_points"),
        "staged_solve_success": meta.get("staged_solve_success"),
    }

    # Discretization reporting
    if use_fd:
        total_mesh_points = int(n_elements) + 1  # simple FD time mesh
    else:
        # For collocation, each element contributes n_collocation interior points.
        # Effective parity: effective_nfe True => report n_elements as comparable to FD.
        total_mesh_points = int(n_elements) * int(n_collocation) + 1
    discretization = {
        "method": "fd" if use_fd else "colloc",
        "n_elements_requested": int(n_elements),
        "n_elements_applied": int(n_elements),  # could differ if the transformation adjusts
        "n_collocation": int(n_collocation) if not use_fd else None,
        "effective_nfe": bool(treat_eff) if not use_fd else False,
        "total_mesh_points": total_mesh_points,
    }
    # Merge any mesh_info from metadata if present
    mesh_info = meta.get("mesh_info")
    if isinstance(mesh_info, dict):
        discretization.update({k: v for k, v in mesh_info.items() if k not in discretization})

    return {
        "trajectory": traj,
        "success": success,
        "message": message,
        "wall_time_s": wall,
        "objective_time_hr": objective_time_hr,
        "solver": solver_info,
        "solver_stats": {},
        "raw": traj,
        "warmstart_used": warmstart,
        "discretization": discretization,
    }