feat: Add inference pipeline skeleton with preprocess, run, and checkpoint loading#25

Open
KrishanYadav333 wants to merge 1 commit into ML4SCI:main from KrishanYadav333:feat/inference-pipeline

Conversation

@KrishanYadav333

Summary

Inference pipeline skeleton for the EXXA DDPM denoising workflow.

Adds Inferencer — a thin wrapper that handles everything between a raw
numpy image and a denoised numpy output.

What it does

  • from_checkpoint() — reconstructs a model from a Trainer checkpoint
  • preprocess() — normalises raw input to [-1, 1], handles (H,W) / (C,H,W) / (B,C,H,W) automatically
  • run() — calls model.sample(x) if available, falls back to model(x)
  • postprocess() — maps output back to [0, 1] and returns numpy
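The preprocessing and postprocessing steps described above can be sketched roughly as follows. This is a minimal illustration of the shape handling and normalisation the summary describes, not the PR's actual code; the function names and the min/max normalisation strategy are assumptions.

```python
import numpy as np

def preprocess(x: np.ndarray) -> np.ndarray:
    """Normalise raw input to [-1, 1] and pad the shape to (B, C, H, W)."""
    x = np.asarray(x, dtype=np.float32)
    if x.ndim == 2:        # (H, W) -> (1, 1, H, W)
        x = x[None, None]
    elif x.ndim == 3:      # (C, H, W) -> (1, C, H, W)
        x = x[None]
    elif x.ndim != 4:
        raise ValueError(f"unsupported input shape {x.shape}")
    lo, hi = x.min(), x.max()
    if hi > lo:
        x = (x - lo) / (hi - lo)   # -> [0, 1]
    else:
        x = np.zeros_like(x)       # constant image edge case
    return x * 2.0 - 1.0           # -> [-1, 1]

def postprocess(x: np.ndarray) -> np.ndarray:
    """Map model output from [-1, 1] back to [0, 1], clamping overshoot."""
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)
```

The constant-image guard matters because min-max normalisation would otherwise divide by zero, which is presumably why it appears among the tested edge cases.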

Design note

The sample() vs forward() dispatch means this plugs directly into
DDPM.sample() once implemented with zero changes to the inference code.
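The dispatch described here can be sketched as a small helper. This is an illustrative sketch under the assumptions in this summary (any object exposing either a `sample()` method or plain `__call__`), not the PR's implementation:

```python
def run(model, x):
    """Prefer model.sample(x) when it exists; otherwise call the model directly.

    A DDPM exposing sample() gets its full denoising loop; a bare
    network (or a stub used in tests) is simply invoked as model(x).
    """
    sampler = getattr(model, "sample", None)
    if callable(sampler):
        return sampler(x)
    return model(x)
```

Because the check happens at call time, swapping in a completed `DDPM.sample()` later changes which branch fires without touching this helper.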

Tests

19 tests covering instantiation, checkpoint loading, preprocessing edge
cases (constant image, all input shapes), postprocess clamping, and both
dispatch paths. All pass.
