
ControlNet BOFT/OFT Homework Pipeline

This folder is set up for the mini-project. Most of the boilerplate is already in place, so you can focus on running and modifying experiments instead of wiring everything from scratch.

  • Dataset: oftverse/control-celeba-hq
  • Reference implementation: huggingface/peft/examples/boft_controlnet
  • Training uses torchrun with a simple distributed setup (single or multi-node)

What’s included

The repo already covers the standard workflow:

  • Entry points for training, testing, and evaluation
  • A small wrapper for distributed initialization
  • Auto-fetching of the official boft_controlnet code into vendor/
  • YAML-based configs under configs/
  • Scripts for running on 2×A100
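The distributed wrapper builds on the rendezvous environment variables that torchrun exports to every worker. A minimal sketch of that part (the function name is illustrative; the real src/distributed.py would additionally initialize the torch.distributed process group):

```python
import os

def read_torchrun_env():
    """Read the per-worker variables that torchrun sets.

    Illustrative sketch only; an actual wrapper like src/distributed.py
    would also call torch.distributed.init_process_group.
    """
    rank = int(os.environ.get("RANK", 0))              # global rank across all nodes
    world_size = int(os.environ.get("WORLD_SIZE", 1))  # total number of workers
    local_rank = int(os.environ.get("LOCAL_RANK", 0))  # GPU index on this node
    return rank, world_size, local_rank
```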

Directory layout

  • configs/train_small.yaml — quick sanity check

  • configs/train_full.yaml — full training run (for submission)

  • configs/test_eval.yaml — test + eval using finetuned weights

  • configs/test_eval_baseline.yaml — base model (no PEFT), for comparison

  • src/distributed.py — distributed init utilities

  • src/fetch_vendor.py — pulls official code into vendor/boft_controlnet

  • src/train.py — training entry point

  • src/test.py — test entry point

  • src/eval.py — evaluation entry point

  • scripts/run_train_2a100.sh — training on 2 GPUs

  • scripts/run_test.sh — run generation

  • scripts/run_eval.sh — run evaluation

  • scripts/compare_ft_vs_base.sh — compare finetuned vs baseline (same seed / max_eval_samples)

  • models/ — checkpoints (optional)

  • datasets/ — local datasets (optional)

  • outputs/ — logs, outputs, checkpoints
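As a rough illustration, a config like configs/test_eval.yaml might contain keys such as the following (names inferred from this README, not the actual schema; check the file itself):

```yaml
# Hypothetical fragment -- key names inferred from this README.
output_dir: outputs/eval_run          # root for generated outputs
checkpoint_name: checkpoint-1000      # finetuned weights to load
seed: 42                              # fixes which samples are picked
max_eval_samples: 100                 # evaluate a subset only
```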

Environment setup

python3 -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -r requirements.txt

Typical workflow (2×A100)

  1. Quick sanity run:
bash scripts/run_train_2a100.sh configs/train_small.yaml
  2. Full training:
bash scripts/run_train_2a100.sh configs/train_full.yaml
  3. Run test generation:
bash scripts/run_test.sh configs/test_eval.yaml
  4. Run evaluation:
bash scripts/run_eval.sh configs/test_eval.yaml

If you only want to evaluate a subset, set max_eval_samples in configs/test_eval.yaml (e.g. 100). Use the same value for both test and eval.

The seed option controls which samples are picked after shuffling, so keep it fixed for reproducible comparisons.
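The shuffle-then-slice pattern behind this can be sketched as follows (illustrative only; the vendored code's actual sampling logic may differ):

```python
import random

def pick_eval_subset(num_samples, seed, max_eval_samples):
    """Deterministically pick a subset: shuffle all indices with a fixed
    seed, keep the first max_eval_samples, and sort for stable ordering."""
    indices = list(range(num_samples))
    random.Random(seed).shuffle(indices)
    return sorted(indices[:max_eval_samples])

# Identical (seed, max_eval_samples) pairs select identical samples, which
# is what makes separate test and eval runs comparable.
```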

If the LOCAL_FIX_MAX_EVAL_SAMPLES patch has not been applied, re-run src/fetch_vendor.py. Alternatively, delete the following files and let fetch_vendor.py pull and patch them again:

vendor/boft_controlnet/test_controlnet.py
vendor/boft_controlnet/eval.py
vendor/boft_controlnet/utils/args_loader.py

Finetuned vs. baseline (same samples)

  • In configs/test_eval.yaml: set output_dir and checkpoint_name. Outputs go to output_dir/checkpoint_name/ (predictions in results/ by default).

  • In configs/test_eval_baseline.yaml: no checkpoint is loaded. With baseline_mode: true, output_dir itself is the root, and predictions go to output_dir/<results_subdir>/ (results or results_baseline).

  • You can still set checkpoint_name in the baseline config if you want a nested folder structure, but it won’t load any weights.
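The folder layout above can be sketched as path logic (a guess for illustration, not the actual code in src/test.py):

```python
from pathlib import Path

def resolve_results_dir(output_dir, checkpoint_name=None, results_subdir="results"):
    """Nest under checkpoint_name when it is set; otherwise (e.g. in
    baseline mode with no checkpoint) write directly under output_dir."""
    root = Path(output_dir)
    if checkpoint_name:
        root = root / checkpoint_name
    return root / results_subdir
```

In this sketch, baseline_mode would only control whether weights are loaded, not where predictions land, which matches the note about checkpoint_name above.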

Make sure seed and max_eval_samples match between the two configs so both runs use the same data.

bash scripts/compare_ft_vs_base.sh configs/test_eval.yaml configs/test_eval_baseline.yaml

You can also run src/test.py and src/eval.py manually with each config.

This comparison relies on the LOCAL_FIX_BASELINE_COMPARE patch (handled in fetch_vendor.py).

About

A mini-project for a generative AI course.
