123sleaf-123/NRT-Net

NRTNet

(Figure: Architecture)

(Figure: MLLM Scoring)

This repository contains the training, evaluation, checkpoint conversion, and visualization code for NRTNet, a dual-branch low-light image enhancement model. The current codebase is organized around the NRT_Net generator defined in archs/nrt_net_arch.py and integrated into the BasicSR training and testing pipeline.

Highlights

  • BasicSR-based training and evaluation pipeline
  • Dual-branch architecture with illumination features and reflectance prior
  • Score-aware paired dataset loaders
  • Included pretrained checkpoint: checkpoints/nrt_net.pth
  • Optional scripts for supplementary visualization and checkpoint conversion

Repository Layout

NRTNet/
|-- archs/                 # network definitions
|-- data/                  # dataset loaders
|-- losses/                # custom losses
|-- metrics/               # custom metrics
|-- models/                # BasicSR model wrapper
|-- options/               # training / testing YAML configs
|-- datasets/              # pair lists and score CSV files
|-- checkpoints/           # pretrained or training checkpoints
|-- train.py               # training entrypoint
|-- test.py                # evaluation entrypoint
|-- generate_eccv_supp.py  # PDF supplementary figure generation
|-- visualize_dual_branch.py

Environment

The code is intended to run with Python 3.10 and a CUDA-enabled PyTorch environment.

Important compatibility note:

  • A fresh-install test showed that the PyPI basicsr release does not import under the latest torchvision releases in a clean environment.
  • Specifically, installing the newest torch / torchvision pair caused basicsr to fail at import time with ModuleNotFoundError: torchvision.transforms.functional_tensor, because recent torchvision releases removed that module.
  • Installing BasicSR directly from the upstream project repository avoids this in practice, for example with pip install git+https://github.com/XPixelGroup/BasicSR.git.
  • mmcv also failed to build in the fresh environment, while mmcv-lite installed successfully.
  • Consequently, do not treat pip install torch torchvision at the newest versions as a guaranteed working setup for this project.
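If you are unsure whether an existing environment is affected, a quick check like the following can save a failed launch. This is a minimal sketch (check_basicsr_compat is an illustrative helper, not part of the repository); it only inspects whether the module that the PyPI basicsr imports is still present:

```python
import importlib.util

def check_basicsr_compat():
    """Return a short verdict on whether torchvision still ships
    torchvision.transforms.functional_tensor, which the PyPI basicsr imports."""
    if importlib.util.find_spec("torchvision") is None:
        return "torchvision not installed"
    try:
        spec = importlib.util.find_spec("torchvision.transforms.functional_tensor")
    except ModuleNotFoundError:
        spec = None  # submodule lookup can raise on broken installs
    if spec is None:
        return "incompatible: functional_tensor missing from this torchvision"
    return "compatible"

print(check_basicsr_compat())
```

If this reports "incompatible", install BasicSR from the upstream repository as described above instead of downgrading torchvision blindly.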

Recommended setup:

conda create -n nrtnet python=3.10 -y
conda activate nrtnet

# Install a BasicSR-compatible torch / torchvision pair first.
# Avoid blindly using the newest torchvision release.
# If you already have a working environment for this project,
# prefer exporting and reusing those exact versions.
pip install torch torchvision

# If the PyPI release of basicsr hits the functional_tensor import error,
# install BasicSR from the upstream project instead.
pip install git+https://github.com/XPixelGroup/BasicSR.git

pip install -r requirements.txt

Notes:

  • basicsr is required because training and testing rely on basicsr.train.train_pipeline and basicsr.test.test_pipeline.
  • If pip install basicsr fails at import time with the functional_tensor error, prefer installing BasicSR from the upstream repository instead of the PyPI release.
  • mmcv-lite, mmengine, and timm are required by modules in archs/attn.py.
  • pyiqa, lpips, pytorch-msssim, kornia, and scipy are used by custom losses and metrics.
  • The requirements.txt file intentionally does not pin torch and torchvision, because they must match your CUDA setup and also remain compatible with basicsr.

Data Preparation

The repository already includes metadata files under datasets/:

  • datasets/pairs/*.txt: paired image lists
  • datasets/score_result_*/**.csv: per-image score annotations

The pair list format is:

path/to/input_image|path/to/target_image

Example:

REED_3_RESIZE_512/training/cnt_343_N3.JPG|REED_3_RESIZE_512/training/cnt_343_P0.5.JPG
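For reference, the pair-list format above can be parsed with a few lines of Python. This is a hedged sketch: load_pairs and its root argument are illustrative names, not part of the repository's dataset loaders.

```python
from pathlib import Path

def load_pairs(list_path, root="."):
    """Parse 'input|target' lines into (input_path, target_path) tuples.

    Paths in the pair lists are relative to the repository root, so they
    are joined against `root` here.
    """
    pairs = []
    for line in Path(list_path).read_text().splitlines():
        line = line.strip()
        if not line or "|" not in line:
            continue  # skip blank or malformed lines
        lq, gt = line.split("|", 1)
        pairs.append((Path(root) / lq, Path(root) / gt))
    return pairs
```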

The score CSV format is:

image_name,brightness,noise_control,color,texture,naturalness
cnt_0_N3.JPG,0.2767,0.4433,0.4667,0.3333,0.4133
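The score CSVs are plain comma-separated files, so they can be read with the standard library. A small sketch (load_scores is an illustrative helper, not the repository's loader):

```python
import csv

def load_scores(csv_path):
    """Read a score CSV into {image_name: {dimension: float}}."""
    scores = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            name = row.pop("image_name")
            scores[name] = {k: float(v) for k, v in row.items()}
    return scores
```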

Important:

  • The paths stored in the pair lists are relative to the repository root.
  • To use the provided REED metadata as-is, place the image directory at REED_3_RESIZE_512/ under the project root.
  • For your own datasets, either update the pair-list files or create new ones with the same format.
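Before launching a run, it can be worth verifying that every path referenced by a pair list actually resolves under the project root. A small sketch (missing_images is an illustrative helper, not part of the repo):

```python
from pathlib import Path

def missing_images(list_path, root="."):
    """Return pair-list entries whose files do not exist under `root`."""
    missing = []
    for line in Path(list_path).read_text().splitlines():
        if "|" not in line:
            continue
        for p in line.strip().split("|"):
            if not (Path(root) / p).is_file():
                missing.append(p)
    return missing
```

An empty return value means the pair list is consistent with the on-disk image directory.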

Training

Training uses the BasicSR pipeline and YAML configs in options/.

Example commands:

python train.py -opt options/train_reed_aes_retinex.yml
python train.py -opt options/train_msec_aes_retinex.yml
python train.py -opt options/train_sice_aes_retinex.yml

Available configs:

  • options/train_reed_aes_retinex.yml: REED training
  • options/train_msec_aes_retinex.yml: MSEC training
  • options/train_sice_aes_retinex.yml: SICE training

Outputs are typically written by BasicSR to directories such as:

  • experiments/
  • tb_logger/
  • results/

Evaluation

Run evaluation with:

python test.py -opt options/test_reed_aes_retinex.yml

The default test config loads:

checkpoints/nrt_net.pth

and evaluates on the validation split specified in options/test_reed_aes_retinex.yml.

Included Checkpoint

The repository includes a packaged checkpoint:

  • checkpoints/nrt_net.pth

In the current configs, this checkpoint is used as path.pretrain_network_g.
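In BasicSR configs this is expressed under the path section. The fragment below is a hypothetical excerpt; the key names follow BasicSR conventions, but the exact values in options/test_reed_aes_retinex.yml may differ:

```yaml
path:
  pretrain_network_g: checkpoints/nrt_net.pth
  strict_load_g: true
```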

Optional Supplementary Scripts

Generate supplementary PDF visualizations:

python generate_eccv_supp.py \
  --image REED_3_RESIZE_512/validation/example.jpg \
  --checkpoint checkpoints/nrt_net.pth \
  --output_dir vis_results/eccv_supp

Visualize intermediate dual-branch features:

python visualize_dual_branch.py \
  --image REED_3_RESIZE_512/validation/example.jpg \
  --checkpoint checkpoints/nrt_net.pth \
  --output_dir vis_results/dual_branch

Convert an older checkpoint to the current NRT_Net naming layout:

python convert_checkpoint.py --input old_checkpoint.pth --output checkpoints/nrt_net.pth
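At its core, such a conversion is a parameter-name remapping of the saved state dict. The sketch below illustrates the idea only: rename_keys and the example mapping are hypothetical, and the real logic lives in convert_checkpoint.py.

```python
def rename_keys(state_dict, mapping):
    """Rewrite parameter-name prefixes, e.g. old module names -> new names.

    `mapping` is {old_prefix: new_prefix}; keys that match no prefix are
    copied through unchanged.
    """
    out = {}
    for key, tensor in state_dict.items():
        for old, new in mapping.items():
            if key.startswith(old):
                key = new + key[len(old):]
                break
        out[key] = tensor
    return out
```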

Practical Notes

  • train.py and test.py no longer hardcode CUDA_VISIBLE_DEVICES.
  • To select a specific GPU, set CUDA_VISIBLE_DEVICES in your shell before launching, e.g. CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train_reed_aes_retinex.yml.
  • train.py also sets HF_ENDPOINT=https://hf-mirror.com.

Citation

If you use this repository in academic work, please cite the corresponding paper or project release.
