This repository contains the training, evaluation, checkpoint conversion, and visualization code for NRTNet, a dual-branch low-light image enhancement model. The current codebase is organized around the `NRT_Net` generator defined in `archs/nrt_net_arch.py` and integrated into the BasicSR training and testing pipeline.
- BasicSR-based training and evaluation pipeline
- Dual-branch architecture with illumination features and reflectance prior
- Score-aware paired dataset loaders
- Included pretrained checkpoint: `checkpoints/nrt_net.pth`
- Optional scripts for supplementary visualization and checkpoint conversion
```
NRTNet/
|-- archs/                   # network definitions
|-- data/                    # dataset loaders
|-- losses/                  # custom losses
|-- metrics/                 # custom metrics
|-- models/                  # BasicSR model wrapper
|-- options/                 # training / testing YAML configs
|-- datasets/                # pair lists and score CSV files
|-- checkpoints/             # pretrained or training checkpoints
|-- train.py                 # training entrypoint
|-- test.py                  # evaluation entrypoint
|-- generate_eccv_supp.py    # PDF supplementary figure generation
|-- visualize_dual_branch.py # dual-branch feature visualization
|-- convert_checkpoint.py    # checkpoint conversion
```
The code is intended to run with Python 3.10 and a CUDA-enabled PyTorch environment.
Important compatibility note:
- A fresh install test showed that `basicsr` does not work with the latest `torchvision` releases in a clean environment.
- In particular, installing the newest `torch`/`torchvision` pair caused a `basicsr` import failure with `ModuleNotFoundError: torchvision.transforms.functional_tensor`.
- Installing BasicSR directly from the upstream project repository can avoid this issue in practice, for example with `pip install git+https://github.com/XPixelGroup/BasicSR.git`.
- `mmcv` also failed to build in the fresh environment, while `mmcv-lite` installed successfully.
- Because of this, do not treat `pip install torch torchvision` with the newest versions as a guaranteed working setup for this project.
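A quick way to check whether the installed `torchvision` still exposes the module that the PyPI `basicsr` release imports. This is an illustrative helper (the `has_functional_tensor` name is not part of the repo):

```python
import importlib.util


def has_functional_tensor():
    """Return True if torchvision still ships transforms.functional_tensor,
    the module that basicsr's PyPI release imports at load time."""
    try:
        spec = importlib.util.find_spec(
            "torchvision.transforms.functional_tensor")
    except ModuleNotFoundError:
        return False  # torchvision itself is not installed
    return spec is not None
```

If this returns `False` on an environment where `torchvision` is installed, expect the `functional_tensor` import error and prefer the upstream BasicSR install.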
Recommended setup:
```
conda create -n nrtnet python=3.10 -y
conda activate nrtnet

# Install a BasicSR-compatible torch / torchvision pair first.
# Avoid blindly using the newest torchvision release.
# If you already have a working environment for this project,
# prefer exporting and reusing those exact versions.
pip install torch torchvision

# If the PyPI release of basicsr hits the functional_tensor import error,
# install BasicSR from the upstream project instead.
pip install git+https://github.com/XPixelGroup/BasicSR.git

pip install -r requirements.txt
```

Notes:
- `basicsr` is required because training and testing rely on `basicsr.train.train_pipeline` and `basicsr.test.test_pipeline`.
- If `pip install basicsr` fails at import time with the `functional_tensor` error, prefer installing BasicSR from the upstream repository instead of the PyPI release.
- `mmcv-lite`, `mmengine`, and `timm` are required by modules in `archs/attn.py`.
- `pyiqa`, `lpips`, `pytorch-msssim`, `kornia`, and `scipy` are used by custom losses and metrics.
- The `requirements.txt` file intentionally does not pin `torch` and `torchvision`, because they must match your CUDA setup and also remain compatible with `basicsr`.
The repository already includes metadata files under `datasets/`:
- `datasets/pairs/*.txt`: paired image lists
- `datasets/score_result_*/**.csv`: per-image score annotations
The pair list format is:

```
path/to/input_image|path/to/target_image
```

Example:

```
REED_3_RESIZE_512/training/cnt_343_N3.JPG|REED_3_RESIZE_512/training/cnt_343_P0.5.JPG
```
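A minimal loader sketch for this pair-list format. The `load_pairs` helper name and `root` argument are illustrative, not part of the repo's API:

```python
from pathlib import Path


def load_pairs(pair_list_path, root="."):
    """Parse 'input|target' lines into (input_path, target_path) tuples.

    Paths inside the file are relative to the repository root, so `root`
    should normally point at the project checkout directory.
    """
    pairs = []
    for line in Path(pair_list_path).read_text().splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        lq, gt = line.split("|", 1)  # split only on the first separator
        pairs.append((Path(root) / lq, Path(root) / gt))
    return pairs
```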
The score CSV format is:

```
image_name,brightness,noise_control,color,texture,naturalness
cnt_0_N3.JPG,0.2767,0.4433,0.4667,0.3333,0.4133
```

Important:
- The paths stored in the pair lists are relative to the repository root.
- To use the provided REED metadata as-is, place the image directory at `REED_3_RESIZE_512/` under the project root.
- For your own datasets, either update the pair-list files or create new ones with the same format.
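The score CSVs can be read with the standard library. A sketch (the `load_scores` helper is illustrative, not part of the repo):

```python
import csv


def load_scores(csv_path):
    """Map image_name -> {score_name: float} for one score CSV."""
    scores = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            name = row.pop("image_name")
            # remaining columns: brightness, noise_control, color,
            # texture, naturalness
            scores[name] = {k: float(v) for k, v in row.items()}
    return scores
```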
Training uses the BasicSR pipeline and the YAML configs in `options/`.
Example commands:

```
python train.py -opt options/train_reed_aes_retinex.yml
python train.py -opt options/train_msec_aes_retinex.yml
python train.py -opt options/train_sice_aes_retinex.yml
```

Available configs:
- `options/train_reed_aes_retinex.yml`: REED training
- `options/train_msec_aes_retinex.yml`: MSEC training
- `options/train_sice_aes_retinex.yml`: SICE training
Outputs are typically written by BasicSR to directories such as:
- `experiments/`
- `tb_logger/`
- `results/`
Run evaluation with:

```
python test.py -opt options/test_reed_aes_retinex.yml
```

The default test config loads `checkpoints/nrt_net.pth` and evaluates on the validation split specified in `options/test_reed_aes_retinex.yml`.
The repository includes a packaged checkpoint, `checkpoints/nrt_net.pth`. In the current configs, this checkpoint is used as `path.pretrain_network_g`.
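In YAML terms, the relevant config fragment looks roughly like this. The key names follow standard BasicSR conventions; `strict_load_g` is a common BasicSR option shown here as an assumption, not confirmed from this repo's configs:

```yaml
path:
  pretrain_network_g: checkpoints/nrt_net.pth
  strict_load_g: true
```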
Generate supplementary PDF visualizations:

```
python generate_eccv_supp.py \
    --image REED_3_RESIZE_512/validation/example.jpg \
    --checkpoint checkpoints/nrt_net.pth \
    --output_dir vis_results/eccv_supp
```

Visualize intermediate dual-branch features:

```
python visualize_dual_branch.py \
    --image REED_3_RESIZE_512/validation/example.jpg \
    --checkpoint checkpoints/nrt_net.pth \
    --output_dir vis_results/dual_branch
```

Convert an older checkpoint to the current `NRT_Net` naming layout:

```
python convert_checkpoint.py --input old_checkpoint.pth --output checkpoints/nrt_net.pth
```

GPU and environment notes:
- `train.py` and `test.py` no longer hardcode `CUDA_VISIBLE_DEVICES`.
- To select a specific GPU, set `CUDA_VISIBLE_DEVICES` in your shell before running.
- `train.py` also sets `HF_ENDPOINT=https://hf-mirror.com`.
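A sketch of the equivalent environment handling, for reference only (not the repo's exact code); GPU selection is left to the shell:

```python
import os

# GPU selection happens in the shell, e.g.:
#   CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train_reed_aes_retinex.yml
# train.py itself only routes Hugging Face downloads through a mirror:
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
```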
If you use this repository in academic work, please cite the corresponding paper or project release.

