Abstract: Low-light image enhancement (LLIE) techniques attempt to increase the visibility of images captured in low-light scenarios. However, as a result of enhancement, a variety of image degradations such as noise and color bias are revealed. Furthermore, each particular LLIE approach may introduce a different form of flaw within its enhanced results. To combat these image degradations, post-processing denoisers have been widely used, which often yield oversmoothed results lacking detail. We propose using a diffusion model as a post-processing approach, and we introduce the Low-light Post-processing Diffusion Model (LPDM) to model the conditional distribution between under-exposed and normally-exposed images. We apply LPDM in a manner that avoids the computationally expensive generative reverse process of typical diffusion models, and post-process images in one pass through LPDM. Extensive experiments demonstrate that our approach outperforms competing post-processing denoisers by increasing the perceptual quality of enhanced low-light images on a variety of challenging low-light datasets. Source code is available at https://github.com/savvaki/LPDM.
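The core of the approach can be sketched as follows: the enhanced image is perturbed by the forward diffusion process up to a chosen timestep, and a single conditioned call to the noise-prediction network then recovers a clean estimate in closed form, with no iterative reverse sampling. The code below is a minimal Python sketch under standard DDPM assumptions; the names eps_model and alpha_bar, the channel-concatenation conditioning, and the exact roles of the control parameters s and phi are illustrative assumptions, not the repository's API.

import torch

def lpdm_one_pass(eps_model, x_eta, x_dark, alpha_bar, s, phi):
    # x_eta:  image enhanced by an LLIE technique, shape (B, C, H, W)
    # x_dark: the original low-light image, used as conditioning
    # alpha_bar: cumulative product of the noise schedule, shape (T,)
    # Forward-diffuse the enhanced image to timestep s in a single step.
    noise = torch.randn_like(x_eta)
    a_s = alpha_bar[s]
    x_s = a_s.sqrt() * x_eta + (1.0 - a_s).sqrt() * noise
    # One conditioned denoiser call -- no iterative generative reverse process.
    t = torch.full((x_eta.shape[0],), phi, device=x_eta.device, dtype=torch.long)
    eps = eps_model(torch.cat([x_s, x_dark], dim=1), t)
    # Closed-form clean estimate from the predicted noise.
    a_phi = alpha_bar[phi]
    return (x_s - (1.0 - a_phi).sqrt() * eps) / a_phi.sqrt()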
The image results for all techniques and datasets are available for download here.
The undarken directories contain $\hat{\boldsymbol{x}}_0^\eta$ for each low-light enhancement technique $\eta$; in other words, these are the results before post-processing. The denoised directories contain the results of LPDM in the lpdm_lol subdirectory, as well as the results of the ablation studies. For the LOL dataset, the results of NAFNet and BM3D are also provided; the BM3D results are given for different values of $\sigma$.
Set up and activate the conda virtual environment:
conda env create -f environment.yml
conda activate lpdm
Download the .ckpt file and .yaml file here and place them in the checkpoints directory:
───checkpoints
├───lpdm_lol.ckpt
└───lpdm_lol.yaml
Modify the s and phi (φ) parameters in configs/test/denoise.yaml as desired; a hypothetical excerpt of this config is sketched below. Place your low-light images in the test/dark directory and the corresponding enhanced images in the test/eta directory. Make sure the images have the same names as the files in test/dark.
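The excerpt below shows how such a config might look; only the parameter names s and phi are confirmed by this README, while the values and comments are illustrative assumptions:

# configs/test/denoise.yaml (illustrative excerpt)
s: 100     # assumed: strength of the forward-diffusion perturbation
phi: 10    # assumed: timestep passed to the denoiser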
Run the denoising script:
cd scripts
python denoise_config.py
The post-processed results will be saved in the test/denoised directory. For more options, run python denoise_config.py --help; for example, the device can be selected with --device "cuda".
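For instance, to run the post-processing on the GPU using the documented flag:
python denoise_config.py --device "cuda"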
Modify configs/test/metrics.yaml and specify the denoised image glob path pred_path and the ground truth glob path target_path. To use non-reference metrics, remove target_path from the config. A hypothetical excerpt is shown below.
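The keys pred_path and target_path come from this README, but the glob values below are illustrative placeholders:

# configs/test/metrics.yaml (illustrative excerpt)
pred_path: ../test/denoised/*.png                 # denoised images to evaluate
target_path: ../datasets/lol/eval15/high/*.png    # ground truth; omit for non-reference metrics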
Then run the metrics script:
cd scripts
python calculate_metrics.py
The results will be saved to test/results.csv. For more options, run python calculate_metrics.py --help.
Download the LOL Dataset here. Unzip the dataset and place the contents in the datasets/lol directory:
───datasets
└───lol
├───eval15
│ ├───high
│ └───low
└───our485
├───high
└───low
For the commands below, PyTorch Lightning will create a logs directory where checkpoints will be saved. For GPU training options, pass the --gpu argument according to the PyTorch Lightning documentation. The examples below use one GPU.
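For example, assuming the same trailing-comma index format as the single-GPU commands below (an assumption of this sketch), training on the first two GPUs might look like:
python main.py --base configs/train/lpdm_lol.yaml --gpu 0,1,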
Train and log locally:
python main.py --base configs/train/lpdm_lol.yaml --gpu 0,
Train and log metrics with wandb. First set the WANDB_API_KEY, WANDB_ENTITY, and WANDB_PROJECT environment variables.
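For example, the variables can be exported in the shell beforehand (placeholder values, not real credentials):
export WANDB_API_KEY=your_api_key
export WANDB_ENTITY=your_entity
export WANDB_PROJECT=lpdm
Then launch training: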
python main.py --base configs/train/lpdm_lol.yaml --logger_to_use wandb --gpu 0,
An example command to resume a saved checkpoint in the directory RUN_DIRECTORY_NAME_HERE
using wandb logging:
python main.py --resume logs/RUN_DIRECTORY_NAME_HERE --logger_to_use wandb --gpu 0,
To reduce the size of the model, reduce the model_channels parameter in configs/train/lpdm_lol.yaml.
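Since this repository derives from the Stable Diffusion codebase, the parameter plausibly lives under a nested UNet config; the excerpt below is an assumption, with only the model_channels key confirmed by this README:

# configs/train/lpdm_lol.yaml (illustrative excerpt)
model:
  params:
    unet_config:
      params:
        model_channels: 128   # reduce this value for a smaller model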
If you use LPDM in your research, please cite:
@article{panagiotou2024denoising,
title = {Denoising diffusion post-processing for low-light image enhancement},
journal = {Pattern Recognition},
volume = {156},
pages = {110799},
year = {2024},
issn = {0031-3203},
doi = {10.1016/j.patcog.2024.110799},
url = {https://www.sciencedirect.com/science/article/pii/S0031320324005508},
author = {Savvas Panagiotou and Anna S. Bosman}
}
This repository is a derivative of the original Stable Diffusion repository.