Yiyu Li · Ke Xu · Gerhard Petrus Hancke · Rynson W.H. Lau
Abstract: Images captured under sub-optimal illumination conditions may contain both over- and under-exposures. Current approaches mainly focus on adjusting image brightness, which may exacerbate color tone distortion in under-exposed areas and fail to restore accurate colors in over-exposed regions. We observe that over- and under-exposed regions display opposite color tone distribution shifts, which may not be easily normalized in joint modeling as they usually do not have "normal-exposed" regions/pixels as reference. In this paper, we propose a novel method to enhance images with both over- and under-exposures by learning to estimate and correct such color shifts. Specifically, we first derive the color feature maps of the brightened and darkened versions of the input image via a UNet-based network, followed by a pseudo-normal feature generator to produce pseudo-normal color feature maps. We then propose a novel COlor Shift Estimation (COSE) module to estimate the color shifts between the derived brightened (or darkened) color feature maps and the pseudo-normal color feature maps. The COSE module corrects the estimated color shifts of the over- and under-exposed regions separately. We further propose a novel COlor MOdulation (COMO) module to modulate the separately corrected colors in the over- and under-exposed regions to produce the enhanced image. Comprehensive experiments show that our method outperforms existing approaches.
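For orientation, below is a minimal, runnable sketch of the data flow the abstract describes. It is not the actual CSEC implementation: every module body, channel width, and the fixed brighten/darken gains are placeholder assumptions; only the overall wiring (UNet color features → pseudo-normal generator → per-branch COSE correction → COMO fusion) follows the abstract.

```python
# Illustrative sketch only -- each module body below is a placeholder,
# not the CSEC architecture. Channel widths and shapes are assumptions.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Stand-in for the UNet-based network that derives color feature maps."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return self.dec(self.enc(x))

class PseudoNormalGenerator(nn.Module):
    """Fuses brightened/darkened features into pseudo-normal features."""
    def __init__(self, ch=16):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, f_bright, f_dark):
        return self.fuse(torch.cat([f_bright, f_dark], dim=1))

class COSE(nn.Module):
    """Estimates a per-branch color shift against the pseudo-normal features
    and subtracts it (simplifying stand-in for the COSE module)."""
    def __init__(self, ch=16):
        super().__init__()
        self.shift = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, f, f_normal):
        return f - self.shift(torch.cat([f, f_normal], dim=1))

class COMO(nn.Module):
    """Modulates the two corrected branches into the enhanced image
    (stand-in for the COMO module)."""
    def __init__(self, ch=16):
        super().__init__()
        self.out = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, f_over, f_under):
        return torch.sigmoid(self.out(torch.cat([f_over, f_under], dim=1)))

# Dummy forward pass; in the paper the brightened/darkened versions come
# from learned exposure adjustment, not the fixed gains used here.
x = torch.rand(1, 3, 64, 64)
unet, pn_gen, cose, como = TinyUNet(), PseudoNormalGenerator(), COSE(), COMO()
f_b = unet((x * 1.5).clamp(0, 1))   # features of the brightened input
f_d = unet(x * 0.5)                 # features of the darkened input
f_n = pn_gen(f_b, f_d)              # pseudo-normal color features
enhanced = como(cose(f_b, f_n), cose(f_d, f_n))
print(enhanced.shape)               # torch.Size([1, 3, 64, 64])
```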
To get started, clone this project, create a conda virtual environment with Python 3.9 (higher versions may work as well), and install the requirements:
```bash
git clone https://github.com/yiyulics/CSEC.git
cd CSEC
conda create -n csec python=3.9
conda activate csec
# Change the following line to match your environment
# Refer to https://pytorch.org/get-started/previous-versions/#v1121
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
pip install pytorch_lightning==1.7.6
pip install -r requirements.txt
```
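To quickly confirm the environment is set up correctly, you can run the following sanity check (not part of the repository):

```python
# Sanity check for the pinned versions and CUDA visibility.
import torch
import torchvision
import pytorch_lightning as pl

print(torch.__version__)          # expect 1.12.1
print(torchvision.__version__)    # expect 0.13.1
print(pl.__version__)             # expect 1.7.6
print(torch.cuda.is_available())  # True if the CUDA 11.6 build sees a GPU
```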
To evaluate the trained model, you'll need to do the following steps:

1. Place the pretrained checkpoint in the `pretrained/` folder.
2. Modify the path to the test dataset in `src/config/ds/test.yaml` (if you don't need ground truth images for testing, just leave the `GT` value as `none`).
3. Run:
   ```bash
   python src/test.py checkpoint_path=/path/to/checkpoint/filename.ckpt
   ```
4. Under `/path/to/checkpoint/`, a new folder named `test_result/` will be created, and all the final enhanced images (`*.png` images) will be saved in this folder. Other intermediate results of each image will also be saved in the subfolders of `test_result/` (e.g., `test_result/normal/` for pseudo-normal images).
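Once testing finishes, you can list the enhanced outputs with a few lines of Python (a convenience snippet, not part of the repository; adjust the path to your run):

```python
# List the enhanced outputs produced by src/test.py, assuming the layout
# described above (final *.png images directly under test_result/).
from pathlib import Path

result_dir = Path("/path/to/checkpoint/test_result")  # adjust to your run
outputs = sorted(result_dir.glob("*.png"))
for png in outputs:
    print(png.name)
print(f"{len(outputs)} enhanced images found")
```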
To train your own model from scratch, you'll need to do the following steps:

1. Modify the path to the training dataset in `src/config/ds/train.yaml`.
2. Modify the path to the validation dataset in `src/config/ds/valid.yaml` (if you have one).
3. Run:
   ```bash
   python src/train.py name=your_experiment_name
   ```
4. The trained models and intermediate results will be saved in the `log/` folder.

Note: you may need to reduce the batch size in `src/config/config.yaml` to avoid out-of-memory errors. If you do, but want to preserve quality, increase the number of training iterations and decrease the learning rate by the same factor by which you reduce the batch size (e.g., halving the batch size means doubling the iterations and halving the learning rate).
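As a rough illustration of that scaling rule (the base values below are placeholders, not the repository's actual defaults):

```python
# Linear-scaling heuristic for the note above. Base values are
# placeholders, not the actual hyperparameters in src/config/config.yaml.
def rescale_schedule(base_lr, base_iters, base_bs, new_bs):
    scale = new_bs / base_bs
    return base_lr * scale, int(base_iters / scale)

lr, iters = rescale_schedule(base_lr=1e-4, base_iters=100_000, base_bs=16, new_bs=8)
print(lr, iters)  # 5e-05 200000
```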
If you find our work helpful, please cite our paper as:
```bibtex
@inproceedings{li_2024_cvpr_csec,
    title     = {Color Shift Estimation-and-Correction for Image Enhancement},
    author    = {Yiyu Li and Ke Xu and Gerhard Petrus Hancke and Rynson W.H. Lau},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2024}
}
```
Should you have any questions, feel free to post an issue or contact me at yiyuli.cs@my.cityu.edu.hk.
This project is largely based on LCDPNet. Many thanks to its authors for their excellent contributions!