
# Iterative Prompt Learning for Unsupervised Backlit Image Enhancement

Zhexin Liang, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Chen Change Loy

S-Lab, Nanyang Technological University

:star: Accepted to ICCV 2023, Oral

[Project Page](https://zhexinliang.github.io/CLIP_LIT_page/) | [arXiv] | [Demo Video]

https://github.com/ZhexinLiang/CLIP-LIT/assets/122451585/a34f6808-e39e-4428-bb16-f607f80a8b9f

https://github.com/ZhexinLiang/CLIP-LIT/assets/122451585/91b9fefd-2822-43a9-85d9-cd542875362a

CLIP-LIT, trained using only hundreds of unpaired images, yields favorable results on unseen backlit images captured in various scenarios.

:open_book: For more visual results, check out our project page.

---

## :mega: Updates

## :desktop_computer: Requirements

Create a new anaconda environment:

```shell
conda create -n CLIP_LIT python=3.7 -y
conda activate CLIP_LIT
```

Install the Python dependencies:

```shell
pip install -r requirements.txt
```
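As an optional sanity check (assuming PyTorch is among the packages installed by `requirements.txt`), you can confirm the environment and GPU are visible before running anything:

```shell
# Prints the installed PyTorch version and whether CUDA is available.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```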


## :running_woman: Inference

### Prepare Testing Data:
You can put the testing images in the `input` folder. If you want to test on backlit images, you can download the BAID test dataset and the Backlit300 dataset from [[Google Drive](https://drive.google.com/drive/folders/1tnZdCxmWeOXMbzXKf-V4HYI4rBRl90Qk?usp=sharing) | [BaiduPan (key:1234)](https://pan.baidu.com/s/1bdGTpVeaHNLWN4uvYLRXXA)].

### Testing:

```shell
python test.py
```

The paths to the input images, the output images, and the checkpoint can be changed via command-line arguments.

Example usage:

```shell
python test.py -i ./Backlit300 -o ./inference_results/Backlit300 -c ./pretrained_models/enhancement_model.pth
```
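To enhance your own images, place them in the `input` folder and point `-i` there. The output folder name below is only an illustrative choice, and the checkpoint path assumes `enhancement_model.pth` has been placed under `pretrained_models/`:

```shell
python test.py -i ./input -o ./inference_results/my_images -c ./pretrained_models/enhancement_model.pth
```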


## :train: Training

### Prepare Training Data and the initial weights:
You should download the backlit and reference image datasets and put them under the repo. In our experiments, we randomly select 380 backlit images from the BAID training dataset and 384 well-lit images from the DIV2K dataset as the unpaired training data. We provide the training data we used at [[Google Drive](https://drive.google.com/drive/folders/1X1tawqmUsn69T24VmHSl_qmEFxGLzMf0?usp=sharing) | [BaiduPan (key:1234)](https://pan.baidu.com/s/1a0_mUpoFJszjH1eHfBbJPw)] for your reference.

You should also download the initial prompt pair checkpoint (`init_prompt_pair.pth`) from [[Release](https://github.com/ZhexinLiang/CLIP-LIT/releases/tag/v1.0.0) | [Google Drive](https://drive.google.com/drive/folders/1mImPIUaYbXfZ_CHPvdNK-xKrt94abQO5?usp=sharing) | [BaiduPan (key:1234)](https://pan.baidu.com/s/1H4lOrLaYlS0PYTF4pgfSDw)] and put it into the `pretrained_models/init_pretrained_models` folder.
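For example, assuming the checkpoint was downloaded to the current directory (adjust the source path to wherever your browser saved it), placing it could look like this:

```shell
# Create the expected folder and move the downloaded initial prompt pair into it.
mkdir -p pretrained_models/init_pretrained_models
mv init_prompt_pair.pth pretrained_models/init_pretrained_models/
```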

After the data and the initial model weights are prepared, you can use the commands below to set the training data paths, fine-tune the prompt pair, and train the enhancement model.

If you don't want to download the initial prompt pair, you can train without the initial checkpoints using the train-from-scratch command below. However, in that case, the total number of iterations should be at least $50K$ based on our experiments.

### Commands
Example usage:

```shell
python train.py -b ./train_data/BAID_380/resize_input/ -r ./train_data/DIV2K_384/
```

There are other arguments you may want to change; you can set these hyperparameters from the command line.

For example, you can use the following command to **train from scratch**.

```shell
python train.py \
 -b ./train_data/BAID_380/resize_input/ \
 -r ./train_data/DIV2K_384/ \
 --train_lr 0.00002 \
 --prompt_lr 0.000005 \
 --eta_min 5e-6 \
 --weight_decay 0.001 \
 --num_epochs 3000 \
 --num_reconstruction_iters 1000 \
 --num_clip_pretrained_iters 8000 \
 --train_batch_size 8 \
 --prompt_batch_size 16 \
 --display_iter 20 \
 --snapshot_iter 20 \
 --prompt_display_iter 20 \
 --prompt_snapshot_iter 100 \
 --load_pretrain False \
 --load_pretrain_prompt False
```

Here are the explanations of the important arguments:
- `b`: path to the input images (backlit images).
- `r`: path to the reference images (well-lit images).
- `train_lr`: the learning rate for the enhancement model training.
- `prompt_lr`: the learning rate for the prompt pair learning.
- `num_epochs`: the number of total training epochs. For the default setting, 1 epoch = $\frac{\text{number of training images}}{\text{batch size}}$ = 46 iterations.
- `num_reconstruction_iters`: the number of iterations for the reconstruction stage of the enhancement network (i.e. the initial enhancement network training), included in the `num_epochs`.
- `num_clip_pretrained_iters`: the number of iterations for the prompt initialization, included in the `num_epochs`.
- `train_batch_size`: the batch size for the enhancement model training.
- `prompt_batch_size`: the batch size for the prompt pair training.
- `display_iter`: the frequency to display the training log during the enhancement model training.
- `snapshot_iter`: the frequency to save the checkpoint during the enhancement model training.
- `prompt_display_iter`: the frequency to display the training log during the prompt pair learning.
- `prompt_snapshot_iter`: the frequency to save the checkpoint during the prompt pair learning.
- `load_pretrain`: whether to load the pretrained enhancement model.
- `load_pretrain_prompt`: whether to load the pretrained prompt pair.
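As a complement to the train-from-scratch command above, here is a minimal sketch of fine-tuning with the downloaded initial prompt pair. It assumes `train.py` finds the checkpoint in the `pretrained_models/init_pretrained_models` folder described earlier and leaves the remaining hyperparameters at the script's defaults:

```shell
# Sketch: reuse the released initial prompt pair instead of learning it from scratch.
# Assumes init_prompt_pair.pth has been placed in pretrained_models/init_pretrained_models/.
python train.py \
 -b ./train_data/BAID_380/resize_input/ \
 -r ./train_data/DIV2K_384/ \
 --load_pretrain_prompt True
```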

## :love_you_gesture: Citation
If you find our work useful for your research, please consider citing the paper:

```
@inproceedings{liang2023iterative,
  title     = {Iterative prompt learning for unsupervised backlit image enhancement},
  author    = {Liang, Zhexin and Li, Chongyi and Zhou, Shangchen and Feng, Ruicheng and Loy, Chen Change},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages     = {8094--8103},
  year      = {2023}
}
```



### Contact
If you have any questions, please feel free to reach out at `zhexinliang@gmail.com`. 

<!-- ## :newspaper_roll: License

Distributed under the S-Lab License. See `LICENSE` for more information.

## :raised_hands: Acknowledgements

This study is supported by NTU NAP, MOE AcRF Tier 2 (T2EP20221-0033), and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).

This project is built on source codes shared by [Style -->