Learning Temporal Consistency for Low Light Video Enhancement from Single Images (CVPR2021)
MIT License

StableLLVE

This is a PyTorch implementation of "Learning Temporal Consistency for Low Light Video Enhancement from Single Images" (CVPR 2021), by Fan Zhang, Yu Li, Shaodi You and Ying Fu.

Paper and Supplemental

Requirements

Usage

Training

First, prepare your own training data and put it in the folder ./data. By default, the code reads input images from ./data/train and ground-truth images from ./data/gt; you can change these paths in train.py and dataloader.py.
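The layout above pairs each low-light input with a ground-truth image of the same name. A minimal sketch of that pairing logic (the match-by-filename rule is an assumption for illustration, not taken from the repo's dataloader):

```python
import os

def list_pairs(root="./data"):
    """Return (low_light_path, ground_truth_path) pairs matched by filename.

    Assumes inputs live in <root>/train and targets in <root>/gt, the
    default paths mentioned above.
    """
    train_dir = os.path.join(root, "train")
    gt_dir = os.path.join(root, "gt")
    pairs = []
    for name in sorted(os.listdir(train_dir)):
        gt_path = os.path.join(gt_dir, name)
        if os.path.isfile(gt_path):  # keep only inputs with a matching target
            pairs.append((os.path.join(train_dir, name), gt_path))
    return pairs
```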

Second, you need to predict plausible optical flow for your ground-truth images and put it in the folder ./data/flow. In our paper, we first run instance segmentation with the open-source toolkit detectron2 to obtain object masks, and then use the pretrained CMP model to generate the optical flow we need.
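CMP predicts a dense H x W x 2 flow field conditioned on the object masks. As a toy stand-in (NOT the CMP model), the sketch below only illustrates that data format: it translates each masked object by a chosen offset and leaves the background static.

```python
import numpy as np

def masks_to_flow(masks, offsets, height, width):
    """Build a toy H x W x 2 flow map from instance masks.

    masks:   list of (height, width) boolean arrays, one per object
    offsets: matching list of (dx, dy) pixel displacements (assumed inputs,
             chosen by the caller; CMP would predict realistic motion instead)
    """
    flow = np.zeros((height, width, 2), dtype=np.float32)
    for mask, (dx, dy) in zip(masks, offsets):
        flow[mask, 0] = dx  # horizontal displacement inside the object
        flow[mask, 1] = dy  # vertical displacement inside the object
    return flow
```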

Update:

Finally, you can train models on your own data by running

cd StableLLVE
python train.py 

You can replace the U-Net with your own low-light image enhancement model. Trained models are saved in the folder ./logs.
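Whatever enhancement network you plug in, the training idea stays the same: warp the frame with the predicted flow, enhance both versions, and penalize the gap between "enhance then warp" and "warp then enhance". The sketch below illustrates this with a nearest-neighbor warp and a generic `enhance` callable; both are simplifications for illustration, not the repo's implementation.

```python
import numpy as np

def warp(img, flow):
    """Backward-warp img (HxW or HxWxC) by flow (HxWx2), nearest neighbor."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[src_y, src_x]

def consistency_loss(enhance, img, flow):
    """Mean absolute gap between warp(enhance(img)) and enhance(warp(img))."""
    gap = np.abs(warp(enhance(img), flow) - enhance(warp(img, flow)))
    return float(gap.mean())
```

Note that for a purely pointwise enhancer the two orders commute and the loss is zero; a network that flickers across warped frames is penalized.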

Testing

You can put your test images into the folder ./data/test and just run

cd StableLLVE
python test.py

Model

BibTeX

If you find this repo useful for your research, please consider citing our paper.

@InProceedings{Zhang_2021_CVPR,
    author    = {Zhang, Fan and Li, Yu and You, Shaodi and Fu, Ying},
    title     = {Learning Temporal Consistency for Low Light Video Enhancement From Single Images},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {4967-4976}
}