pixelRL


Fully Convolutional Network with Multi-Step Reinforcement Learning for Image Processing

This is the official implementation of our AAAI 2019 and TMM 2020 papers. We provide sample code for training and testing, as well as pretrained models for Gaussian denoising.

Requirements

You can install the required libraries with pip install -r requirements.txt. We tested this code with CUDA 10.0 and cuDNN 7.3.1.

Folders

The folder denoise contains the training and test code and pretrained models without convGRU and reward map convolution (see Table 2 in our paper). denoise_with_convGRU contains the versions with convGRU, and denoise_with_convGRU_and_RMC contains the versions with both convGRU and reward map convolution.

Usage

Training

If you want to train the model without convGRU and reward map convolution, please go to denoise and run train.py.

cd denoise
python train.py

Test with pretrained models

If you want to test the pretrained model without convGRU and reward map convolution, please go to denoise and run test.py.

cd denoise
python test.py

Note

Although we used the BSD68 training set and the Waterloo Exploration Database for training in our paper, this sample code contains only the BSD68 training set (428 images in BSD68/gray/train). Therefore, to reproduce our results (Table 2 in our paper) by running train.py, please download the Waterloo Exploration Database and add it to the training set yourself, for example as sketched below.
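The exact way to add the extra images depends on how your copy of the code enumerates training data. Below is a minimal sketch, assuming the downloaded Waterloo images are color .bmp files in a folder named exploration_database_and_code/pristine_images (a hypothetical path) and that the training images are read directly from BSD68/gray/train; adjust the paths and output extension if your setup lists training images in a text file instead.

import glob
import os

from PIL import Image

WATERLOO_DIR = "./exploration_database_and_code/pristine_images"  # hypothetical location of the downloaded Waterloo images
TRAIN_DIR = "./BSD68/gray/train"                                  # training folder named in this README

for src in glob.glob(os.path.join(WATERLOO_DIR, "*.bmp")):
    name = os.path.splitext(os.path.basename(src))[0]
    # Convert to 8-bit grayscale so the added images match the BSD68 training images,
    # and save them into the training folder (change ".png" if the loader expects another extension).
    Image.open(src).convert("L").save(os.path.join(TRAIN_DIR, name + ".png"))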

The pretrained models were trained on both the BSD68 training set and the Waterloo Exploration Database, so test.py can reproduce our results (Table 2 in our paper).

References

We used the publicly available models of [Zhang+, CVPR17] as the initial weights of our model, except for the convGRU and the last layers. We obtained the weights from here and converted them from MatConvNet format to caffemodel in order to read them with Chainer.
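For reference, a converted caffemodel can be inspected in Chainer roughly as follows; this is only a sketch, and the file name below is a placeholder rather than a file shipped with this repository.

from chainer.links.caffe import CaffeFunction

# Load the converted weights (hypothetical file name).
caffe_model = CaffeFunction("initial_weights.caffemodel")

# Each Caffe layer becomes a child link named after that layer, so its parameters
# can be listed here (and copied into the corresponding layers of another network).
for link_name, link in caffe_model.namedlinks(skipself=True):
    for param_name, param in link.namedparams():
        print(link_name, param_name, param.shape)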

We obtained the BSD68 dataset from

Our implementation is based on a3c.py in the ChainerRL library and the following articles. We would like to thank the authors.

Citation

If you use our code in your research, please cite our paper.

@inproceedings{aaai_furuta_2019,
    author={Ryosuke Furuta and Naoto Inoue and Toshihiko Yamasaki},
    title={Fully Convolutional Network with Multi-Step Reinforcement Learning for Image Processing},
    booktitle={AAAI Conference on Artificial Intelligence (AAAI)},
    year={2019}
}
@article{furuta2020pixelrl,
    title={PixelRL: Fully Convolutional Network with Reinforcement Learning for Image Processing},
    author={Ryosuke Furuta and Naoto Inoue and Toshihiko Yamasaki},
    journal={IEEE Transactions on Multimedia (TMM)},
    year={2020},
    volume={22},
    number={7},
    pages={1704--1719}
}

Update on June 26, 2020

Sample code for image restoration and color enhancement is available at the links below.

Image restoration: https://1drv.ms/u/s!Ar-9gQOTm4zvj71tUM4y_23NV3zUZw
Note that only the BSD68 dataset is included. To reproduce our results, please download the Waterloo Exploration Database and the ILSVRC2015 validation set and add them to the list of training images.

Color enhancement: https://1drv.ms/u/s!Ar-9gQOTm4zvj71scx63dJ80aH59xg
To run this code, please download the color enhancement dataset from here.