qibao77 / CFSNet

CFSNet

pytorch code of "CFSNet: Toward a Controllable Feature Space for Image Restoration"(ICCV2019)

[Paper][arXiv][Poster]

Coupling module architecture.

The framework of our proposed controllable feature space network (CFSNet). The details about our proposed CFSNet can be found in our main paper.
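At its core, CFSNet blends features from a main branch and a tuning branch using control coefficients derived from the input scalar α_in, which lets the user trade off between two objectives (e.g. distortion vs. perceptual quality) at test time. A minimal numpy sketch of this interpolation idea, where the function name and shapes are illustrative and not the repository's actual API:

```python
import numpy as np

def couple(main_feat, tuning_feat, alpha):
    """Blend two feature maps with a scalar control coefficient.

    alpha = 0 keeps only the main-branch features (one objective,
    e.g. distortion-oriented); alpha = 1 keeps only the tuning-branch
    features (the other objective, e.g. perception-oriented).
    """
    return (1.0 - alpha) * main_feat + alpha * tuning_feat

# Toy feature maps (C x H x W); sweeping alpha moves the output
# smoothly between the two branches.
main_f = np.zeros((4, 8, 8))
tune_f = np.ones((4, 8, 8))
for alpha in (0.0, 0.5, 1.0):
    out = couple(main_f, tune_f, alpha)
    print(alpha, out.mean())
```

In the actual network the coupling coefficients are learned per coupling module from α_in rather than applied as a single raw scalar; see the main paper for the precise formulation.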

If you find our work useful in your research or publications, please consider citing:

@InProceedings{Wang_2019_ICCV,
author = {Wang, Wei and Guo, Ruiming and Tian, Yapeng and Yang, Wenming},
title = {CFSNet: Toward a Controllable Feature Space for Image Restoration},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}

@article{wang2019cfsnet,
  title={CFSNet: Toward a Controllable Feature Space for Image Restoration},
  author={Wang, Wei and Guo, Ruiming and Tian, Yapeng and Yang, Wenming},
  journal={arXiv preprint arXiv:1904.00634},
  year={2019}
}

Contents

  1. Illustration
  2. Testing
  3. Training
  4. Results

Illustration

The overall code framework:

          CFSNet
          └── codes
              ├── data
              │   ├── deblock_dataset.py
              │   ├── denoise_dataset.py
              │   ├── sr_dataset.py
              │   └── util.py
              ├── models
              │   ├── modules
              │   │   ├── architecture.py
              │   │   ├── block.py
              │   │   ├── loss.py
              │   │   ├── main_net.py
              │   │   └── tuning_blocks.py
              │   └── CFSNet_model.py
              ├── settings
              │   ├── test
              │   ├── train
              │   └── options.py
              ├── utils
              │   ├── logger.py
              │   └── util.py
              ├── train.py
              └── test.py

Dependencies and Installation

GPU: CUDA 8.0. Python 3.6.3 (Anaconda is recommended)
Packages:

pip install torch-0.4.1-cp36-cp36m-manylinux1_x86_64.whl
pip install torchvision-0.2.1-py2.py3-none-any.whl
pip install opencv-python
pip install opencv-contrib-python
pip install lmdb
pip install scikit_image-0.13.1-cp36-cp36m-manylinux1_x86_64.whl
pip install tensorboard-logger

(some .whl packages can be found here: Google Cloud Disk)

Testing

We provide some trained models of CFSNet, and you can download them from Google Cloud Disk.

  1. Prepare datasets according to task type. (Some datasets can be found in Google Cloud Disk.)
  2. Modify the configuration file (settings/test/*.json) to match your setup. (Please refer to 'settings/test' for instructions.)
  3. Run the following command for evaluation:
    python test.py -opt settings/test/*.json 

Training

  1. Prepare training datasets according to task type.
  2. Modify the configuration file (settings/train/*.json) to match your setup. (Please refer to 'settings/train' for instructions.)
  3. Run the following command for training:
    python train.py -opt settings/train/*.json

    Note: Our training process consists of two steps: first train the main branch, then train the tuning branch. Therefore, you need to modify the configuration file at each training stage.
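To make the two-stage idea concrete, the options file might differ between stages roughly as follows. The keys below are purely illustrative placeholders, not the repository's actual option names; consult the files under 'settings/train' for the real ones:

```
// Stage 1 (hypothetical): train the main branch from scratch
{ "stage": "main_branch", "pretrained_main": null }

// Stage 2 (hypothetical): freeze the main branch, load its checkpoint,
// and train only the tuning branch
{ "stage": "tuning_branch", "pretrained_main": "path/to/main_branch_checkpoint.pth" }
```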

Results

Visual Results

Perceptual and distortion balance of “215”, “211” and “268” (PIRM test dataset) for 4× image super-resolution.

Smooth control of the distortion-perception trade-off with control scalar α_in from 0 to 1.

Color image denoising results with unknown noise level σ = 15 (first two rows) and σ = 40 (last two rows). α_in = 0.5 and α_in = −0.3 correspond to the highest PSNR results, respectively.

Smooth control of noise reduction and detail preservation with control scalar α_in from 0 to 1.

JPEG image artifacts removal results of “house” and “ocean” (LIVE1) with unknown quality factor 20. α_in = 0.5 corresponds to the highest PSNR results, and the best visual results are marked with red boxes.

Visual results of single image super-resolution with unseen degradation (the blur kernel is 7 × 7 Gaussian kernel with standard deviation 1.6, the scale factor is 3). α_in = 0.15 corresponds to the highest PSNR results.

Smooth control of blur removal and detail sharpening with control scalar α_in from 0 to 0.8.

Acknowledgement

Thanks to BasicSR, which provides many useful codes that facilitated our work.