huawei-noah / Pretrained-IPT

Apache License 2.0


Pre-Trained Image Processing Transformer (IPT)

By Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao. [arXiv]

We study low-level computer vision tasks (such as denoising, super-resolution and deraining) and develop a new pre-trained model, namely, the image processing transformer (IPT). We propose utilizing the well-known ImageNet benchmark to generate a large number of corrupted image pairs. The IPT model is trained on these images with multiple heads and multiple tails, so the pre-trained model can be efficiently employed on the desired task after fine-tuning. With only one pre-trained model, IPT outperforms the current state-of-the-art methods on various low-level benchmarks.

MindSpore Code

Requirements

Dataset

The benchmark datasets can be downloaded as follows:

For super-resolution:

Set5, Set14, B100, Urban100.

Please refer to EDSR-PyTorch

For denoising:

CBSD68, Urban100.

For deraining:

Rain100.

Please refer to PReNet

The result images are converted to the YCbCr color space, and PSNR is evaluated on the Y channel only.
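The Y-channel PSNR used throughout the tables below can be sketched as follows. This is a minimal NumPy illustration, not the repository's own evaluation code; the helper names are ours, and the RGB-to-Y conversion uses the standard ITU-R BT.601 coefficients:

```python
import numpy as np

def rgb_to_y(img):
    """Luma (Y) channel of an RGB image with values in [0, 255],
    using the ITU-R BT.601 full-range-to-studio-range conversion."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr_y(img1, img2, max_val=255.0):
    """PSNR between two RGB images, evaluated on the Y channel only."""
    y1 = rgb_to_y(img1.astype(np.float64))
    y2 = rgb_to_y(img2.astype(np.float64))
    mse = np.mean((y1 - y2) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

In practice, SR evaluation protocols also crop a border of `scale` pixels before computing the metric; the sketch above omits that step for brevity.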

Script Description

This repository provides the inference script of IPT. Follow the steps below to evaluate the image processing tasks, such as SR, denoising and deraining, with the corresponding pretrained models.

Script Parameters

For details about hyperparameters, see option.py.

Evaluation

Pretrained models

The pretrained models are available on Google Drive.

Evaluation Process

Inference examples. For SR x2, x3, x4:

python main.py --dir_data $DATA_PATH --pretrain $MODEL_PATH --data_test Set5+Set14+B100+Urban100 --scale $SCALE --test_only

Note: image paths should look like $DATA_PATH/benchmark/Set5/HR/XXX.png and $DATA_PATH/benchmark/Set5/LR_bicubic/XXX.png, where $DATA_PATH is the directory passed to --dir_data.
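Before launching evaluation, it can help to sanity-check that every HR image has a matching bicubic LR counterpart. The sketch below assumes the layout in the note above, with the X{scale} subfolder and the x{scale} filename suffix following the EDSR-PyTorch convention referenced earlier; the function name is ours:

```python
from pathlib import Path

def check_sr_layout(data_path, dataset="Set5", scale=2):
    """Return the HR images under data_path/benchmark/<dataset>/HR
    whose bicubic LR counterpart (LR_bicubic/X<scale>/<name>x<scale>.png)
    is missing. Layout assumed from the note above."""
    hr_dir = Path(data_path) / "benchmark" / dataset / "HR"
    lr_dir = Path(data_path) / "benchmark" / dataset / "LR_bicubic" / f"X{scale}"
    missing = []
    for hr in sorted(hr_dir.glob("*.png")):
        lr = lr_dir / f"{hr.stem}x{scale}.png"
        if not lr.exists():
            missing.append(hr.name)
    return missing
```

An empty return value means the dataset folder is ready for the command above.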

For denoising with noise levels 30 and 50:

python main.py --dir_data $DATA_PATH --pretrain $MODEL_PATH --data_test CBSD68+Urban100 --scale 1 --denoise --sigma $NOISY_LEVEL --test_only

Note: image paths should look like $DATA_PATH/benchmark/CBSD68/XXX.png, where $DATA_PATH is the directory passed to --dir_data.
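The --sigma flag sets the standard deviation of the additive Gaussian noise (on the 0-255 pixel scale) used to corrupt the clean benchmark images. A hedged sketch of how such corruption is typically generated; the helper is illustrative and the repository's own data pipeline may differ in details such as clipping or seeding:

```python
import numpy as np

def add_gaussian_noise(img, sigma, seed=None):
    """Corrupt a clean uint8 image with additive Gaussian noise of
    standard deviation `sigma` on the 0-255 scale (e.g. 30 or 50),
    then clip back to the valid pixel range."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```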

For deraining:

python main.py --dir_data $DATA_PATH --pretrain $MODEL_PATH --scale 1 --derain --test_only

Note: image paths should look like $DATA_PATH/Rain100L/rain-XXX.png and $DATA_PATH/Rain100L/norain-XXX.png, where $DATA_PATH is the directory passed to --dir_data.
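As with SR, it is worth confirming that every rainy image has its clean ground truth before running. A minimal pairing check, assuming the rain-XXX.png / norain-XXX.png naming from the note above (the function name is ours):

```python
from pathlib import Path

def check_derain_pairs(data_path):
    """Return the rainy images under data_path/Rain100L that have no
    matching clean image (rain-XXX.png <-> norain-XXX.png).
    Layout assumed from the note above."""
    root = Path(data_path) / "Rain100L"
    unmatched = []
    for rainy in sorted(root.glob("rain-*.png")):
        clean = root / rainy.name.replace("rain-", "norain-", 1)
        if not clean.exists():
            unmatched.append(rainy.name)
    return unmatched
```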

Results

| Method | Scale | Set5 | Set14 | B100 | Urban100 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| [VDSR](https://github.com/twtygqyy/pytorch-vdsr) | X2 | 37.53 | 33.05 | 31.90 | 30.77 |
| [EDSR](https://github.com/sanghyun-son/EDSR-PyTorch) | X2 | 38.11 | 33.92 | 32.32 | 32.93 |
| [RCAN](https://github.com/yulunzhang/RCAN) | X2 | 38.27 | 34.12 | 32.41 | 33.34 |
| [RDN](https://github.com/yulunzhang/RDN) | X2 | 38.24 | 34.01 | 32.34 | 32.89 |
| [OISR-RK3](https://github.com/HolmesShuan/OISR-PyTorch) | X2 | 38.21 | 33.94 | 32.36 | 33.03 |
| [RNAN](https://github.com/yulunzhang/RNAN) | X2 | 38.17 | 33.87 | 32.32 | 32.73 |
| [SAN](https://github.com/hszhao/SAN) | X2 | 38.31 | 34.07 | 32.42 | 33.1 |
| [HAN](https://github.com/wwlCape/HAN) | X2 | 38.27 | 34.16 | 32.41 | 33.35 |
| [IGNN](https://github.com/sczhou/IGNN) | X2 | 38.24 | 34.07 | 32.41 | 33.23 |
| IPT (ours) | X2 | **38.37** | **34.43** | **32.48** | **33.76** |

| Method | Scale | Set5 | Set14 | B100 | Urban100 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| [VDSR](https://github.com/twtygqyy/pytorch-vdsr) | X3 | 33.67 | 29.78 | 28.83 | 27.14 |
| [EDSR](https://github.com/sanghyun-son/EDSR-PyTorch) | X3 | 34.65 | 30.52 | 29.25 | 28.80 |
| [RCAN](https://github.com/yulunzhang/RCAN) | X3 | 34.74 | 30.65 | 29.32 | 29.09 |
| [RDN](https://github.com/yulunzhang/RDN) | X3 | 34.71 | 30.57 | 29.26 | 28.80 |
| [OISR-RK3](https://github.com/HolmesShuan/OISR-PyTorch) | X3 | 34.72 | 30.57 | 29.29 | 28.95 |
| [RNAN](https://github.com/yulunzhang/RNAN) | X3 | 34.66 | 30.52 | 29.26 | 28.75 |
| [SAN](https://github.com/hszhao/SAN) | X3 | 34.75 | 30.59 | 29.33 | 28.93 |
| [HAN](https://github.com/wwlCape/HAN) | X3 | 34.75 | 30.67 | 29.32 | 29.10 |
| [IGNN](https://github.com/sczhou/IGNN) | X3 | 34.72 | 30.66 | 29.31 | 29.03 |
| IPT (ours) | X3 | **34.81** | **30.85** | **29.38** | **29.49** |

| Method | Scale | Set5 | Set14 | B100 | Urban100 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| [VDSR](https://github.com/twtygqyy/pytorch-vdsr) | X4 | 31.35 | 28.02 | 27.29 | 25.18 |
| [EDSR](https://github.com/sanghyun-son/EDSR-PyTorch) | X4 | 32.46 | 28.80 | 27.71 | 26.64 |
| [RCAN](https://github.com/yulunzhang/RCAN) | X4 | 32.63 | 28.87 | 27.77 | 26.82 |
| [SAN](https://github.com/hszhao/SAN) | X4 | **32.64** | 28.92 | 27.78 | 26.79 |
| [RDN](https://github.com/yulunzhang/RDN) | X4 | 32.47 | 28.81 | 27.72 | 26.61 |
| [OISR-RK3](https://github.com/HolmesShuan/OISR-PyTorch) | X4 | 32.53 | 28.86 | 27.75 | 26.79 |
| [RNAN](https://github.com/yulunzhang/RNAN) | X4 | 32.49 | 28.83 | 27.72 | 26.61 |
| [HAN](https://github.com/wwlCape/HAN) | X4 | **32.64** | 28.90 | 27.80 | 26.85 |
| [IGNN](https://github.com/sczhou/IGNN) | X4 | 32.57 | 28.85 | 27.77 | 26.84 |
| IPT (ours) | X4 | **32.64** | **29.01** | **27.82** | **27.26** |

Citation

@misc{chen2020pre,
      title={Pre-Trained Image Processing Transformer}, 
      author={Chen, Hanting and Wang, Yunhe and Guo, Tianyu and Xu, Chang and Deng, Yiping and Liu, Zhenhua and Ma, Siwei and Xu, Chunjing and Xu, Chao and Gao, Wen},
      year={2021},
      eprint={2012.00364},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement