This repository is for COLA-Net introduced in the following paper: Chong Mou, Jian Zhang, Xiaopeng Fan, Hangfan Liu, and Ronggang Wang, "COLA-Net: Collaborative Attention Network for Image Restoration", (IEEE Transactions on Multimedia 2021)
The code is built on RNAN.
The training datasets are available at DIV2K and SIDD.
In this paper we propose a model dubbed COLA-Net, which exploits both local and non-local attention to restore image content: local attention for areas with complex textures, and non-local attention for areas with highly repetitive details. Importantly, the combination of the two is learnable and self-adaptive. Concretely, the local branch applies channel-wise attention at multiple scales to enlarge the receptive field of the local operation, while the non-local branch uses a novel and robust patch-wise non-local attention model that builds long-range dependencies between image patches, restoring each patch by aggregating useful (self-similar) information from the whole image.
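The patch-wise non-local idea can be illustrated with a minimal NumPy sketch: split the image into patches, score every patch pair by similarity, and rebuild each patch as a softmax-weighted sum over all patches. This is only a toy illustration under simplifying assumptions (non-overlapping patches, raw pixels as features), not the COLA-Net implementation.

```python
import numpy as np

def patch_nonlocal(img, patch=4, stride=4, temp=1.0):
    """Toy patch-wise non-local aggregation: each output patch is a
    softmax-weighted combination of all patches in the image."""
    H, W = img.shape
    # Unfold the image into non-overlapping patches
    # (queries, keys, and values are all the same patch vectors here).
    patches = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            patches.append(img[i:i + patch, j:j + patch].ravel())
    P = np.stack(patches)                      # (N, patch*patch)
    # Pairwise similarity between patch vectors (scaled dot product).
    sim = P @ P.T / temp
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)          # softmax over all patches
    # Each patch aggregates self-similar information from the whole image.
    out_patches = w @ P
    # Fold the aggregated patches back into an image.
    out = np.zeros_like(img)
    k = 0
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            out[i:i + patch, j:j + patch] = out_patches[k].reshape(patch, patch)
            k += 1
    return out
```

In COLA-Net this patch-wise aggregation operates on learned feature maps and is fused with the multi-scale local channel-attention branch through a learnable, self-adaptive combination.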
The pre-trained models are available at Google Drive and PKU Drive.
If you find the code helpful in your research or work, please cite the following papers.
@inproceedings{zhang2019rnan,
title={Residual Non-local Attention Networks for Image Restoration},
author={Zhang, Yulun and Li, Kunpeng and Li, Kai and Zhong, Bineng and Fu, Yun},
booktitle={ICLR},
year={2019}
}
@article{mou2021cola,
title={COLA-Net: Collaborative Attention Network for Image Restoration},
author={Mou, Chong and Zhang, Jian and Fan, Xiaopeng and Liu, Hangfan and Wang, Ronggang},
journal={IEEE Transactions on Multimedia},
year={2021}
}
## Acknowledgements
This code is built on [RNAN (PyTorch)](https://github.com/yulunzhang/RNAN). We thank the authors for sharing their codes of RNAN.