
S2MA

Source code for our CVPR 2020 paper "Learning Selective Self-Mutual Attention for RGB-D Saliency Detection" by Nian Liu, Ni Zhang, and Junwei Han.

Created by Ni Zhang. Email: nnizhang.1995@gmail.com

Usage

Requirements

  1. PyTorch 0.4.1
  2. torchvision 0.1.8 (a quick version check follows this list)
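
A minimal sketch for verifying the environment before training. The CUDA assertion is an assumption on our side (a GPU setup is not stated explicitly in this README):

    import torch
    import torchvision

    # Confirm the pinned versions from the list above.
    print(torch.__version__)        # expect 0.4.1
    print(torchvision.__version__)  # expect 0.1.8
    # Assumption: training runs on a CUDA GPU.
    assert torch.cuda.is_available()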

Training

  1. Download the RGB-D datasets [baidu pan fetch code: chdz | Google drive] and the pretrained VGG model [baidu pan fetch code: dyt4 | Google drive], then put them in the ./RGBdDataset_processed and ./pretrained_model directories, respectively.
  2. Run python generate_list.py to generate the image lists.
  3. Modify the settings in parameter.py (a hypothetical sketch of typical settings follows this list).
  4. Start training with python train.py.
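
For orientation, step 3 usually amounts to pointing the paths at the directories from step 1. The option names below are hypothetical illustrations, not the repo's actual variables:

    # parameter.py -- hypothetical option names for illustration only;
    # check the actual file for the real ones.
    train_dataset_root = './RGBdDataset_processed'  # datasets from step 1
    pretrained_model_dir = './pretrained_model'     # VGG weights from step 1
    batch_size = 4                                  # illustrative value
    learning_rate = 1e-4                            # illustrative value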

Testing

  1. Download our models [baidu pan fetch code: ly9k | Google drive] and put them in the ./models directory. After downloading, you will find two models (S2MA.pth and S2MA_DUT.pth): S2MA_DUT.pth is for testing on the DUT-RGBD dataset, and S2MA.pth is for testing on the remaining datasets (see the snippet after this list).
  2. Modify the settings in parameter.py.
  3. Start testing with python test.py; the saliency maps will be written to the ./output directory.
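
A small, runnable sketch of the checkpoint rule from step 1. The dataset name 'NJU2K' is just an example of a non-DUT-RGBD benchmark, and we assume the .pth files hold state dicts:

    import torch

    # Pick the checkpoint per the rule above: S2MA_DUT.pth only for DUT-RGBD.
    dataset = 'NJU2K'  # example name; any benchmark other than DUT-RGBD
    ckpt = './models/S2MA_DUT.pth' if dataset == 'DUT-RGBD' else './models/S2MA.pth'
    # Assumption: each file is a saved state dict (a mapping of tensors).
    state = torch.load(ckpt, map_location='cpu')
    print('loaded', len(state), 'tensors from', ckpt)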

Our saliency maps can be downloaded from [baidu pan fetch code: frzb | Google drive].

Acknowledgement

We use some open-source code from Non-local_pytorch and denseASPP. Thanks to the authors.
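
For readers unfamiliar with the borrowed components, below is a minimal embedded-Gaussian non-local (self-attention) block in the spirit of Non-local_pytorch. It is a sketch for orientation only, not the selective self-mutual attention module from the paper:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NonLocalBlock(nn.Module):
        """Minimal non-local block (Wang et al., 2018); illustrative only."""
        def __init__(self, channels, reduction=2):
            super(NonLocalBlock, self).__init__()
            inter = channels // reduction
            self.theta = nn.Conv2d(channels, inter, kernel_size=1)  # query
            self.phi = nn.Conv2d(channels, inter, kernel_size=1)    # key
            self.g = nn.Conv2d(channels, inter, kernel_size=1)      # value
            self.out = nn.Conv2d(inter, channels, kernel_size=1)

        def forward(self, x):
            b, c, h, w = x.size()
            q = self.theta(x).view(b, -1, h * w).permute(0, 2, 1)  # B x HW x C'
            k = self.phi(x).view(b, -1, h * w)                     # B x C' x HW
            v = self.g(x).view(b, -1, h * w).permute(0, 2, 1)      # B x HW x C'
            attn = F.softmax(torch.bmm(q, k), dim=-1)              # B x HW x HW
            y = torch.bmm(attn, v).permute(0, 2, 1).contiguous()
            y = self.out(y.view(b, -1, h, w))
            return x + y  # residual connection, as in the non-local paper

    # Usage: NonLocalBlock(64)(torch.randn(1, 64, 32, 32)) keeps the input shape.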

Citing our work

If you find our work helpful, please cite:


@inproceedings{liu2020S2MA, 
  title={Learning Selective Self-Mutual Attention for RGB-D Saliency Detection}, 
  author={Liu, Nian and Zhang, Ni and Han, Junwei},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={13756--13765},
  year={2020}
}