Dual Attention Network for Scene Segmentation (CVPR2019)

Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu

Introduction

We propose a Dual Attention Network (DANet) to adaptively integrate local features with their global dependencies based on the self-attention mechanism. DANet achieves new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context, and COCO Stuff-10k.
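
For readers who want the gist of the two attention branches, here is a minimal PyTorch sketch of the position attention module (PAM) and channel attention module (CAM) as described in the paper. It is a simplified illustration, not the repository's exact implementation; layer names, batch normalization, and the fusion/segmentation head are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionAttention(nn.Module):
    """Spatial self-attention: every position aggregates features from all
    other positions, weighted by pairwise feature similarity."""

    def __init__(self, channels):
        super().__init__()
        reduced = channels // 8
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.size()
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                      # B x C' x HW
        attn = F.softmax(torch.bmm(q, k), dim=-1)               # B x HW x HW
        v = self.value(x).view(b, -1, h * w)                    # B x C  x HW
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Channel self-attention: channel maps are reweighted by inter-channel
    similarity, modelling dependencies between class-level semantics."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.size()
        q = x.view(b, c, -1)                   # B x C x HW
        k = x.view(b, c, -1).permute(0, 2, 1)  # B x HW x C
        energy = torch.bmm(q, k)               # B x C x C
        # Subtracting from the row-wise max before softmax (as in the paper's
        # released code) keeps the attention numerically stable.
        energy = energy.max(dim=-1, keepdim=True)[0].expand_as(energy) - energy
        attn = F.softmax(energy, dim=-1)
        v = x.view(b, c, -1)
        out = torch.bmm(attn, v).view(b, c, h, w)
        return self.gamma * out + x
```

In DANet the outputs of the two branches are further transformed by convolution layers and summed to produce the features fed to the segmentation head.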


Cityscapes test set results

We train our DANet-101 with only fine-annotated data and submit our test results to the official evaluation server.


Updates

2020/9: Renewed the code, which now supports PyTorch 1.4.0 or later!

2020/8: The new TNNLS version, DRANet, achieves 82.9% on the Cityscapes test set (result submitted in August 2019), a new state-of-the-art performance using only the fine-annotated dataset and ResNet-101. The code will be released in DRANet.

2020/7: DANet is supported in MMSegmentation, where it achieves 80.47% with single-scale testing and 82.02% with multi-scale testing on the Cityscapes val set.

2018/9: DANet released. The trained model with ResNet-101 achieves 81.5% on the Cityscapes test set.

Usage

  1. Install PyTorch

    • The code is tested with Python 3.6 and PyTorch 1.4.0.
    • The code is modified from PyTorch-Encoding.
  2. Clone the repository

    git clone https://github.com/junfu1115/DANet.git 
    cd DANet 
    python setup.py install
  3. Dataset

    • Download the Cityscapes dataset and convert the annotations to the 19 training categories (a conversion sketch is given after this list).
    • Please put the dataset in the folder ./datasets
  4. Evaluation for DANet

    • Download the trained model DANet101 and put it in the folder ./experiments/segmentation/models/

    • cd ./experiments/segmentation/

    • For single-scale testing, please run:

    • CUDA_VISIBLE_DEVICES=0,1,2,3 python test.py --dataset citys --model danet --backbone resnet101 --resume  models/DANet101.pth.tar --eval --base-size 2048 --crop-size 768 --workers 1 --multi-grid --multi-dilation 4 8 16 --os 8 --aux --no-deepstem
    • Evaluation Result

      The expected scores are as follows: DANet101 on the Cityscapes val set (mIoU/pAcc): 79.93/95.97 (ss). (See the metric sketch after this list.)

  5. Evaluation for DRANet

    • Download the trained model DRANet101 and put it in the folder ./experiments/segmentation/models/

    • The evaluation code is in the folder ./experiments/segmentation/

    • cd ./experiments/segmentation/

    • For single-scale testing, please run:

    • CUDA_VISIBLE_DEVICES=0,1,2,3 python test.py --dataset citys --model dran --backbone resnet101 --resume  models/dran101.pth.tar --eval --base-size 2048 --crop-size 768 --workers 1 --multi-grid --multi-dilation 4 8 16 --os 8 --aux
    • Evaluation Result

      The expected scores are as follows: DRANet101 on the Cityscapes val set (mIoU/pAcc): 81.63/96.62 (ss).
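
For step 3, the "convert to 19 categories" part uses the standard Cityscapes labelId-to-trainId mapping. The script below is only a hypothetical helper illustrating that mapping; the file layout under ./datasets/cityscapes and the output naming are assumptions, not the repository's own conversion tool.

```python
# Hypothetical helper: map Cityscapes *_labelIds.png annotations to the
# 19 training classes (everything else becomes the ignore label 255).
from pathlib import Path

import numpy as np
from PIL import Image

ID_TO_TRAINID = {
    7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7, 21: 8, 22: 9,
    23: 10, 24: 11, 25: 12, 26: 13, 27: 14, 28: 15, 31: 16, 32: 17, 33: 18,
}


def convert(label_path: Path, out_path: Path) -> None:
    label = np.array(Image.open(label_path), dtype=np.uint8)
    train = np.full_like(label, 255)  # ignore label
    for label_id, train_id in ID_TO_TRAINID.items():
        train[label == label_id] = train_id
    Image.fromarray(train).save(out_path)


if __name__ == "__main__":
    root = Path("./datasets/cityscapes/gtFine")  # assumed layout
    for p in root.rglob("*_labelIds.png"):
        convert(p, p.with_name(p.name.replace("labelIds", "labelTrainIds")))
```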
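
The mIoU/pAcc numbers quoted in steps 4 and 5 follow the usual confusion-matrix definitions (mean intersection-over-union across the 19 classes, and overall pixel accuracy). The snippet below is a generic sketch of that computation, not the repository's evaluation code.

```python
import numpy as np


def confusion_matrix(pred, target, num_classes=19, ignore_index=255):
    """Accumulate a num_classes x num_classes confusion matrix
    (rows: ground truth, columns: prediction)."""
    mask = target != ignore_index
    pred, target = pred[mask].astype(int), target[mask].astype(int)
    return np.bincount(
        num_classes * target + pred, minlength=num_classes ** 2
    ).reshape(num_classes, num_classes)


def scores(conf):
    """Return (mIoU, pixel accuracy) from an accumulated confusion matrix."""
    tp = np.diag(conf)
    iou = tp / (conf.sum(axis=0) + conf.sum(axis=1) - tp + 1e-10)
    return iou.mean(), tp.sum() / (conf.sum() + 1e-10)
```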

Citation

If you find DANet and DRANet useful in your research, please consider citing:

@article{fu2020scene,
  title={Scene Segmentation With Dual Relation-Aware Attention Network},
  author={Fu, Jun and Liu, Jing and Jiang, Jie and Li, Yong and Bao, Yongjun and Lu, Hanqing},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2020},
  publisher={IEEE}
}
@inproceedings{fu2019dual,
  title={Dual attention network for scene segmentation},
  author={Fu, Jun and Liu, Jing and Tian, Haijie and Li, Yong and Bao, Yongjun and Fang, Zhiwei and Lu, Hanqing},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={3146--3154},
  year={2019}
}

Acknowledgement

Thanks to PyTorch-Encoding, especially for the Synchronized BN!