Multi-class Token Transformer for Weakly Supervised Semantic Segmentation
Fig. 1: Overview of MCTformer.
2023-08-08: MCTformer+ is now on arXiv.
Ubuntu 18.04, with Python 3.6 and the following Python dependencies:
pip install -r requirements.txt
Download the PASCAL VOC 2012 development kit:
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
tar -xvf VOCtrainval_11-May-2012.tar
Download the augmented annotations SegmentationClassAug.zip from the SBD dataset via this link.
Organize your data directory as shown below:
VOCdevkit/
└── VOC2012
    ├── Annotations
    ├── ImageSets
    ├── JPEGImages
    ├── SegmentationClass
    ├── SegmentationClassAug
    └── SegmentationObject
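A quick way to confirm the layout is in place (the folder names below are taken from the tree above):

```python
from pathlib import Path

# Check that every folder from the layout above exists.
root = Path("VOCdevkit/VOC2012")
expected = ["Annotations", "ImageSets", "JPEGImages",
            "SegmentationClass", "SegmentationClassAug", "SegmentationObject"]
missing = [name for name in expected if not (root / name).is_dir()]
print("missing folders:", missing if missing else "none")
```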
For MCTformer+, run the run_mct_plus.sh script:
bash run_mct_plus.sh
Step 1: Run the run.sh script to train MCTformer and to visualize and evaluate the generated class-specific localization maps.
bash run.sh
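run.sh already handles visualization; if you want to inspect a single generated map by hand, here is a minimal sketch. The file paths and the saved-map format (a 2-D float array for one class) are assumptions, not the repository's documented output format:

```python
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# Hypothetical paths: one VOC image and its saved class-specific map.
img = Image.open("VOCdevkit/VOC2012/JPEGImages/2007_000032.jpg").convert("RGB")
cam = np.load("cam_output/2007_000032.npy")  # assumed (h, w) float map

# Normalize and upsample the map to the image resolution.
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
cam = np.asarray(Image.fromarray((cam * 255).astype(np.uint8)).resize(img.size)) / 255.0

plt.imshow(img)
plt.imshow(cam, cmap="jet", alpha=0.5)  # heatmap overlay
plt.axis("off")
plt.savefig("cam_overlay.png", bbox_inches="tight")
```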
| Model        | Backbone   | Google Drive |
|--------------|------------|--------------|
| MCTformer-V1 | DeiT-small | Weights      |
| MCTformer-V2 | DeiT-small | Weights      |
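Once downloaded, a checkpoint can be inspected before plugging it into the scripts. The filename is a placeholder, and treating the file as a standard PyTorch state dict (possibly nested under a "model" key) is an assumption:

```python
import torch

# "MCTformer_v2.pth" is a placeholder for the downloaded checkpoint.
ckpt = torch.load("MCTformer_v2.pth", map_location="cpu")
state = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
print(len(state), "entries; first keys:", list(state)[:5])
```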
Step 2: Run the run_psa.sh script to post-process the seeds (i.e., the class-specific localization maps) with PSA and generate pseudo ground-truth segmentation masks. PSA training is initialized with the pre-trained classification weights.
bash run_psa.sh
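Before moving on to Step 3, it can help to measure pseudo-mask quality directly. The sketch below computes mIoU from a confusion matrix; the pseudo_masks directory is a placeholder for wherever run_psa.sh writes its output, and it assumes the masks are single-channel PNGs with VOC's 21 classes and 255 as the ignore label:

```python
import numpy as np
from PIL import Image
from pathlib import Path

NUM_CLASSES = 21  # PASCAL VOC: background + 20 object classes
conf = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)

# "pseudo_masks" is a placeholder for the run_psa.sh output directory.
pred_dir = Path("pseudo_masks")
gt_dir = Path("VOCdevkit/VOC2012/SegmentationClassAug")

for pred_path in pred_dir.glob("*.png"):
    pred = np.asarray(Image.open(pred_path)).astype(np.int64)
    gt = np.asarray(Image.open(gt_dir / pred_path.name)).astype(np.int64)
    valid = gt != 255  # 255 marks ignored pixels in VOC annotations
    conf += np.bincount(gt[valid] * NUM_CLASSES + pred[valid],
                        minlength=NUM_CLASSES ** 2).reshape(NUM_CLASSES, NUM_CLASSES)

iou = np.diag(conf) / (conf.sum(0) + conf.sum(1) - np.diag(conf) + 1e-10)
print(f"mIoU: {iou.mean():.4f}")
```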
Step 3: Run the run_seg.sh script to train and test the segmentation model. When training on VOC, the model is initialized with the classification weights pre-trained on VOC.
bash run_seg.sh
Run the run_coco.sh script to train MCTformer on MS COCO and generate class-specific localization maps. The class-label NumPy file can be downloaded here. The trained MCTformer-V2 model is available here.
bash run_coco.sh
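The format of the class-label file isn't specified here; a quick way to inspect it once downloaded (the filename cls_labels.npy is a placeholder):

```python
import numpy as np

# "cls_labels.npy" is a placeholder name for the downloaded file.
data = np.load("cls_labels.npy", allow_pickle=True)
print(type(data), getattr(data, "shape", None), getattr(data, "dtype", None))

# A pickled dict (e.g., image id -> multi-hot label vector) loads as a
# 0-d object array; unwrap it and peek at one entry.
if isinstance(data, np.ndarray) and data.dtype == object and data.ndim == 0:
    d = data.item()
    if isinstance(d, dict):
        key = next(iter(d))
        print(key, d[key])
```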
If you have any questions, please create an issue or contact me by email at lian.xu@uwa.edu.au.
Please consider citing our paper if the code is helpful in your research and development.
@inproceedings{xu2022multi,
title={Multi-class Token Transformer for Weakly Supervised Semantic Segmentation},
author={Xu, Lian and Ouyang, Wanli and Bennamoun, Mohammed and Boussaid, Farid and Xu, Dan},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={4310--4319},
year={2022}
}