AttentionShift

Official implementation of "AttentionShift: Iteratively Estimated Part-based Attention Map for Pointly Supervised Instance Segmentation".

Performance on Pascal VOC and MS-COCO:

[Result tables for Pascal VOC and MS-COCO are provided as images in the repository.]

MAE pretrained weights

Download the MAE-pretrained ViT-Base checkpoint:

wget https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_base.pth

Installation

Run the provided install.sh script to set up the environment.
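
For example, from the repository root (assuming a bash shell):

bash install.sh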

Training

To train AttentionShift, run run_train.py.
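
A minimal launch sketch (run_train.py's command-line options are not documented here; check the script before running):

python run_train.py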

The annotations can be obtained here.

Note: use_checkpoint enables activation checkpointing to save GPU memory. Please refer to this page for more details.
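
A sketch of where this option typically lives in an mmdet-style config (the exact key nesting is an assumption; check the provided configuration files):

# enable activation checkpointing in the backbone to trade compute for GPU memory
model = dict(
    backbone=dict(
        use_checkpoint=True,
    ),
)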

Apex

We use apex for mixed precision training by default. To install apex, run:

git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
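
As an optional sanity check that apex is importable (apex.amp is apex's mixed-precision interface):

python -c "from apex import amp; print('apex OK')"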

If you would like to disable apex, change the runner type to EpochBasedRunner and comment out the following code block in the configuration files:

# do not use mmdet's built-in fp16; apex handles mixed precision instead
fp16 = None
optimizer_config = dict(
    type="DistOptimizerHook",  # apex-aware distributed optimizer hook
    update_interval=1,         # number of iterations between optimizer steps
    grad_clip=None,            # no gradient clipping
    coalesce=True,             # coalesce gradient buckets for all-reduce
    bucket_size_mb=-1,         # use the default bucket size
    use_fp16=True,             # enable apex mixed precision
)
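
For reference, a configuration with apex disabled might look like the following sketch (the max_epochs value is a placeholder, and mmdet's native fp16 support is optional):

# apex disabled: fall back to mmdet's standard hooks (adapt values to your configs)
runner = dict(type="EpochBasedRunner", max_epochs=12)  # placeholder epoch count
optimizer_config = dict(grad_clip=None)
# optionally enable mmdet's native mixed precision instead of apex:
# fp16 = dict(loss_scale=512.0)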