Learning Equivariant Segmentation with Instance-Unique Querying (NeurIPS 2022 Spotlight)

Python 3.7

Figure: Overview of our new training framework for query-based instance segmentation.

This is the official repository for Learning Equivariant Segmentation with Instance-Unique Querying. The full implementation will be made available in mmdetection for ease of use; stay tuned!

Abstract

Prevalent state-of-the-art instance segmentation methods follow a query-based scheme, in which instance masks are derived by querying the image feature using a set of instance-aware embeddings. In this work, we devise a new training framework that boosts query-based models through discriminative query embedding learning. It explores two essential properties, namely dataset-level uniqueness and transformation equivariance, of the relation between queries and instances. First, our algorithm uses the queries to retrieve the corresponding instances from the whole training dataset, instead of only searching within individual scenes. As querying instances across scenes is more challenging, the segmenters are forced to learn more discriminative queries for effective instance separation. Second, our algorithm encourages both image (instance) representations and queries to be equivariant against geometric transformations, leading to more robust instance-query matching.
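
For a concrete picture of these two signals, here is a minimal PyTorch sketch (not the code released in this repo): an InfoNCE-style loss in which each query must retrieve its own instance from all instances pooled across the batch, and a consistency term that keeps queries of a transformed view aligned with those of the original view. The function names, the batch-level instance pool, and the cosine-based equivariance term are illustrative assumptions rather than the paper's exact formulation.

# Minimal sketch of the two training signals described above (illustrative only).
import torch
import torch.nn.functional as F


def cross_scene_query_loss(queries, instance_feats, tau=0.1):
    """Dataset-level uniqueness (sketch): each query must retrieve its own
    instance from the pool of instances gathered across all images in the
    batch, not just from its own scene.

    queries:        (N, C) one embedding per ground-truth instance
    instance_feats: (N, C) matching instance representation, same order
    """
    q = F.normalize(queries, dim=1)
    k = F.normalize(instance_feats, dim=1)
    logits = q @ k.t() / tau                  # similarity to every instance in the pool
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)   # InfoNCE-style retrieval loss


def equivariance_loss(queries, queries_aug):
    """Transformation equivariance (sketch): queries predicted for a
    geometrically transformed view should agree with those of the original
    view (mask outputs would be compared after the same transform, omitted here).
    """
    return (1 - F.cosine_similarity(queries, queries_aug, dim=1)).mean()


if __name__ == "__main__":
    # toy check with random embeddings
    q = torch.randn(16, 256)
    k = q + 0.05 * torch.randn_like(q)
    print(cross_scene_query_loss(q, k).item(), equivariance_loss(q, k).item())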

Installation

This implementation is built on mmdetection and AdelaiDet. Many thanks to the authors for their efforts.

conda create --name <env> --file requirements.txt

Training

We use Slurm to train our models. Slurm is a job scheduling system commonly used on computing clusters.

On a cluster managed by Slurm, you can use slurm_train.sh to spawn training jobs. It supports both single-node and multi-node training.

The basic usage is as follows.

OMP_NUM_THREADS=1 [GPUS=${GPUS}] ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR}

When using Slurm, the port option needs to be set in one of the following ways:

  1. Set the port through --options. This is the recommended way, since it does not change the original configs.

    OMP_NUM_THREADS=1 CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR} --options 'dist_params.port=29500'
    OMP_NUM_THREADS=1 CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR} --options 'dist_params.port=29501'
  2. Modify the config files to set different communication ports.

    In config1.py, set

    dist_params = dict(backend='nccl', port=29500)

    In config2.py, set

    dist_params = dict(backend='nccl', port=29501)

    Then you can launch two jobs with config1.py and config2.py.

    OMP_NUM_THREADS=1 CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR}
    OMP_NUM_THREADS=1 CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR}

Citation

@inproceedings{wang2022learning,
  title={Learning Equivariant Segmentation with Instance-Unique Querying},
  author={Wang, Wenguan and Liang, James and Liu, Dongfang},
  booktitle={Advances in Neural Information Processing Systems},
  year={2022}
}