CoMFormer: Continual Learning in Semantic and Panoptic Segmentation

Fabio Cermelli, Matthieu Cord, Arthur Douillard

[ arXiv: https://arxiv.org/abs/2211.13999 ] [ BibTeX ]

Installation

See installation instructions.

Getting Started

Prepare the datasets

See Preparing Datasets for Mask2Former.

How to configure the methods (detectron2-style command-line overrides; an example command follows the list):

Per-Pixel baseline: MODEL.MASK_FORMER.PER_PIXEL True

Mask-based methods: MODEL.MASK_FORMER.SOFTMASK True MODEL.MASK_FORMER.FOCAL True

CoMFormer: CONT.DIST.PSEUDO True CONT.DIST.KD_WEIGHT 10.0 CONT.DIST.UKD True CONT.DIST.KD_REW True

MiB: CONT.DIST.KD_WEIGHT 200.0 CONT.DIST.UKD True CONT.DIST.UCE True

PLOP: CONT.DIST.PSEUDO True CONT.DIST.PSEUDO_TYPE 1 CONT.DIST.POD_WEIGHT 0.001
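
These options are detectron2-style command-line overrides, so selecting a method amounts to appending its flags to the training command. A minimal sketch for the Per-Pixel baseline is shown below; the script name, GPU count, and config path are assumptions for illustration, not the repository's exact invocation.

```bash
# Hypothetical invocation: train_net.py, the GPU count, and the config path
# are assumptions; only the override flag comes from the list above.
python train_net.py --num-gpus 4 \
  --config-file configs/ade20k/semantic-segmentation/maskformer2_R50_bs16_160k.yaml \
  MODEL.MASK_FORMER.PER_PIXEL True
```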

How to run experiments:

ADE Semantic Segmentation:
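
As a rough sketch, assuming the Mask2Former-style entry point and config layout (the script name, config path, and GPU count are assumptions, and combining the mask-based flags with the CoMFormer flags from the list above is an inference, not a documented command):

```bash
# Hypothetical ADE20K continual semantic segmentation run with CoMFormer.
# Task-split settings (base/new classes, step index) are configured through
# additional CONT.* keys not listed in this README and are omitted here.
python train_net.py --num-gpus 4 \
  --config-file configs/ade20k/semantic-segmentation/maskformer2_R50_bs16_160k.yaml \
  MODEL.MASK_FORMER.SOFTMASK True MODEL.MASK_FORMER.FOCAL True \
  CONT.DIST.PSEUDO True CONT.DIST.KD_WEIGHT 10.0 \
  CONT.DIST.UKD True CONT.DIST.KD_REW True
```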

ADE Panoptic Segmentation:
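
The panoptic setting would follow the same pattern with a panoptic base config; the path below is an assumption for illustration.

```bash
# Hypothetical ADE20K continual panoptic segmentation run with CoMFormer
# (same overrides as above, panoptic base config path is an assumption).
python train_net.py --num-gpus 4 \
  --config-file configs/ade20k/panoptic-segmentation/maskformer2_R50_bs16_160k.yaml \
  MODEL.MASK_FORMER.SOFTMASK True MODEL.MASK_FORMER.FOCAL True \
  CONT.DIST.PSEUDO True CONT.DIST.KD_WEIGHT 10.0 \
  CONT.DIST.UKD True CONT.DIST.KD_REW True
```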

Citing CoMFormer

If you use CoMFormer in your research, please use the following BibTeX entry.

@inproceedings{cermelli2023comformer,
  title={CoMFormer: Continual Learning in Semantic and Panoptic Segmentation},
  author={Fabio Cermelli and Matthieu Cord and Arthur Douillard},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}

Acknowledgement

The code is largely based on Mask2Former.