This is an official implementation of the paper "Saving 100x Storage: Prototype Replay for Reconstructing Training Sample Distribution in Class-Incremental Semantic Segmentation", accepted at NeurIPS 2023. [paper]
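As a rough illustration of the idea in the title (a minimal sketch, not the repository's actual code): rather than storing raw exemplar images for replay, per-class feature statistics (a prototype mean plus a variance) are stored, and pseudo-features of old classes are re-sampled from those statistics during later incremental steps. All function names below are hypothetical.

```python
import torch

def class_statistics(features, labels, num_classes):
    """Per-class feature mean/std from a batch of embeddings.

    features: (N, D) pixel embeddings, labels: (N,) class ids.
    Storing (mean, std) per class costs O(C * D) floats rather than
    whole images, which is where the ~100x storage saving comes from.
    Assumes every class id appears at least once in `labels`.
    """
    means, stds = [], []
    for c in range(num_classes):
        f = features[labels == c]
        means.append(f.mean(dim=0))
        stds.append(f.std(dim=0))
    return torch.stack(means), torch.stack(stds)

def replay_features(means, stds, n_per_class):
    """Draw pseudo-features for old classes from the stored Gaussians."""
    return torch.cat([m + s * torch.randn(n_per_class, m.numel())
                      for m, s in zip(means, stds)])
```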
This repository has been tested with the following environment:
```bash
conda create -n star python=3.8.13
conda activate star
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
conda install pandas==2.0.3
```
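After installing, a quick sanity check (a hypothetical snippet, not part of the repository) confirms the versions and that CUDA is visible:

```python
import pandas
import torch
import torchvision

# Expect 1.12.1 / 0.13.1 / 2.0.3 per the install commands above.
print(torch.__version__, torchvision.__version__, pandas.__version__)
print("CUDA available:", torch.cuda.is_available())  # needs the cudatoolkit 11.3 build
```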
We use the 10,582 augmented training samples and 1,449 validation samples of PASCAL VOC 2012. You can download the original dataset here. To train our model with the augmented samples, please download the labels of the augmented samples ('SegmentationClassAug') and the file-name list ('train_aug.txt'). The data path should be organized as follows:
```
└── ./datasets/PascalVOC2012
    ├── Annotations
    ├── ImageSets
    │   └── Segmentation
    │       ├── train_aug.txt
    │       └── val.txt
    ├── JPEGImages
    ├── SegmentationClass
    ├── SegmentationClassAug
    └── SegmentationObject
```
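Before training, a small check like the following (hypothetical, not part of the repository) can confirm the layout and the expected 10,582 augmented training samples:

```python
import os

root = "./datasets/PascalVOC2012"
for sub in ["JPEGImages", "SegmentationClassAug",
            "ImageSets/Segmentation/train_aug.txt",
            "ImageSets/Segmentation/val.txt"]:
    assert os.path.exists(os.path.join(root, sub)), f"missing: {sub}"

with open(os.path.join(root, "ImageSets/Segmentation/train_aug.txt")) as f:
    n_train = sum(1 for line in f if line.strip())
print(n_train)  # expect 10582 augmented training samples
```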
We use 20,210 training samples and 2,000 validation samples for ADE20K. You can download the dataset here. The data path should be organized as follows:
```
└── ./datasets/ADE20K
    ├── annotations
    ├── images
    ├── objectInfo150.txt
    └── sceneCategories.txt
```
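A similar check for ADE20K, assuming the standard scene-parsing split folders images/training and images/validation under the root above (an assumption; adjust to your local layout):

```python
import os

root = "./datasets/ADE20K"
# Assumed layout: images/training and images/validation under the root.
for split, expected in [("training", 20210), ("validation", 2000)]:
    d = os.path.join(root, "images", split)
    n = sum(1 for f in os.listdir(d) if f.endswith(".jpg"))
    print(f"{split}: {n} images (expected {expected})")
```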
To train our model on the PASCAL VOC 2012 dataset, follow these example commands:
```bash
GPU=0
BS=24
SAVEDIR='saved_voc'
TASKSETTING='disjoint'
TASKNAME='15-5'
INIT_LR=0.001
LR=0.0001
MEMORY_SIZE=0 # 50 for STAR-M
NAME='STAR'

# Step 0: base training
python train_voc.py -c configs/config_voc.json \
-d ${GPU} --multiprocessing_distributed --save_dir ${SAVEDIR} --name ${NAME} \
--task_name ${TASKNAME} --task_setting ${TASKSETTING} --task_step 0 --lr ${INIT_LR} --bs ${BS}

# Step 1: incremental training
python train_voc.py -c configs/config_voc.json \
-d ${GPU} --multiprocessing_distributed --save_dir ${SAVEDIR} --name ${NAME} \
--task_name ${TASKNAME} --task_setting ${TASKSETTING} --task_step 1 --lr ${LR} --bs ${BS} --freeze_bn --mem_size ${MEMORY_SIZE}
```
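For reference, the task name encodes how the 20 VOC foreground classes are split across steps: '15-5' learns 15 base classes in step 0 and 5 new classes in step 1. A rough sketch of this mapping (illustrative only; the repository's actual task parsing may differ):

```python
def task_class_ids(task_name="15-5", num_classes=20):
    """Map a task name like '15-5' to per-step class-id lists.

    Step 0 learns the first 15 classes; each later step adds 5 more,
    e.g. '15-1' yields 6 steps and '10-1' yields 11 steps.
    """
    base, inc = map(int, task_name.split("-"))
    steps = [list(range(1, base + 1))]  # class 0 is background
    start = base + 1
    while start <= num_classes:
        steps.append(list(range(start, min(start + inc, num_classes + 1))))
        start += inc
    return steps

print(task_class_ids("15-5"))  # [[1..15], [16..20]] -> 2 steps
```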
To train our model on the ADE20K dataset, follow these example commands:
```bash
GPU=0,1
BS=12 # batch size per GPU; 24 in total across 2 GPUs
SAVEDIR='saved_ade'
TASKSETTING='overlap'
TASKNAME='50-50'
INIT_LR=0.0025
LR=0.00025
MEMORY_SIZE=0
NAME='STAR'

# Step 0: base training
python train_ade.py -c configs/config_ade.json \
-d ${GPU} --multiprocessing_distributed --save_dir ${SAVEDIR} --name ${NAME} \
--task_name ${TASKNAME} --task_setting ${TASKSETTING} --task_step 0 --lr ${INIT_LR} --bs ${BS}

# Step 1: first incremental step
python train_ade.py -c configs/config_ade.json \
-d ${GPU} --multiprocessing_distributed --save_dir ${SAVEDIR} --name ${NAME} \
--task_name ${TASKNAME} --task_setting ${TASKSETTING} --task_step 1 --lr ${LR} --bs ${BS} --freeze_bn --mem_size ${MEMORY_SIZE}

# Step 2: second incremental step
python train_ade.py -c configs/config_ade.json \
-d ${GPU} --multiprocessing_distributed --save_dir ${SAVEDIR} --name ${NAME} \
--task_name ${TASKNAME} --task_setting ${TASKSETTING} --task_step 2 --lr ${LR} --bs ${BS} --freeze_bn --mem_size ${MEMORY_SIZE}
```
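Under the same illustrative mapping, '50-50' on ADE20K's 150 classes yields three steps, which is why there are three commands (task_step 0, 1, 2) above:

```python
# Reusing the hypothetical task_class_ids sketch from the VOC section
# with ADE20K's 150 classes:
steps = task_class_ids("50-50", num_classes=150)
print(len(steps))                 # 3 steps -> task_step 0, 1, 2
print(steps[2][0], steps[2][-1])  # 101 150: classes added in the final step
```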
For ease of use, you can directly run the .sh files in the ./scripts/ directory, which provide complete commands for training on both datasets.
To evaluate on the PASCAL VOC 2012 dataset, execute the following command:
```bash
python eval_voc.py --device 0 --test --resume path/to/weight.pth
```
Or, download our pretrained weights and the corresponding config.json files provided below (reported scores are mIoU, %). Ensure that the config.json file is located in the same directory as the weight file.
| Method (Overlapped) | 19-1 (2 steps) | 15-5 (2 steps) | 15-1 (6 steps) | 10-1 (11 steps) | 5-3 (6 steps) |
|---|---|---|---|---|---|
| STAR | 76.61 | 74.86 | 72.90 | 64.86 | 64.54 |
| STAR-M | 77.02 | 75.80 | 74.03 | 66.60 | 65.65 |
| Method (Disjoint) | 19-1 (2 steps) | 15-5 (2 steps) | 15-1 (6 steps) |
|---|---|---|---|
| STAR | 76.38 | 73.48 | 70.77 |
| STAR-M | 76.73 | 73.79 | 71.18 |
To evaluate on the ADE20K dataset, execute the following command:
```bash
python eval_ade.py --device 0 --test --resume path/to/weight.pth
```
Or, download our pretrained weights and the corresponding config.json files provided below. Ensure that the config.json file is located in the same directory as the weight file.
| Method (Disjoint) | 100-50 (2 steps) | 100-10 (6 steps) | 50-50 (3 steps) |
|---|---|---|---|
| STAR | 36.39 | 34.91 | 34.44 |
```bibtex
@inproceedings{chen2023saving,
  title={Saving 100x Storage: Prototype Replay for Reconstructing Training Sample Distribution in Class-Incremental Semantic Segmentation},
  author={Chen, Jinpeng and Cong, Runmin and Luo, Yuxuan and Ip, Horace Ho Shing and Kwong, Sam},
  booktitle={NeurIPS},
  year={2023}
}
```