naderAsadi / Probing-Continual-Learning


Confusion about PRD in class-incremental learning scenarios #2

Open · Logictaosen007 opened 1 year ago

Logictaosen007 commented 1 year ago

Excuse me, could you please tell me how to reproduce the results in Figure 4 and Table 2 of the paper? Your research is very helpful to my own work.

Logictaosen007 commented 1 year ago

Mainly the class-incremental learning results of PRD.

Zi-Jian-Gao commented 1 year ago

Hi! I have the same confusion. Did you find a way to reproduce the results?

zhangziyi1670 commented 1 year ago

I want to reproduce the class-incremental learning result on Split CIFAR-100 with 20 tasks. I set --task_incremental to 0, but the final average accuracy is 12.85, which is far from the result in Table 1 (27.8). How can I reproduce the class-incremental learning result?
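For context, my understanding is that --task_incremental=0 evaluates predictions over all classes seen so far, while --task_incremental=1 restricts predictions to the current task's classes, which is why the class-incremental number is much lower. A rough sketch of that distinction (my own illustration, not this repo's evaluation code):

```python
import torch

def accuracy(logits, targets, task_classes=None):
    """Top-1 accuracy; optionally restrict predictions to one task's classes (task-IL)."""
    if task_classes is not None:
        # Task-incremental evaluation: mask out logits of classes outside the known task.
        mask = torch.full_like(logits, float("-inf"))
        mask[:, task_classes] = 0.0
        logits = logits + mask
    preds = logits.argmax(dim=1)
    return (preds == targets).float().mean().item()

# Class-incremental (--task_incremental=0): predict over all classes seen so far.
# acc_cil = accuracy(logits, targets)
# Task-incremental (--task_incremental=1): predict only among that task's classes, e.g. 0-4.
# acc_til = accuracy(logits, targets, task_classes=list(range(5)))
```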

naderAsadi commented 1 year ago

Hi, thanks for pointing out this issue. There were some problems in the code, probably due to quick ablation experiments near the submission deadline and rebuttal. I have debugged parts of the code; please rerun using the following command. Note that there is still a ~1-2% drop in the CIL results of all of the baselines, but the relative performance of the methods is as reported.

python main.py --method=repe --dataset=cifar100 --data_root="path/to/data" \
  --model=resnet18 --nf=64 --use_augs=1 \
  --projection_size=128 --projection_hidden_size=512 \
  --task_incremental=0 --keep_training_data=0 --multilinear_eval=0 --singlelinear_eval=1 \
  --n_tasks=20 --n_epochs=100 --n_warmup_epochs=120 --batch_size=128 --lr=0.003 \
  --supcon_temperature=0.1 --distill_coef=4.0 --prototypes_coef=2.0 --prototypes_lr=0.005 --distill_temp=0.1
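For clarity when comparing numbers: the "final avg acc" discussed above is, in the usual continual-learning convention, the mean of the per-task test accuracies measured after training on the last task. A minimal sketch of that metric (illustrative values only, not this repo's exact logging code):

```python
def final_average_accuracy(per_task_acc):
    """Mean test accuracy over all tasks, measured after training on the final task."""
    return sum(per_task_acc) / len(per_task_acc)

# Made-up per-task accuracies for a 20-task Split CIFAR-100 run, for illustration only.
per_task_acc = [0.35, 0.33, 0.31, 0.30, 0.29, 0.28, 0.28, 0.27, 0.27, 0.26,
                0.26, 0.25, 0.25, 0.24, 0.24, 0.24, 0.23, 0.23, 0.23, 0.22]
print(f"final avg acc: {final_average_accuracy(per_task_acc):.4f}")
```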