yulu0724 / SDC-IL

Semantic Drift Compensation for Class-Incremental Learning (CVPR2020)

Result only with the Pretrain model #7

Open sega-hsj opened 4 years ago

sega-hsj commented 4 years ago

I deleted the metric learning part of the code and ran the CIFAR100 experiment, so the model at test time is just the pretrained model. It seems I get results as good as when running the code with metric learning + SDC. Is there anything wrong?

Weighted Accuracy of Model 0 is 0.722
Weighted Accuracy of Model 1 is 0.680
Weighted Accuracy of Model 2 is 0.636
Weighted Accuracy of Model 3 is 0.605
Weighted Accuracy of Model 4 is 0.580
Weighted Accuracy of Model 5 is 0.559
Weighted Accuracy of Model 6 is 0.536
Weighted Accuracy of Model 7 is 0.519
Weighted Accuracy of Model 8 is 0.495
Weighted Accuracy of Model 9 is 0.478
Weighted Accuracy of Model 10 is 0.466
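
For reference, evaluating with only the pretrained model amounts to nearest-class-mean classification on a frozen embedding. This is a rough sketch of that evaluation, not the repository's actual code; `model`, the loaders, and the device string are placeholder names:

```python
import torch

@torch.no_grad()
def ncm_accuracy(model, train_loader, test_loader, device="cuda"):
    """Nearest-class-mean evaluation on a frozen embedding (sketch).
    `model` maps images to feature vectors; loaders yield (images, labels)."""
    model.eval()

    # 1) class prototypes = mean L2-normalized embedding of each class seen so far
    feats, labels = [], []
    for x, y in train_loader:
        feats.append(torch.nn.functional.normalize(model(x.to(device)), dim=1).cpu())
        labels.append(y)
    feats, labels = torch.cat(feats), torch.cat(labels)
    classes = labels.unique()
    protos = torch.stack([feats[labels == c].mean(0) for c in classes])

    # 2) classify test samples by the nearest prototype (Euclidean distance)
    correct = total = 0
    for x, y in test_loader:
        f = torch.nn.functional.normalize(model(x.to(device)), dim=1).cpu()
        pred = classes[torch.cdist(f, protos).argmin(1)]
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```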

ChunyunShen commented 4 years ago

I also have this question. I found the pretrained model can get results similar to (slightly worse than) SDC on CUB200.

wusuoweima commented 3 years ago

I got poor results on the CIFAR100 experiment with run_demo.sh. What are your settings for metric learning + SDC on CIFAR100 and the other datasets? @sega-hsj @ChunyunShen

yulu0724 commented 3 years ago

In Table 1 we show results for 'E-pre'; it is indeed sometimes better than fine-tuning with metric learning. Applying other methods (LwF, MAS, EWC) together with our SDC can outperform the pre-trained model results. Classification with metric learning is in general harder than with softmax-based networks (as reported in some papers).
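
For readers unfamiliar with the SDC step being discussed, here is a rough sketch of the prototype-drift compensation idea (estimate how current-task embeddings move between the old and the updated model, and shift stored old-class prototypes accordingly). This is not the repository's exact code; `sigma` is a hypothetical kernel width and the array names are placeholders:

```python
import numpy as np

def compensate_prototypes(protos_old, feats_before, feats_after, sigma=0.3):
    """Semantic Drift Compensation (sketch).
    protos_old:   (C, D) old-class prototypes computed with the previous model
    feats_before: (N, D) current-task features from the previous model
    feats_after:  (N, D) current-task features from the updated model"""
    drift = feats_after - feats_before                # per-sample embedding drift
    protos_new = np.empty_like(protos_old)
    for c, p in enumerate(protos_old):
        # weight each sample's drift by its closeness to the prototype (Gaussian kernel)
        d2 = np.sum((feats_before - p) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        protos_new[c] = p + (w[:, None] * drift).sum(0) / (w.sum() + 1e-8)
    return protos_new
```

The compensated prototypes are then used for nearest-class-mean classification with the updated embedding, instead of the stale prototypes computed by the previous model.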