Closed ykarmesh closed 4 years ago
I re-ran the iCaRL code of Rebuffi et al. and copied the randomly generated class order.
It's not really interesting to cherry-pick the best one; furthermore, all algorithms should be evaluated with the same class order, so which one is used doesn't matter as long as it is the same for everyone.
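To illustrate the point above, here is a minimal, hypothetical sketch of fixing one random class order and reusing it across all compared methods; the seed value and helper name are illustrative, not taken from the PODNet codebase:

```python
import random

# Hypothetical helper: generate one class order from a fixed seed.
# Any seed works -- what matters is that every method uses the same order.
def make_class_order(num_classes: int, seed: int = 1993) -> list:
    order = list(range(num_classes))
    random.Random(seed).shuffle(order)
    return order

order = make_class_order(100)

# Each compared method (names illustrative) is trained on the exact same
# ordering, so differences in results come from the algorithms alone.
for method in ("icarl", "ucir", "podnet"):
    assert make_class_order(100) == order  # deterministic: same seed, same order
```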
Yeah, that makes sense. Thanks a lot!
Hi @arthurdouillard, I have a similar confusion about ImageNet100: how do you determine which 100 categories to choose? I noticed that the implementation contains two ImageNet100 train txts, train_100.txt and train_100_ucir.txt (corresponding to the ImageNet100 and ImageNet100UCIR classes). Does PODNet use train_100.txt rather than train_100_ucir.txt? Moreover, is train_100.txt also from the iCaRL code? Thanks in advance.
Yeah, the ImageNet subset from UCIR (train_100_ucir.txt) and the one I used (train_100.txt) are different. I took mine from some random GitHub project, simply because I didn't find UCIR's subset at first.
PODNet and all other compared models in the paper (UCIR included) are trained on train_100.txt, so the comparison is fair.
You can compare the UCIR results in my paper and in the original paper to see the difference between using train_100.txt and train_100_ucir.txt. It's not large.
I see that you have defined a specific class order in the iCIFAR100 dataset class. Was it something that gave the best result and you found that out empirically, or is it due to some other reason?