arthurdouillard / incremental_learning.pytorch

A collection of incremental learning paper implementations including PODNet (ECCV20) and Ghost (CVPR-W21).
MIT License

PODNet - NME accuracy for ImageNet #53

Closed ashok-arjun closed 2 years ago

ashok-arjun commented 2 years ago

Hi Arthur @arthurdouillard ,

I see that your paper PODNet reports both PODNet-CNN and PODNet-NME accuracies on CIFAR-100.

But for ImageNet, it reports only the PODNet-CNN accuracy. Could you please point out why? If they are available, could you please provide the PODNet-NME numbers?

Thank you!

arthurdouillard commented 2 years ago

Hey,

I don't have all the results, but a PODNet-NME with POD-channels (see table 3b) gets the following average incremental accuracy on ImageNet100 with 50/25/10/5 steps: 53.89 / 63.33 / 70.30 / 73.13.

So the results with POD-spatial should be higher, but I don't think I ever ran them. Nor on ImageNet1000.

Overall, you should expect better performance than UCIR (both its NME and CNN variants), but worse than PODNet-CNN.

I've found that on large-scale datasets, learning the classifier end-to-end works better than relying on a nearest-neighbor-style classification in the embedding space.
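For reference, the NME (nearest-mean-of-exemplars) classification being discussed can be sketched as follows. This is a minimal illustration, not the repo's actual code: the function names are made up, and the L2-normalization of features and means (so that nearest-mean reduces to cosine similarity) is an assumption in line with how UCIR-style methods typically operate.

```python
# Hypothetical sketch of NME classification: each class is represented
# by the normalized mean of its exemplars' embeddings, and a query is
# assigned to the class whose mean is closest (highest cosine similarity).
import numpy as np

def class_means(features, labels):
    """features: (N, D) exemplar embeddings; labels: (N,) class ids.
    Returns (C, D) normalized class means and the (C,) class ids."""
    means = {}
    for c in np.unique(labels):
        m = features[labels == c].mean(axis=0)
        means[c] = m / np.linalg.norm(m)  # normalization is an assumption
    classes = sorted(means)
    return np.stack([means[c] for c in classes]), np.array(classes)

def nme_predict(queries, means, classes):
    """queries: (M, D) embeddings to classify by nearest class mean."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    sims = q @ means.T  # cosine similarity to each class mean
    return classes[np.argmax(sims, axis=1)]
```

By contrast, the PODNet-CNN variant keeps the learned classification head trained end-to-end with the backbone, which is what the comment above reports working better at large scale.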

Sorry I can't give more results. Does that answer your question?

ashok-arjun commented 2 years ago

Yes, that answers my question. Thank you very much @arthurdouillard