arthurdouillard / incremental_learning.pytorch

A collection of incremental learning paper implementations including PODNet (ECCV20) and Ghost (CVPR-W21).
MIT License
390 stars 59 forks

fixed memory in PODNet NME config #38

Closed · eddielyc closed 3 years ago

eddielyc commented 3 years ago

Hi, I came across a problem when trying to reproduce PODNet NME with PyTorch 1.7, so I launched another run with PyTorch 1.2 as you suggested. The command is here. With it I successfully got similar results: 60.41 +/- 0.86. However, I found this in the log file:

2021-07-12:11:02:42 [podnet.py]: Now 40 examplars per class.

which was printed before the first incremental task, and I believe the ablation studies followed the fixed-memory protocol. In the command above, the fixed-memory option is missing, and self._fixed_memory = args.get("fixed_memory", True) might fail to work as expected: if fixed_memory is omitted from the command line, the parser automatically sets fixed_memory=False, so the True fallback in args.get never applies. After manually setting fixed_memory=True, PODNet NME only got 56.91 +/- 0.6. I wonder if there is something wrong with my config, or whether something else is needed to improve the results.
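For illustration, here is a minimal sketch of the default-value trap described above (the flag name mirrors the repo's option, but the parser setup is hypothetical, not the repo's exact code): a store_true flag is always present in the parsed arguments, so the fallback passed to dict.get() is never used when the flag is simply omitted.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--fixed-memory", action="store_true")  # defaults to False

# Flag omitted on the command line, as in the reproduction command above.
args = vars(parser.parse_args([]))

# The key "fixed_memory" already exists (with value False), so the
# fallback True passed to .get() is silently ignored.
fixed_memory = args.get("fixed_memory", True)
print(fixed_memory)  # False, not the intended True
```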

Thanks :)

arthurdouillard commented 3 years ago

Hey,

You're right about the error in the options file. Thanks for pointing it out. With this fix, I also get around 57%.

I think I made a mistake in the NME hyperparameters. I'm sure we should get results above 60% with NME in the fixed-memory setting: I compared fixed memory vs. free memory at the time of writing the paper (see Table 5a), so this error wasn't present back then.

I'm going to look into it and come back with an answer. Sorry for the inconvenience :)

arthurdouillard commented 3 years ago

Ok, I found my mistake: PODNet NME doesn't need finetuning, because it doesn't use the classifier for inference.
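For context, here is a minimal sketch of what NME (nearest-mean-of-exemplars) inference looks like, with illustrative names rather than the repo's actual API: predictions come from distances to per-class exemplar means in feature space, so the linear classifier head (and therefore any finetuning of it) never enters the picture.

```python
import torch
import torch.nn.functional as F

def compute_class_means(extractor, exemplar_loaders):
    # One mean embedding per class, computed from that class's exemplars.
    # exemplar_loaders: one DataLoader of (x, y) batches per class (hypothetical).
    means = []
    for loader in exemplar_loaders:
        feats = torch.cat([extractor(x) for x, _ in loader])
        means.append(F.normalize(feats, dim=1).mean(dim=0))
    return torch.stack(means)  # (n_classes, dim)

def nme_predict(features, class_means):
    # features: (batch, dim) embeddings from the feature extractor.
    features = F.normalize(features, dim=1)
    class_means = F.normalize(class_means, dim=1)
    # Predict the class whose exemplar mean is closest in feature space;
    # no classifier weights are involved at any point.
    dists = torch.cdist(features, class_means)  # (batch, n_classes)
    return dists.argmin(dim=1)
```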

See my new options file here (https://github.com/arthurdouillard/incremental_learning.pytorch/commit/cb6efdde32cd1cdd4b6cbebf97d4818b9c95097d). With that, I got 61.5% over three runs with NME and fixed memory.

Thanks again for pointing this out! I should be more careful when cleaning my code for public release...