arthurdouillard / incremental_learning.pytorch

A collection of incremental learning paper implementations including PODNet (ECCV20) and Ghost (CVPR-W21).
MIT License

About UCIR hyperparameter #40

Closed: eddielyc closed this issue 3 years ago

eddielyc commented 3 years ago

Thanks again for fixing PODNet NME; the new config file produced impressive results 🎉.

But when I tried to reproduce UCIR, I found another problem. It seems that `self._lambda = args.get("base_lambda", 5)`, `self._nb_negatives = args.get("nb_negatives", 2)`, and `self._margin = args.get("ranking_margin", 0.2)` (model.ucir.py, line 66) read the hyperparameters from `args`, while they should come from `self._use_less_forget` and `self._use_ranking`, which are both dict instances.
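To make the point concrete, here is roughly what I have in mind (just a sketch; the nested "less_forget" / "ranking_loss" key names are my guesses for illustration, not necessarily the repo's actual config keys):

```python
# Sketch only: read the UCIR hyperparameters from the nested config dicts
# instead of the top-level args. Key names are assumptions for illustration.
def get_ucir_hparams(args):
    less_forget = args.get("less_forget") or {}   # dict coming from the YAML config
    ranking = args.get("ranking_loss") or {}      # dict coming from the YAML config
    return {
        "base_lambda": less_forget.get("base_lambda", 5),
        "nb_negatives": ranking.get("nb_negatives", 2),
        "ranking_margin": ranking.get("ranking_margin", 0.2),
    }

# With a nested config, the configured values are picked up instead of the defaults:
args = {"less_forget": {"base_lambda": 10.0}, "ranking_loss": {"nb_negatives": 3}}
print(get_ucir_hparams(args))
# -> {'base_lambda': 10.0, 'nb_negatives': 3, 'ranking_margin': 0.2}
```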

And based on the suggestion here, I wonder: since UCIR uses NME for eval by default, should I discard the fine-tuning phase as PODNet NME did?
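For context, by NME eval I mean the usual nearest-mean-of-exemplars classification (as in iCaRL), which never touches the linear classifier that fine-tuning adjusts. Roughly something like this (a generic sketch for discussion, not this repo's actual code):

```python
import torch

def nme_classify(features, class_means):
    """Generic nearest-mean-of-exemplars (NME) classification, as in iCaRL.

    features:    (batch, dim) features of the test samples
    class_means: (n_classes, dim) mean exemplar feature per class
    """
    features = torch.nn.functional.normalize(features, dim=1)
    class_means = torch.nn.functional.normalize(class_means, dim=1)
    # Predict the class whose exemplar mean is closest in feature space.
    distances = torch.cdist(features, class_means)  # (batch, n_classes)
    return distances.argmin(dim=1)
```

Since the classifier weights are not used at all here, I am unsure whether the fine-tuning phase changes NME results, hence the question.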

I really appreciate your PODNet and other reproductions; this repo helps me a lot. A huge thank you :)

arthurdouillard commented 3 years ago

Happy that you managed to reproduce PODNet!

> But when I tried to reproduce UCIR, I found another problem. It seems that `self._lambda = args.get("base_lambda", 5)`, `self._nb_negatives = args.get("nb_negatives", 2)`, and `self._margin = args.get("ranking_margin", 0.2)` (model.ucir.py, line 66) read the hyperparameters from `args`, while they should come from `self._use_less_forget` and `self._use_ranking`, which are both dict instances.

Indeed, there is again an error in how I'm handling the hyperparameters. Good catch! It has been fixed.

> And based on the suggestion here, I wonder: since UCIR uses NME for eval by default, should I discard the fine-tuning phase as PODNet NME did?

This is a good remark and I don't know the answer... The fine-tuning phase was actually not mentioned in the UCIR paper, only hidden somewhere in the code. I cannot find it anymore, but maybe you can spot it if you look at the original codebase?