zqiao11 opened this issue 1 year ago
Hi @zqiao11
Thanks for your interest in our work!
For your first question: as I mentioned in the comments in the code, the current GitHub version is simplified to speed up training and evaluation. For a detailed implementation of the bilevel optimization framework, you may refer to this project.
For your second question: yes, there are some unused variables in the code. We will clean them up in the future.
If you have any further questions, please do not hesitate to contact me.
Best,
Yaoyao
Hi @yaoyao-liu
Thanks for your kind and prompt reply. I will check that project for the BOP implementation. There are some other issues that I am confused about:
1. The variable `prototypes` seems to contain all the training data (see here), and it is used to calculate the current feature means of the old classes after each incremental task (see here). Mnemonic exemplars are only used to update the corresponding samples in `prototypes`, and all historical samples are always stored (see here). That seems to violate the basic setup of continual learning, i.e., no access to historical data.
2. It seems that another augmented view of the samples is used to calculate the feature means (see here). What is the motivation for doing this?
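For concreteness, here is a minimal sketch of the kind of two-view averaging I am referring to (the `feature_extractor` argument and the horizontal flip are my assumptions, not taken from the repository):

```python
import torch
import torch.nn.functional as F

def class_mean_two_views(feature_extractor, images):
    # Sketch (my assumption): compute the class mean from both the original
    # images and their horizontally flipped views, then average the two.
    with torch.no_grad():
        feats = feature_extractor(images)                             # (N, D)
        feats_flip = feature_extractor(torch.flip(images, dims=[3]))  # flip the width axis (NCHW)
    feats = F.normalize(feats, dim=1)
    feats_flip = F.normalize(feats_flip, dim=1)
    mean = (feats.mean(dim=0) + feats_flip.mean(dim=0)) / 2
    return F.normalize(mean, dim=0)  # normalized class mean
```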
Thank you. Looking forward to your reply.
Best regards, Zhongzheng
Hello @zqiao11
Thank you so much for your question.
It is true that `prototypes` contains all the training data; however, we do not use all of it. Instead, we use `alpha_dr_herding` to record the indices of the selected exemplars, and we only use those selected exemplars to compute the mean.
The above implementation follows the LUCIR code: https://github.com/hshustc/CVPR19_Incremental_Learning
We are sorry that we did not add comments to this project. Instead, you may refer to the code for AANets, where we include detailed comments and explanations: https://github.com/yaoyao-liu/class-incremental-learning/blob/main/adaptive-aggregation-networks/trainer/base_trainer.py
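As a rough illustration of how the selection mask is used (a simplified sketch in the LUCIR style, not the exact code from this repository):

```python
import numpy as np

def class_mean_from_exemplars(features_c, alpha_dr_herding_c, num_exemplars):
    # features_c: (N, D) features of all stored samples of one class.
    # alpha_dr_herding_c: (N,) herding ranks; values in [1, num_exemplars]
    # mark the selected exemplars, 0 means not selected.
    selected = (alpha_dr_herding_c > 0) & (alpha_dr_herding_c <= num_exemplars)
    mean = features_c[selected].mean(axis=0)  # mean over selected exemplars only
    return mean / np.linalg.norm(mean)        # normalized class mean
```

So although `prototypes` stores all the training samples, only the exemplars picked out by `alpha_dr_herding` ever contribute to the class means.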
If you have any further questions, please do not hesitate to contact me.
Best regards,
Yaoyao
Hello @yaoyao-liu,
Thank you so much for your kind and comprehensive explanation. Now I have a thorough understanding of the code and the pipeline. Indeed, the stored historical training data is not involved during the incremental tasks; the confusion came from my previous misunderstanding.
I sincerely appreciate your prompt response and assistance.
Best regards, Zhongzheng
Hi @yaoyao-liu, thanks for your interesting work! I am interested in the BOP mnemonics training and would like to build further extensions on it. I found some problems when I checked the code of Mnemonics Training. It seems that `trainer/mnemonics.py` is not the complete version and does not follow the training strategy described in the paper:

1. I couldn't find the bilevel optimization of the mnemonic exemplars. There seems to be only one level of optimization of the mnemonics, based on NCE classification. But there should be another level before that: training a temporary model on the exemplars (Eq. 8) and unrolling all of its training gradients (see the sketch below).
2. I couldn't find the process of splitting the exemplars and adjusting the mnemonics of the old classes.
3. Some arguments are defined but never used, such as `self.mnemonics_lrs`.
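To make the first point concrete, here is a minimal sketch of the two-level update I expected from Eq. 8 (`model.features`, `model.fc`, and the data names are my own assumptions, not the repository's API):

```python
import torch
import torch.nn.functional as F

def mnemonics_bilevel_step(model, mnemonics, mn_labels, val_x, val_y,
                           inner_lr=0.01, outer_lr=0.1, inner_steps=1):
    # Inner level: train temporary classifier weights on the mnemonic exemplars.
    # Outer level: backpropagate through the unrolled inner steps to update
    # the exemplar pixels themselves.
    mnemonics = mnemonics.clone().requires_grad_(True)
    fast_w = model.fc.weight.clone()  # temporary copy of the classifier weights

    for _ in range(inner_steps):
        logits = F.linear(model.features(mnemonics), fast_w)
        inner_loss = F.cross_entropy(logits, mn_labels)
        # create_graph=True keeps the graph so the outer gradient can flow
        # through these unrolled updates.
        grad_w = torch.autograd.grad(inner_loss, fast_w, create_graph=True)[0]
        fast_w = fast_w - inner_lr * grad_w

    # Evaluate the temporary weights on held-out data and update the
    # mnemonics through the unrolled computation.
    outer_loss = F.cross_entropy(F.linear(model.features(val_x), fast_w), val_y)
    grad_mn = torch.autograd.grad(outer_loss, mnemonics)[0]
    return (mnemonics - outer_lr * grad_mn).detach()
```

Is this roughly what the complete version implements?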
Could you please help clarify these doubts? I am really interested in the implementation of solving the BOP and the mnemonics training. I apologize if my understanding is wrong. Thank you very much!