Closed kcs6568 closed 2 years ago
This is a dictionary, usually defined in an options file.
It is used here https://github.com/arthurdouillard/incremental_learning.pytorch/blob/master/inclearn/models/lwf.py#L143 and there https://github.com/arthurdouillard/incremental_learning.pytorch/blob/master/inclearn/models/lwf.py#L152 to set the temperature and the lambda factor of the distillation loss for LwF.
And here https://github.com/arthurdouillard/incremental_learning.pytorch/blob/master/inclearn/models/lwm.py#L154 for LwM.
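To make the two knobs concrete, here is a minimal sketch of an LwF-style distillation loss in plain Python: the temperature softens both the old and new models' output distributions, and lambda weights the resulting term. Function and argument names are illustrative, not the repository's exact API.

```python
import math

def softmax(logits, temperature):
    """Temperature-scaled softmax; temperature > 1 softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def lwf_distillation_loss(old_logits, new_logits, temperature=2.0, lbda=1.0):
    """Cross-entropy between the softened old-model (teacher) and
    new-model (student) predictions, scaled by the lambda factor."""
    p_old = softmax(old_logits, temperature)
    p_new = softmax(new_logits, temperature)
    ce = -sum(p * math.log(q) for p, q in zip(p_old, p_new))
    return lbda * ce
```

The loss is minimized when the new model reproduces the old model's softened predictions, which is how LwF preserves knowledge of previous tasks without storing their data.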
You can take inspiration from the other models' options file.
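For illustration, the entry might look something like the sketch below; the key names here are guesses, so check the actual option files of the other models for the real ones.

```python
# Hypothetical shape of the distillation_config entry -- the actual key names
# used in the repository's option files may differ.
distillation_config = {
    "temperature": 2.0,  # softening temperature for the distillation loss
    "lambda": 1.0,       # weight of the distillation term in the total loss
}
```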
Beware, you are probably confusing LwF (what you wrote) and LwM (the code you pasted). These are two different models.
I think my implementation of LwF works fine, but I've never been able to make LwM work as well as the original paper reports. To be honest, I'm not sure anyone has managed it, and the official code was never released, so...
Does that answer your question?
Hello, I'm a beginner in continual learning. First of all, thank you for letting me use your wonderful codebase.
I have a question while looking at your code.
I am planning to experiment with LwF. In your code, the LwF class has a variable called "distillation_config".
May I ask what the role of this variable is? Also, is it possible to train only the LwF model with your code?