Closed: HenryZ94264 closed this issue 2 years ago
This technique is introduced in the IL2M paper [1]. If you reduce the training epochs after learning Task 1, the model effectively reuses the features learned in the first task, and the smaller epoch budget reduces overfitting to the later tasks. As a result, reducing the training epochs can reduce catastrophic forgetting in incremental learning. Thanks!

[1] Eden Belouadah and Adrian Popescu. IL2M: Class Incremental Learning with Dual Memory. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
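To make the idea concrete, here is a minimal sketch of such an epoch-decay schedule. This is illustrative Python, not the repository's actual `main.py` code; the names `epochs_for_task`, `base_epochs`, and `decay_factor` are hypothetical.

```python
# Hypothetical sketch of epoch decay for incremental learning:
# train the first task with the full epoch budget, then use a
# reduced budget for all later tasks so the model mostly reuses
# (rather than overwrites) the features learned on task 1.

def epochs_for_task(task_id, base_epochs=100, decay_factor=2):
    """Return the number of training epochs for a given task index."""
    if task_id == 0:
        return base_epochs                     # full budget on the first task
    return max(1, base_epochs // decay_factor)  # decayed budget afterwards

if __name__ == "__main__":
    # e.g. 10 incremental tasks: 100 epochs for task 0, 50 for the rest
    for t in range(10):
        print(f"task {t}: {epochs_for_task(t)} epochs")
```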
Thanks for your prompt reply! I'm now running SS-IL on CIFAR-100, and it seems to achieve a better score without using epoch decay.
In the official code, main.py contains code like this at lines 59 to 64:
I'm wondering why the training epochs are reduced here; could you please explain the reasoning behind doing so? Thanks!