DonkeyShot21 / cassle

Official repository for the paper "Self-Supervised Models are Continual Learners" (CVPR 2022)

Some questions about lower and upper bounds #15

Closed · AndrewTal closed this issue 1 year ago

AndrewTal commented 1 year ago

Hi,

[Screenshot 2023-06-02 12:56:14]

I have some questions regarding how the upper and lower bounds are computed, taking class-incremental learning as an example:

In supervised learning, the lower bound (fine-tuning) is trained task by task, i.e., fine-tune on Task 1 -> fine-tune on Task 2 -> ...; whereas the upper bound (offline) trains a single model on all the data pooled together.

Regarding SimCLR, my understanding is that the lower bound (fine-tuning) runs the SSL (self-supervised learning) stage sequentially, i.e., SSL on Task 1 -> SSL on Task 2 -> ..., followed by linear evaluation. The upper bound (offline) performs SSL on the entire dataset and then runs linear evaluation. Is my understanding correct?

DonkeyShot21 commented 1 year ago

Yes, you understood correctly. To summarize, everything is the same as in supervised continual learning, except for the following:

1. we swap the supervised loss for the SSL loss;
2. we use linear evaluation to assess the quality of the features at the end of training.
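For concreteness, here is a minimal sketch of the two bounds. `init_backbone`, `train_ssl`, and `linear_eval` are hypothetical stubs standing in for the actual training and evaluation code, not functions from this repo:

```python
# Minimal sketch of the two evaluation protocols. The three helpers
# below are hypothetical placeholders, not this repo's API.

def init_backbone():
    return {"seen": []}            # stand-in for a fresh encoder

def train_ssl(model, data):
    model["seen"] += list(data)    # stand-in for one SSL training stage
    return model

def linear_eval(model):
    return len(model["seen"])      # stand-in for linear-probe accuracy

def lower_bound(tasks):
    """Fine-tuning: SSL on Task 1 -> SSL on Task 2 -> ..., then linear eval."""
    model = init_backbone()
    for task_data in tasks:
        model = train_ssl(model, task_data)
    return linear_eval(model)

def upper_bound(tasks):
    """Offline: SSL once on the union of all task data, then linear eval."""
    model = init_backbone()
    pooled = [x for task_data in tasks for x in task_data]
    return linear_eval(train_ssl(model, pooled))

tasks = [[1, 2], [3, 4], [5, 6]]   # toy task split
print(lower_bound(tasks), upper_bound(tasks))
```

The only structural difference is where the SSL stage sits: inside the task loop for the lower bound, once over the pooled data for the upper bound.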

In addition, when computing forgetting we need the accuracy after each task, so we also run linear evaluation on the intermediate checkpoints.
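In case it helps, here is a runnable sketch of the standard average-forgetting computation, assuming `acc[t][i]` holds the linear-eval accuracy on task `i` after training through task `t` (the numbers are made up for illustration, not results from the paper):

```python
# acc[t][i]: linear-eval accuracy on task i after training through task t.
# Illustrative numbers only.
acc = [
    [80.0,  None,  None],   # after task 1
    [72.0,  78.0,  None],   # after task 2
    [65.0,  70.0,  76.0],   # after task 3
]
T = len(acc)

# Forgetting of task i: best accuracy ever achieved on it during training,
# minus the accuracy on it at the end of training.
forgetting = [
    max(acc[t][i] for t in range(i, T - 1)) - acc[T - 1][i]
    for i in range(T - 1)
]
avg_forgetting = sum(forgetting) / len(forgetting)
print(forgetting, avg_forgetting)   # [15.0, 8.0] 11.5
```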

AndrewTal commented 1 year ago

Clear, thanks!