Closed AndrewTal closed 1 year ago
Yes, you understood correctly. To summarize, everything is the same as in supervised continual learning, except for the following: 1) we swap the supervised loss for the SSL loss; 2) we use linear evaluation to assess the quality of the features at the end of training.
In addition, when calculating forgetting, since we need the accuracy after each task, we also perform linear evaluation on the intermediate checkpoints.
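The protocol above can be sketched as follows. This is a minimal toy example, not the repository's actual code: PCA stands in for the SSL encoder (it is fit without labels), logistic regression is the linear probe, and the data is synthetic. Note that `PCA.fit` refits from scratch on each task rather than continuing from the previous weights, so it only illustrates the evaluation protocol (probe every checkpoint, then compute forgetting), not true fine-tuning.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Toy stand-ins (assumptions): PCA plays the role of the SSL encoder,
# logistic regression the linear probe; tasks are synthetic Gaussians.
rng = np.random.default_rng(0)

def make_task(shift, n=300, d=20):
    """A synthetic task: Gaussian features, label from a thresholded coordinate."""
    X = rng.normal(shift, 1.0, size=(n, d))
    y = (X[:, 0] > shift).astype(int)
    return X, y

tasks = [make_task(shift) for shift in (0.0, 3.0, 6.0)]

def linear_eval(encoder, X, y):
    """Freeze the encoder, fit a linear probe on labelled data, return accuracy."""
    feats = encoder.transform(X)
    return LogisticRegression(max_iter=1000).fit(feats, y).score(feats, y)

acc = np.zeros((len(tasks), len(tasks)))  # acc[i, j]: accuracy on task j after task i
encoder = PCA(n_components=5)
for i, (X_i, _) in enumerate(tasks):
    encoder.fit(X_i)                      # "SSL" stage on the current task only
    for j, (X_j, y_j) in enumerate(tasks[: i + 1]):
        acc[i, j] = linear_eval(encoder, X_j, y_j)   # probe this checkpoint

# Forgetting on task j: best accuracy ever reached minus the final accuracy.
T = len(tasks) - 1
forgetting = np.mean([acc[:T, j].max() - acc[T, j] for j in range(T)])
print("accuracy matrix:\n", np.round(acc, 3))
print("average forgetting:", round(forgetting, 3))
```

The lower-triangular `acc` matrix is exactly what the intermediate linear evaluations provide: row `i` holds the probed accuracies after finishing task `i`.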
Clear, thanks!
Hi,
I have some questions regarding the calculation of upper and lower bounds, taking class incremental learning as an example:
In supervised learning, the lower bound (fine-tuning) is trained sequentially, task by task, i.e., Task 1 fine-tuning -> Task 2 fine-tuning -> ...; whereas the upper bound (offline) trains a single model on all the data pooled together.
Regarding SimCLR, my understanding is that the lower bound (fine-tuning) corresponds to the SSL (self-supervised learning) stage, where the model undergoes Task 1 SSL -> Task 2 SSL -> ..., followed by linear evaluation. The upper bound (offline) performs SSL on the entire dataset and then conducts linear evaluation. I'm not sure if my understanding is correct?
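For concreteness, the two bounds described here can be sketched with a toy example. These are assumptions for illustration only, not the repository's code: PCA stands in for the unlabelled SSL encoder, logistic regression for the linear probe, and the tasks are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Toy stand-ins (assumptions): PCA as an unlabelled "SSL" encoder,
# logistic regression as the linear probe; data is synthetic.
rng = np.random.default_rng(1)

def make_task(shift, n=300, d=20):
    X = rng.normal(shift, 1.0, size=(n, d))
    y = (X[:, 0] > shift).astype(int)
    return X, y

tasks = [make_task(s) for s in (0.0, 3.0, 6.0)]
X_all = np.vstack([X for X, _ in tasks])
y_all = np.concatenate([y for _, y in tasks])

def linear_eval(encoder, X, y):
    feats = encoder.transform(X)
    return LogisticRegression(max_iter=1000).fit(feats, y).score(feats, y)

# Lower bound (fine-tuning): "SSL" on Task 1 -> Task 2 -> ..., then one probe.
seq_encoder = PCA(n_components=5)
for X, _ in tasks:
    seq_encoder.fit(X)            # only the last task shapes the final encoder here
acc_lower = linear_eval(seq_encoder, X_all, y_all)

# Upper bound (offline): "SSL" on all the data pooled together, then one probe.
off_encoder = PCA(n_components=5).fit(X_all)
acc_upper = linear_eval(off_encoder, X_all, y_all)

print("lower bound:", round(acc_lower, 3))
print("upper bound:", round(acc_upper, 3))
```

In both cases the labels are touched only by the probe; the difference between the bounds is purely which data the encoder sees during its unsupervised stage.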