Closed hiteshvaidya closed 9 months ago
Thanks for taking a look at our work!
The `opt.training_data_type` is used in `data_utils.py` at L312 [here]. The `data_utils.py` file is responsible for configuring the training and validation data. If you specify `opt.training_data_type` to be `seq`, it will lead to a different configuration of the data loader in this file.
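To illustrate the difference in plain terms (this is a sketch with illustrative names, not the actual `data_utils.py` implementation): an iid-style loader visits a fully shuffled stream, while a `seq`-style loader presents all samples of class 0 before class 1, and so on.

```python
import random

def make_order(labels, training_data_type, seed=0):
    """Return sample indices in the order a loader would visit them.

    Illustrative only: 'iid' (or 'class_iid') yields a fully shuffled
    stream, while 'seq' groups the stream class-incrementally so all
    samples of class 0 come before class 1, etc.
    """
    idx = list(range(len(labels)))
    rng = random.Random(seed)
    rng.shuffle(idx)                       # shuffle samples first
    if training_data_type == "seq":
        # stable sort keeps the shuffle within each class,
        # but orders the classes incrementally
        idx.sort(key=lambda i: labels[i])
    return idx

labels = [1, 0, 2, 0, 1, 2]
print(make_order(labels, "seq"))   # indices grouped by class: 0s, then 1s, then 2s
```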
Regarding your question about the validation function: it evaluates on iid data samples, which differs from task-based continual learning papers. Section 3, "Training and evaluation protocol", in the SCALE paper provides a detailed explanation.
I'll leave this issue open for a week. Please let me know if you have any follow-up questions.
I want to obtain a task matrix of evaluation accuracies for SCALE and then check other metrics like Backward Transfer and Learning Accuracy. To obtain that, I guess I will have to change the `val_loader` and `knn_train_loader` to return task/class-wise samples?
If I am not mistaken, `knn_task_eval()` does perform task-wise evaluation, but it calculates an average accuracy over samples from all tasks and does not give a task-wise accuracy vector? The `eval_forget()` function does calculate the forgetting measure but is never referenced in the code.
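For concreteness, here is a small sketch (plain NumPy, not from the SCALE codebase) of how Backward Transfer and Learning Accuracy fall out of a T×T task matrix `R`, where `R[i, j]` is the accuracy on task `j` after training on task `i` (following the GEM definitions):

```python
import numpy as np

def backward_transfer(R):
    """BWT = mean over j < T-1 of R[T-1, j] - R[j, j] (GEM definition).

    Negative values indicate forgetting of earlier tasks.
    """
    T = R.shape[0]
    return float(np.mean([R[T - 1, j] - R[j, j] for j in range(T - 1)]))

def learning_accuracy(R):
    """Average accuracy on each task measured right after it was learned."""
    return float(np.mean(np.diag(R)))

# Toy 3-task matrix: rows = after training task i, cols = eval on task j.
R = np.array([[0.90, 0.10, 0.10],
              [0.70, 0.80, 0.10],
              [0.60, 0.60, 0.85]])
print(backward_transfer(R))   # -0.25 (forgetting on tasks 0 and 1)
print(learning_accuracy(R))   # 0.85
```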
Yes, your understanding is correct.
Hello,
I read the paper and went through the code, and realized that sequential data training never occurs in `main_supcon.py`. `run_scale.sh` sets `--training_data_type` to `class_iid` when we set the data stream as 'seq'. However, this `training_data_type` is never used in the code. As per the paper, the 'seq' setting should pass the samples in a class-incremental order when we set the `seq` data stream. However, the dataloader loads data in iid form and does not train the network on a single sequential class (e.g. 0, 1, 2, ..., 9). Therefore, even the validation function evaluates the model on iid data samples and does not generate a task matrix like those reported in many continual/lifelong learning papers such as GEM.
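For reference, a task matrix like the one reported in GEM is usually built by keeping one validation loader per task and re-evaluating all of them after finishing each task. A minimal sketch of the bookkeeping (all function names here are hypothetical placeholders, not from this repo):

```python
def build_task_matrix(model, task_loaders, train_on_task, eval_task):
    """Return R where R[i][j] = accuracy on task j after training on task i.

    `train_on_task` and `eval_task` are hypothetical stand-ins for the
    repo's actual training/evaluation routines; only the nested-loop
    bookkeeping pattern matters here.
    """
    T = len(task_loaders)
    R = [[0.0] * T for _ in range(T)]
    for i in range(T):
        train_on_task(model, task_loaders[i])            # learn task i
        for j in range(T):
            R[i][j] = eval_task(model, task_loaders[j])  # re-evaluate every task
    return R
```

Backward Transfer and forgetting measures can then be read directly off the lower triangle of `R`.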