GMvandeVen / continual-learning

PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.
MIT License

0 accuracy values for task-free setting #28

Closed hiteshvaidya closed 9 months ago

hiteshvaidya commented 9 months ago

Hello,

I tried the compare_task_free.py and main_task_free.py scripts for a setting where task boundaries are not available, with --iters=1 and --budget=0, but this setting either throws errors or gives accuracy values of 0 for all tasks and 1.0 for the last class of CIFAR10. I set --contexts 10 for this experiment. I would highly appreciate your help in this matter.

Thank you!

GMvandeVen commented 9 months ago

Hi, for the experiment you describe (with one class per task and no storing of data from past tasks), I would indeed expect that many continual learning methods end up with a model that only predicts the last class. Regarding the errors: if you share some more details, I can see whether I can help.

hiteshvaidya commented 9 months ago

Thanks for replying @GMvandeVen. I need to recreate the errors and will post them as soon as I encounter them again. In the meantime, could you also share whether there is a way to obtain a task matrix of accuracies, so I can compute metrics like BWT, FWT, Forgetting Measure, and Learning Accuracy?

GMvandeVen commented 9 months ago

To compute a task matrix of accuracies, you can use the flag --results-dict when running main.py. At the end of each task, the accuracy is then computed for each task so far and stored in 'plotting_dict': https://github.com/GMvandeVen/continual-learning/blob/11215d2ba745d504740d763156f5a38fecf25a49/main.py#L343-L345

If you want to do the above while also computing the accuracy for future tasks, you can change this if-statement: https://github.com/GMvandeVen/continual-learning/blob/11215d2ba745d504740d763156f5a38fecf25a49/eval/evaluate.py#L91-L98

Hope this helps!
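For reference, once such a task matrix is available (with entry `acc[i, j]` being the accuracy on task `j` after finishing training on task `i`), the standard metrics from the continual learning literature can be computed in a few lines of NumPy. This is a generic sketch, not code from this repository; FWT additionally requires baseline accuracies of a randomly initialized model, so it is omitted here:

```python
import numpy as np

def bwt(acc):
    """Backward transfer: average change in accuracy on each earlier task
    between right after learning it and after the final task."""
    T = acc.shape[0]
    return float(np.mean([acc[T - 1, i] - acc[i, i] for i in range(T - 1)]))

def forgetting(acc):
    """Average forgetting: for each earlier task, the gap between its best
    accuracy achieved during training and its accuracy after the final task."""
    T = acc.shape[0]
    return float(np.mean([np.max(acc[:T - 1, j]) - acc[T - 1, j]
                          for j in range(T - 1)]))

def learning_accuracy(acc):
    """Average accuracy on each task measured right after it was learned
    (the diagonal of the task matrix)."""
    return float(np.mean(np.diag(acc)))

# Example 3-task matrix: rows = after training task i, columns = tested task j.
acc = np.array([[0.9, 0.1, 0.1],
                [0.6, 0.9, 0.1],
                [0.5, 0.7, 0.9]])
```

With this example matrix, `bwt(acc)` is -0.3, `forgetting(acc)` is 0.3, and `learning_accuracy(acc)` is 0.9.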

hiteshvaidya commented 9 months ago

Here's the error that I got,

(cl-pytorch) [hvaidya@forest.usf.edu@GPU12 continual-learning]$ ./compare_task_free.py --experiment=CIFAR10 --scenario=class --iters 1 --budget 1 --contexts 10 --replay none --joint --stream academic-setting
usage: ./compare_task_free.py [-h] [--seed SEED] [--n-seeds N_SEEDS] [--no-gpus] [--no-save] [--full-stag STAG] [--full-ltag LTAG] [--data-dir D_DIR] [--model-dir M_DIR] [--plot-dir P_DIR] [--results-dir R_DIR]
                              [--time] [--visdom] [--results-dict] [--acc-n ACC_N] [--experiment {splitMNIST,permMNIST,CIFAR10,CIFAR100}] [--stream {fuzzy-boundaries,academic-setting,random}] [--fuzziness ITERS]
                              [--scenario {task,domain,class}] [--contexts N] [--iters ITERS] [--batch BATCH] [--no-norm] [--conv-type {standard,resNet}] [--n-blocks N_BLOCKS] [--depth DEPTH]
                              [--reducing-layers RL] [--channels CHANNELS] [--conv-bn CONV_BN] [--conv-nl {relu,leakyrelu}] [--global-pooling] [--fc-layers FC_LAY] [--fc-units N] [--fc-drop FC_DROP]
                              [--fc-bn FC_BN] [--fc-nl {relu,leakyrelu,none}] [--z-dim Z_DIM] [--singlehead] [--lr LR] [--optimizer {adam,sgd}] [--momentum MOMENTUM] [--pre-convE] [--convE-ltag LTAG]
                              [--seed-to-ltag] [--freeze-convE] [--recon-loss {MSE,BCE}] [--update-every N] [--replay-update N] [--xdg] [--gating-prop PROP] [--fc-units-sep N] [--epsilon EPSILON] [--c SI_C]
                              [--temp TEMP] [--budget BUDGET] [--eps-agem EPS_AGEM] [--eval-s EVAL_S] [--fc-units-gc N] [--fc-lay-gc N] [--z-dim-gc N] [--no-context-spec] [--no-si] [--no-agem]
./compare_task_free.py: error: argument --replay-update: invalid int value: 'none'

I am trying to recreate a setting where there are no task boundaries provided and no replay.

GMvandeVen commented 9 months ago

The script ./compare_task_free.py does not have an option --replay. By giving --replay none as input, you set --replay-update to none, which is not a valid value for that option.
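For context, this behavior comes from argparse's prefix matching: an unknown option like `--replay` is accepted as an unambiguous abbreviation of `--replay-update`. A minimal standalone illustration (a hypothetical parser, not the script's actual one):

```python
import argparse

# Toy parser with only one relevant option; allow_abbrev defaults to True,
# so "--replay" is silently treated as an abbreviation of "--replay-update".
parser = argparse.ArgumentParser(prog="compare_task_free.py")
parser.add_argument("--replay-update", type=int, default=1)

args = parser.parse_args(["--replay", "5"])
print(args.replay_update)  # the value 5 landed on --replay-update

# "--replay none" therefore fails int() conversion and exits with a usage
# error, which is exactly the message seen above. Passing
# allow_abbrev=False to ArgumentParser disables this prefix matching.
```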

hiteshvaidya commented 9 months ago

So can I still run no replay and one class per task with no task boundaries using ./compare_task_free.py? And apart from the changes you suggested in main.py, are there any other changes needed so that I can obtain a task matrix for all methods with compare_task_free.py?

Thanks for your help!

GMvandeVen commented 9 months ago

In principle you can use ./compare_task_free.py with one class per task and no replay, but note that a substantial amount of the methods that are compared in this script expect to store data and/or use replay.

Regarding the task matrices: with the changes I described it should indeed be possible to obtain them, although you will of course have to make a few changes to the code yourself to get them in the format you want.

hiteshvaidya commented 9 months ago

I made the changes described in https://github.com/GMvandeVen/continual-learning/issues/28#issuecomment-1838970197 and removed the `or (i+1 <= current_context)`, so that a task matrix is stored in the store/results folder. But the results folder still contains text files with only a single accuracy value, not a task matrix. I would highly appreciate your help here @GMvandeVen

GMvandeVen commented 9 months ago

The values of the task matrix should then be stored in the dictionary `plotting_dict`. This dictionary is not written out to a text file by default; you would have to change the code yourself to do that.
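As a starting point, a generic way to dump such a dictionary to a JSON file. This is a sketch under the assumption that `plotting_dict` contains plain Python containers plus NumPy arrays or scalars; adjust the conversion to the dictionary's actual structure:

```python
import json
import numpy as np

def save_results_dict(plotting_dict, path):
    """Write a (possibly nested) results dictionary to a JSON file,
    converting NumPy types that json cannot serialize directly."""
    def to_serializable(obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        if isinstance(obj, (np.floating, np.integer)):
            return obj.item()
        raise TypeError(f"not JSON-serializable: {type(obj)}")
    with open(path, "w") as f:
        json.dump(plotting_dict, f, indent=2, default=to_serializable)
```

Calling this at the end of training (e.g. `save_results_dict(plotting_dict, "store/results/task_matrix.json")`) would give you the full matrix in one file instead of a single accuracy value.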