Closed arnabphoenix closed 1 year ago
The experiment you are running is CIFAR-100 with 4 tasks of 25 classes each, and not with the same class ordering proposed in the paper. In our survey we provide results for 10 tasks of 10 classes each, and for 11 tasks with 50 classes in the first task and 5 classes in each of the remaining ones. Therefore, I'm not sure what you want to compare against. Please provide more context.
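For reference, the per-task class counts for the splits mentioned above can be sketched like this (a minimal illustration; the helper name `make_splits` is hypothetical and not part of the repo):

```python
def make_splits(num_classes=100, nc_first_task=None, num_tasks=10):
    """Return the number of classes assigned to each task.

    If nc_first_task is None, classes are divided evenly; otherwise the
    first task gets nc_first_task classes and the rest are split evenly.
    """
    if nc_first_task is None:
        return [num_classes // num_tasks] * num_tasks
    rest = num_classes - nc_first_task
    return [nc_first_task] + [rest // (num_tasks - 1)] * (num_tasks - 1)

# 10 tasks of 10 classes each
print(make_splits(num_tasks=10))        # [10, 10, ..., 10]
# 11 tasks: 50 classes first, then 5 classes per task
print(make_splits(nc_first_task=50, num_tasks=11))
# the 4 x 25 split used in this issue
print(make_splits(num_tasks=4))         # [25, 25, 25, 25]
```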
TAw Forg is task-aware forgetting. TAg Forg is task-agnostic forgetting.
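To make the distinction concrete, here is a minimal sketch of one common forgetting definition (maximum previous accuracy on a task minus its final accuracy, averaged over earlier tasks). The accuracy matrices below are made-up illustrative numbers, not results from the repo; task-aware accuracies are typically higher because the task identity is given at test time, so TAw forgetting is usually lower than TAg forgetting:

```python
import numpy as np

def forgetting(acc):
    """acc[t, j] = accuracy on task j after training on task t (lower-triangular).

    Forgetting after the last task: for each earlier task, the drop from its
    best accuracy seen so far to its final accuracy, averaged over tasks.
    """
    T = acc.shape[0]
    drops = [acc[:T - 1, j].max() - acc[T - 1, j] for j in range(T - 1)]
    return float(np.mean(drops))

# Hypothetical 3-task accuracy matrices
taw = np.array([[0.9, 0.0, 0.0],   # task-aware evaluation
                [0.8, 0.9, 0.0],
                [0.7, 0.8, 0.9]])
tag = np.array([[0.9, 0.0, 0.0],   # task-agnostic evaluation
                [0.5, 0.8, 0.0],
                [0.3, 0.5, 0.8]])
print(forgetting(taw))  # TAw Forg
print(forgetting(tag))  # TAg Forg
```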
Respected Sir, thank you very much for clarifying and pointing me in the right direction. Now the accuracies match.
```
============================================================================================================
Arguments =
approach: lwf
batch_size: 64
clipping: 10000
datasets: ['cifar100']
eval_on_train: False
exp_name: None
fix_bn: False
gpu: 0
gridsearch_tasks: -1
keep_existing_head: False
last_layer_analysis: False
log: ['disk']
lr: 0.1
lr_factor: 3
lr_min: 0.0001
lr_patience: 5
momentum: 0.0
multi_softmax: False
nc_first_task: None
nepochs: 200
network: resnet32
no_cudnn_deterministic: False
num_tasks: 4
num_workers: 4
pin_memory: False
pretrained: False
results_path: ../results
save_models: False
seed: 0
stop_at_task: 0
use_valid_only: False
warmup_lr_factor: 1.0
warmup_nepochs: 0
weight_decay: 0.0
============================================================================================================
Approach arguments =
T: 2
lamb: 1
============================================================================================================
Exemplars dataset arguments =
exemplar_selection: random
num_exemplars: 0
num_exemplars_per_class: 0
============================================================================================================
```
Task splits: [(0, 25), (1, 25), (2, 25), (3, 25)]. The accuracy coming is