Closed zhukaii closed 4 years ago
Sorry for the long delay.
I've spotted at least one thing wrong in the configuration: `"fixed_memory": true`.

With this setting, each class is allotted a fixed share of the memory budget even in the first tasks, whereas iCaRL originally stored a decreasing number of images per class as more classes arrived. If you set this option to false, I'm sure the results will improve.
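To make the difference concrete, here is a minimal sketch (a hypothetical helper, not the repo's actual code) of how many exemplars each class gets under the two policies, assuming CIFAR-100 with a 2000-image budget as in the config below:

```python
# Hypothetical illustration of the two memory policies, not code from this repo.

def exemplars_per_class(memory_size, seen_classes, total_classes, fixed_memory):
    if fixed_memory:
        # Budget is split across ALL classes from the start,
        # so early tasks store fewer exemplars than they could.
        return memory_size // total_classes
    # iCaRL's original policy: split the budget among the classes seen so far,
    # so early tasks keep many more exemplars per class.
    return memory_size // seen_classes

# After the first task (10 of 100 CIFAR-100 classes, 2000-image budget):
print(exemplars_per_class(2000, 10, 100, fixed_memory=True))   # 20 per class
print(exemplars_per_class(2000, 10, 100, fixed_memory=False))  # 200 per class
```

With `fixed_memory: false` the per-class count shrinks from 200 down to 20 as the 100 classes arrive, which is the "degressive" schedule the iCaRL paper used.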
Thanks!
Hi, I ran your code for iCaRL and got an average accuracy of 60.59%. Each point of the curve (10 classes per step) is also slightly lower than Fig. 2 in the iCaRL paper:

```
[0.889, 0.778, 0.693, 0.625, 0.575, 0.564, 0.523, 0.499, 0.477, 0.436]
```

My config is as follows:

```json
"config": {
    "model": "icarl",
    "convnet": "rebuffi",
    "dropout": 0.0,
    "herding": null,
    "memory_size": 2000,
    "temperature": 1,
    "fixed_memory": true,
    "dataset": "cifar100",
    "increment": 10,
    "batch_size": 128,
    "workers": 0,
    "threads": 1,
    "validation": 0.0,
    "random_classes": false,
    "max_task": null,
    "onehot": false,
    "initial_increment": 10,
    "sampler": null,
    "data_path": "datasets/CIFAR100",
    "lr": 2.0,
    "weight_decay": 5e-05,
    "scheduling": [49, 63],
    "lr_decay": 0.2,
    "optimizer": "sgd",
    "epochs": 70,
    "label": "icarl_cifar100_9steps",
    "autolabel": false,
    "seed": 1,
    "seed_range": null,
    "options": [
        "options/icarl/icarl_cifar100.yaml",
        "options/data/cifar100_3orders.yaml",
        [87, 0, ...]
    ],
    "save_model": "never",
    "dump_predictions": false,
    "logging": "info",
    "resume": null,
    "resume_first": false,
    "recompute_meta": false,
    "no_benchmark": false,
    "detect_anomaly": false
}
```

How can I modify it to get an average result of about 64%? Thanks!
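For reference, the 60.59% figure is simply the mean of the per-step accuracies quoted above:

```python
# Average incremental accuracy = mean of the per-step top-1 accuracies.
accuracies = [0.889, 0.778, 0.693, 0.625, 0.575, 0.564,
              0.523, 0.499, 0.477, 0.436]
avg = sum(accuracies) / len(accuracies)
print(f"{avg * 100:.2f}%")  # 60.59%
```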