G-U-N / ECCV22-FOSTER

The official PyTorch implementation for the ECCV22 paper "FOSTER: Feature Boosting and Compression for Class-Incremental Learning".
MIT License

Unable to reproduce paper results #7

Closed · LLD2 closed 1 year ago

LLD2 commented 1 year ago

Hello, thanks for sharing your work. I ran the code and found that the result of "foster" on "cifar100" is worse than the one reported in the paper. For the B0-10steps setting, the top-1 results are CNN: [93.5, 80.6, 76.97, 71.8, 69.84, 67.5, 65.63, 62.32, 61.04, 59.47] and NME: [92.7, 78.5, 73.67, 65.65, 62.32, 58.38, 56.76, 53.09, 51.79, 49.15]. Averaging the CNN results gives 70.867, which is lower than the paper's 72.90. I would appreciate it if you could help me. My json config is as follows:

```json
{
    "prefix": "init10_steps10",
    "dataset": "cifar100",
    "memory_size": 2000,
    "memory_per_class": 20,
    "fixed_memory": true,
    "shuffle": true,
    "init_cls": 10,
    "increment": 10,
    "model_name": "foster",
    "convnet_type": "resnet32",
    "device": ["0"],
    "seed": [1993],
    "beta1": 0.94,
    "beta2": 0.97,
    "oofc": "ft",
    "is_teacher_wa": false,
    "is_student_wa": false,
    "lambda_okd": 1,
    "wa_value": 1,
    "init_epochs": 200,
    "init_lr": 0.1,
    "init_weight_decay": 5e-4,
    "boosting_epochs": 170,
    "compression_epochs": 130,
    "lr": 0.1,
    "batch_size": 128,
    "weight_decay": 5e-4,
    "num_workers": 8,
    "T": 2
}
```
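
For reference, the average incremental accuracy quoted above is just the plain mean of the per-step top-1 scores; a minimal sketch of that computation, using the numbers from this run:

```python
# Average incremental accuracy: mean of the per-step top-1 accuracies.
cnn_top1 = [93.5, 80.6, 76.97, 71.8, 69.84, 67.5, 65.63, 62.32, 61.04, 59.47]
avg = sum(cnn_top1) / len(cnn_top1)
print(f"average incremental accuracy: {avg:.3f}")  # 70.867
```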

G-U-N commented 1 year ago

Hi, thanks for your interest!

It seems that you forgot to set "fixed_memory" to false when training on B0 protocols. Please check ECCV22-issue4 to understand the reason.
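
A minimal sketch of the change (the path `exps/foster.json` is illustrative, not necessarily this repo's layout; edit whichever config file you actually launch with):

```python
import json

# Illustrative config path; point this at your actual config file.
path = "exps/foster.json"
with open(path) as f:
    cfg = json.load(f)

# B0 protocols use a fixed total exemplar budget ("memory_size": 2000)
# shared across all seen classes, not a fixed per-class quota.
cfg["fixed_memory"] = False

with open(path, "w") as f:
    json.dump(cfg, f, indent=4)
```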

Hope this clarifies your queries.

LLD2 commented 1 year ago

Thanks again for your kind reply. I tried again with "fixed_memory" set to false. Although the average incremental score is higher than before, I still cannot reach the score reported in the paper. The CNN top-1 result I get is [93.5, 82.95, 78.53, 72.08, 69.64, 67.27, 66.29, 63.11, 61.78, 60.36], with an average of 71.551, whereas the paper reports 72.90. Here is my training log. Thanks a lot for your help. init0_incremental10.txt

G-U-N commented 1 year ago

1. Please try to slightly increase the values of beta1 and beta2, e.g. (0.94, 0.98) or (0.95, 0.97).
2. Keep your PyTorch version consistent; I have noticed that newer versions of PyTorch might cause a performance decline (a quick version check is sketched below).
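
For point 2, a trivial sketch for confirming which versions you are actually running, so that results can be compared like-for-like:

```python
import torch
import torchvision

# Print the installed versions so runs can be compared against a known
# environment (e.g. the torch 1.8.1 runs reported later in this thread).
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
```
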
LLD2 commented 1 year ago

Sorry to bother you. I have tried increasing the values of beta1 and beta2. For (0.94, 0.98) the outcome is [94.0, 83.05, 78.67, 73.8, 70.66, 68.78, 66.93, 63.75, 61.91, 60.66] (average 72.221). For (0.95, 0.97) the outcome is [94.0, 83.85, 79.27, 72.65, 70.84, 68.63, 67.34, 64.7, 62.8, 61.09] (average 72.517). The result in your paper is 72.90. All experiments were run with torch 1.8.1. I would deeply appreciate it if you could send me your training log.txt or trained model.pth.
init0_increment10_step10_0.94_0.98_1993_foster_resnet32_cifar100_10_10_0.94_0.98_False_False.log
init0_increment10_step10_0.95_0.97_1993_foster_resnet32_cifar100_10_10_0.95_0.97_False_False.log

G-U-N commented 1 year ago

base4_alphadecay_0.05_1993_baseline4_cosine_resnet32_cifar100_10_10_0.98_0.97_False_False_1.log

anhongchap commented 1 year ago

Hello, I looked through the log.txt you provided, but in it "compression_epochs" is trained for 170 epochs. Shouldn't it be 130 according to the code you provided? Could you please clarify this for me? Thank you!