arthurdouillard / incremental_learning.pytorch

A collection of incremental learning paper implementations including PODNet (ECCV20) and Ghost (CVPR-W21).
MIT License

Trying to reproduce BiC #35

Open billpsomas opened 3 years ago

billpsomas commented 3 years ago

Hello Arthur,

Congratulations on your contributions to incremental learning.

I am trying to reproduce the BiC results with the following command:

```
python -minclearn --options options/bic/bic_cifar100.yaml options/data/cifar100_3orders.yaml \
    --increment 10 --initial-increment 50 --fixed-memory --temperature 2 \
    --data-path data/ --device 0
```

The average incremental accuracy I am getting is ~51%, which is ~5% lower than the one reported in the paper. Is there anything wrong with the command?
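For context, the average incremental accuracy (as commonly defined, e.g. in iCaRL) is the mean of the top-1 test accuracies measured after each incremental task. A quick sketch with made-up numbers (the six tasks correspond to the 50 + 5×10 class splits above):

```python
# Hypothetical top-1 accuracies after each of the 6 tasks;
# these numbers are illustrative only, not from any actual run.
task_accuracies = [0.80, 0.70, 0.62, 0.55, 0.48, 0.40]

# Average incremental accuracy = mean of the per-task accuracies.
avg_inc_acc = sum(task_accuracies) / len(task_accuracies)
print(f"average incremental accuracy: {avg_inc_acc:.4f}")
```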

Thank you in advance :)

arthurdouillard commented 3 years ago

The command does look correct to me.

I have found some performance loss when using recent PyTorch versions (no idea why...). Have you tried with torch==1.2.0?

billpsomas commented 3 years ago

Oh really? I am using torch 1.7.1. I will try with an older version and we'll see! Quite weird, though...

arthurdouillard commented 3 years ago

At least, the version had a significant impact on PODNet's NME results (https://github.com/arthurdouillard/incremental_learning.pytorch/issues/31#issuecomment-766335321).
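For readers unfamiliar with it, PODNet's NME head classifies by nearest-mean-of-exemplars, which depends directly on the extracted feature values, so small numerical differences between PyTorch versions can plausibly shift its predictions. A minimal sketch of the NME rule (function name is illustrative, not this repo's actual API):

```python
import torch
import torch.nn.functional as F

def nme_predict(features, class_means):
    """Nearest-Mean-of-Exemplars (NME): L2-normalize the features and the
    per-class exemplar means, then assign each sample to the class whose
    mean is closest in Euclidean distance."""
    feats = F.normalize(features, dim=1)
    means = F.normalize(class_means, dim=1)
    dists = torch.cdist(feats, means)  # shape: (batch, num_classes)
    return dists.argmin(dim=1)
```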

zhengjin11 commented 2 years ago

> Oh really? I am using torch 1.7.1. I will try with an older version and we'll see! Quite weird, though...

Have you fixed the problem? I have the same problem.

arthurdouillard commented 2 years ago

Hello,

No, I haven't looked into the problem. Have you tried using torch==1.2.0 as I suggested? Otherwise, you can look at DER, which modifies this repository and proposes its own BiC (slightly different from mine): https://github.com/Rhyssiyan/DER-ClassIL.pytorch

If you are just looking to add more baselines to your paper, I'd suggest WA instead (also see DER's repo). Like BiC, it is a classifier recalibration, but simpler and more efficient.
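For readers unfamiliar with WA (Weight Aligning, from "Maintaining Discrimination and Fairness in Class Incremental Learning", CVPR 2020): after training on a new task, it simply rescales the new-class rows of the classifier weight matrix by the ratio of average weight norms, with no extra training stage. A minimal sketch (function name is mine, not from either repo):

```python
import torch

def weight_align(fc_weight, num_old_classes):
    """Weight Aligning (WA): rescale the new-class rows of the classifier
    weight matrix so their average L2 norm matches the old-class rows'."""
    old_norm = fc_weight[:num_old_classes].norm(dim=1).mean()
    new_norm = fc_weight[num_old_classes:].norm(dim=1).mean()
    gamma = old_norm / new_norm
    aligned = fc_weight.clone()
    aligned[num_old_classes:] *= gamma  # old-class weights are untouched
    return aligned
```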

PS: if you find something to improve my version of BiC, please share it with us.

zhengjin11 commented 2 years ago

Thanks for your reply. I have tried two environment versions.

1. python==3.6.13, torch==1.2.0, run command:

```
CUDA_VISIBLE_DEVICES=0 python -minclearn --options options/bic/bic_cifar100.yaml options/data/cifar100_1orders.yaml \
    --initial-increment 50 --increment 10 --fixed-memory \
    --device 0 --label bic_base50_inc10_torch_1.2.0 \
    --data-path /data/Public/Datasets/cifar-100-python --save-model task
```

Result at the last step:

```
      "task_id": 5,
      "accuracy": {
        "total": 0.371,
        "00-09": 0.411,
        "10-19": 0.353,
        "20-29": 0.227,
        "30-39": 0.312,
        "40-49": 0.372,
        "50-59": 0.37,
        "60-69": 0.178,
        "70-79": 0.282,
        "80-89": 0.373,
        "90-99": 0.834
      },
      "incremental_accuracy": 0.5001666666666668,
      "accuracy_top5": {
        "total": 0.705
      },
      "incremental_accuracy_top5": 0.803,
```

2. python==3.8.5, torch==1.8.1, run command:

```
CUDA_VISIBLE_DEVICES=0 python -minclearn --options options/bic/bic_cifar100.yaml options/data/cifar100_1orders.yaml \
    --initial-increment 50 --increment 10 --fixed-memory \
    --device 0 --label bic_base50_inc10_torch_1.8.1 \
    --data-path /data/Public/Datasets/cifar-100-python --save-model task
```

Result at the last step:

```
      "task_id": 5,
      "accuracy": {
        "total": 0.38,
        "00-09": 0.432,
        "10-19": 0.334,
        "20-29": 0.204,
        "30-39": 0.297,
        "40-49": 0.355,
        "50-59": 0.42,
        "60-69": 0.283,
        "70-79": 0.329,
        "80-89": 0.29,
        "90-99": 0.851
      },
      "incremental_accuracy": 0.5095,
      "accuracy_top5": {
        "total": 0.72
      },
      "incremental_accuracy_top5": 0.8260000000000001,
```

It seems that there is not much difference between the two. In both runs the incremental_accuracy at the last step is about 0.50, which is lower than the result reported in the paper.

Simple-meimei commented 1 year ago

Hello, dear author.

It seems that your BiC implementation does not train the parameters of the bias layer, or maybe I just could not find where it does. Can you give me a suggestion? Thanks a lot.
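For reference, the bias-correction stage described in the BiC paper (Wu et al., 2019) is usually implemented as a tiny two-parameter layer whose alpha/beta are the only trainable parameters in a second stage, trained on a small held-out split balanced across old and new classes. This is a generic sketch of that idea, not this repository's actual code; checking whether such parameters appear in the stage-2 optimizer is one way to verify the question above:

```python
import torch
import torch.nn as nn

class BiasLayer(nn.Module):
    """BiC-style bias correction: new-class logits are rescaled as
    alpha * logit + beta; old-class logits pass through unchanged."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, logits, num_old_classes):
        old = logits[:, :num_old_classes]
        new = self.alpha * logits[:, num_old_classes:] + self.beta
        return torch.cat([old, new], dim=1)

# Stage 2 (sketch): freeze the backbone and classifier, and optimize only
# the two bias parameters on a balanced held-out validation split.
bias_layer = BiasLayer()
optimizer = torch.optim.SGD(bias_layer.parameters(), lr=0.001)
```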