Closed muyuuuu closed 1 year ago
[Automated vacation reply from QQ Mail] Hello, I am currently on vacation and cannot reply to your email in person. I will respond as soon as possible after the vacation ends.
It seems that you failed at the base training stage, did you wrongly modify the code for base training?
Yes. In line 104: https://github.com/G-U-N/PyCIL/blob/master/models/fetril.py#L104
I uncommented that line; otherwise an error is reported.
your training log?
Sorry, the log was lost. Please wait 10 minutes.
Sorry.
final result:
2022-12-19 13:51:09,278 [trainer.py] => CNN top1 curve: [80.42, 71.2, 66.36, 62.06, 59.57, 56.16]
2022-12-19 13:51:09,278 [trainer.py] => CNN top5 curve: [96.72, 91.63, 89.34, 87.46, 85.46, 83.33]
I had set track_running_stats=False for all BN layers, so the accuracy is bad. Setting track_running_stats=True gives good performance.
So why does this happen? To compare against my algorithm fairly, I should set track_running_stats=False for all BN layers.
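For reference, this is roughly how I toggled the flag across the network. This is a minimal sketch of my own, not code from the PyCIL repo; the helper name and the example model are mine, and it assumes a standard PyTorch model:

```python
import torch
import torch.nn as nn

def set_bn_track_running_stats(model: nn.Module, track: bool) -> nn.Module:
    """Toggle track_running_stats on every BatchNorm layer of a model."""
    for m in model.modules():
        # _BatchNorm is the common base of BatchNorm1d/2d/3d
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.track_running_stats = track
            if not track:
                # Without tracking, BN always normalizes with the current
                # batch statistics, so the stored running estimates are unused.
                m.running_mean = None
                m.running_var = None
    return model

# toy example model (not the PyCIL backbone)
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
set_bn_track_running_stats(model, track=False)
out = model(torch.randn(2, 3, 8, 8))  # forward still works with batch stats
```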
Glad to see you have reproduced the ideal results in our framework.
For your question:
Setting `track_running_stats` to `False` will cause the `running_mean` and `running_var` to be frozen, which typically makes optimization in CNNs much harder, especially when training from scratch. You'd better learn more about how BN works in neural networks.
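For intuition, the running estimates that `track_running_stats=True` maintains are exponential moving averages, updated once per training batch; with the flag off, no such update ever happens. A pure-Python sketch of one update step (momentum 0.1 matches PyTorch's default; PyTorch actually uses the unbiased variance for the running estimate, the biased form is kept here for simplicity):

```python
def ema_update(running_mean, running_var, batch, momentum=0.1):
    """One BatchNorm-style running-statistics update (track_running_stats=True).

    running_mean / running_var are the stored estimates; batch is a list of
    activations for a single channel. Returns the updated estimates.
    """
    n = len(batch)
    batch_mean = sum(batch) / n
    batch_var = sum((x - batch_mean) ** 2 for x in batch) / n  # biased variance
    new_mean = (1 - momentum) * running_mean + momentum * batch_mean
    new_var = (1 - momentum) * running_var + momentum * batch_var
    return new_mean, new_var

# starting from the default initialization (mean 0, var 1):
m, v = ema_update(0.0, 1.0, [1.0, 2.0, 3.0])  # batch mean 2.0, batch var 2/3
```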
wait..... look at this repo (task-incremental learning): https://github.com/sahagobinda/GPM/blob/main/main_cifar100.py
All of its BN layers set track_running_stats to False, yet it gets good performance.......
maybe this is another question?
It is not my responsibility to answer questions outside our framework, thanks.
I'm sorry I was negligent
config:
final result:
I can provide the log if you need it.