Seyoung9304 opened 2 years ago
Hi, this is very nice work! Thanks for your contribution.
In your code, I noticed that you freeze every batch normalization layer when fine-tuning the model:

```python
if args.stage != 'chairs':
    model.module.freeze_bn()
```

I wonder why you froze the BN layers during fine-tuning. Is there a theoretical or experimental reason for it?
Thank you in advance :)
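For context, a minimal sketch of what a `freeze_bn()` method of this kind typically does in PyTorch: it switches every `BatchNorm2d` module to eval mode, so the frozen running mean/variance are used instead of per-batch statistics, and the running statistics stop updating. The `TinyNet` model below is hypothetical, used only to illustrate the pattern; it is not the repository's actual model.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # Hypothetical minimal model for illustration only.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(8)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))

    def freeze_bn(self):
        # Put every BatchNorm layer in eval mode: running mean/var are
        # no longer updated, and normalization uses the stored running
        # statistics rather than the (possibly small) batch statistics.
        for m in self.modules():
            if isinstance(m, nn.BatchNorm2d):
                m.eval()

model = TinyNet()
model.train()       # the rest of the network stays in training mode
model.freeze_bn()   # only the BN layers are switched to eval mode
```

Note that `m.eval()` only changes the train/eval flag; the BN affine parameters (`weight`, `bias`) still receive gradients unless `requires_grad` is also set to `False`.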