Markin-Wang / FFVT

[BMVC 2021] The official PyTorch implementation of Feature Fusion Vision Transformer for Fine-Grained Visual Categorization

About loading the ImageNet Pretrain #7

Open fuyimin96 opened 1 year ago

fuyimin96 commented 1 year ago
  1. Hi, when I looked at your code, I found that only the first 11 layers of the transformer are loaded with pretrained weights (when feature_fusion == True), and that ff_last_layer and ff_encoder_norm are trained from scratch. Am I right?
  2. If so, what is the performance when loading the 12th layer's weights into ff_last_layer and the norm into ff_encoder_norm? Thanks
Markin-Wang commented 1 year ago

Hi, thanks for your interest. Yes, exactly: we only load the pretrained weights of the first 11 layers when feature fusion is enabled. The intuition is that the input distribution seen by ff_last_layer is quite different from the one the pretrained layer was trained on. Sorry, we didn't conduct the experiment of loading all 12 layers' weights. If I remember correctly, loading the pretrained norm weights gave slightly worse results on some datasets.
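For readers wondering how such selective loading can be done in practice, here is a minimal PyTorch sketch (not the FFVT code itself) that copies pretrained weights into only the first 11 encoder blocks and leaves the fusion layer and its norm randomly initialized. The module names (`encoder.layer.{i}`, `encoder.norm`, `ff_last_layer`, `ff_encoder_norm`) and the checkpoint format are illustrative assumptions, not the repository's actual parameter names.

```python
import torch

def load_first_11_layers(model, pretrained_path, num_loaded_layers=11):
    """Load pretrained ViT weights into the first `num_loaded_layers` blocks only.

    Hypothetical helper: the 12th block and the final encoder norm are skipped,
    so modules such as ff_last_layer and ff_encoder_norm keep their random init.
    """
    pretrained = torch.load(pretrained_path, map_location="cpu")
    own_state = model.state_dict()

    kept = {}
    for name, param in pretrained.items():
        # Skip the last (12th) encoder block: its input distribution after
        # feature fusion differs from what it saw during pretraining.
        if name.startswith(f"encoder.layer.{num_loaded_layers}."):
            continue
        # Skip the final encoder norm for the same reason.
        if name.startswith("encoder.norm"):
            continue
        # Only copy parameters that exist in the model with matching shapes.
        if name in own_state and own_state[name].shape == param.shape:
            kept[name] = param

    model.load_state_dict(kept, strict=False)
    # Parameters not in `kept` (e.g. ff_last_layer.*, ff_encoder_norm.*)
    # remain randomly initialized and are trained from scratch.
    return [k for k in own_state if k not in kept]
```

With strict=False, load_state_dict simply ignores the missing entries, which is what allows the fusion-specific modules to start from scratch while the earlier blocks keep their ImageNet weights.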

fuyimin96 commented 1 year ago

Thanks for your reply