floodsung / LearningToCompare_FSL

PyTorch code for CVPR 2018 paper: Learning to Compare: Relation Network for Few-Shot Learning (Few-Shot Learning part)
MIT License

Why no model.eval() in test code? #12

Open tlittletime opened 6 years ago

tlittletime commented 6 years ago

Thanks for your code! But one thing confuses me: why didn't you use feature_encoder.eval() and relation_network.eval() in your test code? It actually has an impact on the results.

floodsung commented 6 years ago

Yes, I tried it, and interestingly I get a better result when not using eval().

ehsanmok commented 5 years ago

@floodsung You need to use eval(), otherwise the BatchNorm statistics are not fixed. See this
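
For reference, a minimal sketch of the suggested change (the module definitions below are hypothetical stand-ins just to make the snippet self-contained, not the repo's actual architecture):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the repo's feature_encoder and relation_network.
feature_encoder = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU())
relation_network = nn.Sequential(
    nn.Conv2d(128, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

# The suggested change: switch both networks to eval mode before testing,
# so BatchNorm normalizes with its running statistics instead of the
# statistics of the current test batch.
feature_encoder.eval()
relation_network.eval()

with torch.no_grad():  # gradients are not needed at test time
    support = torch.randn(5, 3, 84, 84)   # e.g. a 5-way 1-shot support set
    query = torch.randn(10, 3, 84, 84)    # query images
    support_feat = feature_encoder(support)
    query_feat = feature_encoder(query)
    # ...pair support/query features and score each pair with relation_network
```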

Bigwode commented 5 years ago

@ehsanmok Agreed, but when I tested it, test accuracy dropped by 10%.

IbsenChan commented 5 years ago

@floodsung If you do not use eval() in the test phase, the BatchNorm statistics are determined by looking at the other query images in the batch. This is cheating to some extent, because each query image should only see the support images in its episode. Determining the BatchNorm statistics from a big batch of query images turns the few-shot learning task into a many-shot learning task.
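
A small self-contained demonstration of this leakage (plain PyTorch, not this repo's code): in train mode, a query's BatchNorm output depends on which other queries it happens to be batched with; in eval mode it does not.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm2d(4)
x = torch.randn(1, 4, 8, 8)          # one query image
others = torch.randn(15, 4, 8, 8)    # other query images in the batch

# Train mode: normalization uses the statistics of the whole batch, so
# the output for x changes depending on the other queries it is batched
# with -- information leaks between query images.
bn.train()
out_alone = bn(x)
out_batched = bn(torch.cat([x, others]))[0:1]
print(torch.allclose(out_alone, out_batched))   # False

# Eval mode: normalization uses fixed running statistics, so the output
# for x is independent of the rest of the batch.
bn.eval()
out_alone = bn(x)
out_batched = bn(torch.cat([x, others]))[0:1]
print(torch.allclose(out_alone, out_batched))   # True
```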

itongworld commented 5 years ago

@IbsenChan But why does the degradation happen when using eval()? Theoretically, Relation Net should perform better than ProtoNet, i.e., above 65% (training with 5-way 5-shot), given the paper's idea of learning a metric. But with eval() I get no higher than 60%.

Do you have any ideas about the degradation? Is it a problem with the implementation or with the idea itself?

YuwenXiong commented 3 years ago

@tlittletime @Bigwode @IbsenChan @itongworld I haven't tested the code yet, but I believe it is because the authors mistakenly set momentum=1 for all batch norm layers, which makes the BN layers always overwrite their running statistics with the current batch's statistics and discard everything accumulated before. This is likely why not using eval() yields better results.
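
A small demonstration of PyTorch's momentum semantics (plain PyTorch, not this repo's code): BatchNorm updates its running stats as running = (1 - momentum) * running + momentum * batch_stat, so momentum=1 keeps only the most recent batch.

```python
import torch
import torch.nn as nn

bn_buggy = nn.BatchNorm2d(3, momentum=1)   # as in this repo
bn_default = nn.BatchNorm2d(3)             # momentum=0.1 (PyTorch default)

for _ in range(100):
    x = torch.randn(16, 3, 8, 8) * 2 + 5   # data with mean ~5, std ~2
    bn_buggy(x)
    bn_default(x)

last = torch.randn(16, 3, 8, 8)            # one atypical final batch (mean ~0)
bn_buggy(last)
bn_default(last)

# bn_buggy.running_mean now reflects only `last` (close to 0), while
# bn_default.running_mean stays near the long-run data mean (~5).
print(bn_buggy.running_mean)
print(bn_default.running_mean)
```

So with momentum=1, the running statistics used by eval() come from whichever batch happened to be seen last, which would explain why eval() hurts accuracy here.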