Sha-Lab / FEAT

The code repository for "Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions"
MIT License

ProtoNet achieves the best acc after just 1 epoch with the given pre-trained ResNet12 weights. #45

Closed JimZAI closed 3 years ago

JimZAI commented 3 years ago

Hi, Dr. Ye. Your nice work brings me a lot of inspiration. Thank you very much!

I have tried to reproduce the results of ProtoNet with the pre-trained ResNet12 weights you provided. However, I find that the model achieves its best accuracy after just 1 epoch using the following script. In addition, when I look at the curves of val_acc and val_loss, I find that as val_loss decreases, val_acc also decreases. Is this normal?

Thank you so much!

Script used in my experiments: python train_fsl.py --max_epoch 200 --model_class ProtoNet --backbone_class Res12 --dataset MiniImageNet --way 5 --eval_way 5 --shot 5 --eval_shot 5 --query 15 --eval_query 15 --balance 0.1 --temperature 64 --temperature2 32 --lr 0.0002 --lr_mul 10 --lr_scheduler step --step_size 40 --gamma 0.5 --gpu 7 --init_weights ./saves/initialization/miniimagenet/Res12-pre.pth --eval_interval 1 --use_euclidean
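For readers unfamiliar with the flags: a minimal sketch (not the repo's code; the function and variable names are illustrative assumptions) of what --way/--shot/--query correspond to when sampling a single few-shot episode from pre-computed features:

```python
import torch

def sample_episode(features, labels, way=5, shot=5, query=15):
    # features: [N, d] float tensor, labels: [N] long tensor of class ids.
    # Returns support [way*shot, d], query [way*query, d], query labels [way*query].
    all_classes = torch.unique(labels)
    classes = all_classes[torch.randperm(len(all_classes))[:way]]
    support, query_x, query_y = [], [], []
    for i, c in enumerate(classes):
        idx = torch.nonzero(labels == c, as_tuple=True)[0]
        idx = idx[torch.randperm(len(idx))[:shot + query]]
        support.append(features[idx[:shot]])       # shot examples per class
        query_x.append(features[idx[shot:]])       # query examples per class
        query_y.append(torch.full((query,), i, dtype=torch.long))
    return torch.cat(support), torch.cat(query_x), torch.cat(query_y)
```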

VicaYang commented 3 years ago

Same here. I asked the author and here is the reply:

Q: So is it normal that the model achieves the best accuracy in the first couple epochs?

A: Some methods can get their best results in the first couple of epochs. The temperature is important. For example, we set temperature = 64 for FEAT to soften the logits. The best results could be obtained around 50 epochs.
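For anyone wondering what "soften the logits" means here: a minimal sketch, assuming ProtoNet with --use_euclidean, where the negative squared Euclidean distance between query embeddings and class prototypes is divided by the temperature before the softmax / cross-entropy (function names are illustrative, not the repo's API):

```python
import torch

def proto_logits(prototypes, queries, temperature=64.0):
    # prototypes: [way, d] class means of the support embeddings
    # queries:    [n_query, d] query embeddings
    # Negative squared Euclidean distance acts as the similarity; dividing by a
    # larger temperature makes the softmax output flatter (softer), which
    # changes the gradient scale and hence how quickly training peaks.
    dists = torch.cdist(queries, prototypes) ** 2  # [n_query, way]
    return -dists / temperature
```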

Tsingularity commented 2 years ago

same here