Sha-Lab / FEAT

The code repository for "Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions"
MIT License

Pre-training hyperparameters for ResNet12 on CUB #74

Closed: DongYY127 closed this issue 2 years ago

DongYY127 commented 2 years ago

My problem is the same as #69. In my experiments, I found that the pre-training strategy has a significant effect on the final results, and I want to reproduce the ResNet12 pre-trained weights for CUB. Could you please suggest hyperparameters for pre-training ResNet12 on CUB?

For reference, ResNet12 is pre-trained on MiniImageNet with `python pretrain.py --batch_size 128 --max_epoch 500 --lr 0.1 --dataset MiniImageNet --backbone_class Res12 --schedule 350 400 440 460 480 --gamma 0.1`.

Han-Jia commented 2 years ago

Sorry for the late reply.

For CUB, please try lr = 0.1, max_epoch = 600, schedule = 400 500 550 580, and gamma = 0.1.
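
Plugging these values into the same pretrain.py interface as the MiniImageNet command above gives a command along these lines. This is only a sketch: the `--dataset CUB` value and the batch size of 128 are assumptions carried over from the MiniImageNet example, not confirmed in this reply.

```
# Hypothetical CUB pre-training command, assembled from the suggested hyperparameters.
# Assumptions: --dataset CUB is the correct dataset flag value, and --batch_size 128
# is reused from the MiniImageNet command; adjust both as needed for your setup.
python pretrain.py --batch_size 128 --max_epoch 600 --lr 0.1 --dataset CUB \
    --backbone_class Res12 --schedule 400 500 550 580 --gamma 0.1
```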