Sha-Lab / FEAT

The code repository for "Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions"
MIT License

About pre-training #5

Closed: ChengJiacheng closed this issue 5 years ago

ChengJiacheng commented 5 years ago

> Hi Fusheng,
>
> We have released the pre-trained model's weights here. Our pre-training procedure is basically the same as this: we train on the 64 training classes and validate on the 16 validation classes. Model selection is based on the nearest-neighbor classifier's accuracy on the 16 validation classes. Please refer to that repo for implementation details.
>
> Originally posted by @hexiang-hu in https://github.com/Sha-Lab/FEAT/issues/1#issuecomment-450611114

Hi hexiang,

I am planning to implement the pre-training myself. The repo you listed does not seem applicable to training the wide residual network used in the paper, so I am not sure whether I understand the pre-training procedure correctly. Is it true that I just need to add a 640*64 linear layer after the embedding (model.encoder) and then train the 64-way classifier under the cross-entropy loss?
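For concreteness, here is a minimal sketch of what I have in mind (PyTorch; the encoder, the 640-d feature size, and the commented-out hyperparameters are my assumptions, not necessarily your setup):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PretrainClassifier(nn.Module):
    """Encoder plus a 640 -> 64 linear head for supervised pre-training."""
    def __init__(self, encoder, feat_dim=640, num_classes=64):
        super().__init__()
        self.encoder = encoder                      # e.g. the WRN backbone
        self.fc = nn.Linear(feat_dim, num_classes)  # the 640*64 linear layer

    def forward(self, x):
        return self.fc(self.encoder(x))             # 64-way logits

# One training step under the 64-way cross-entropy loss
# (optimizer settings are guesses on my part):
# model = PretrainClassifier(encoder).cuda()
# opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
# logits = model(images)
# loss = F.cross_entropy(logits, labels)
# opt.zero_grad(); loss.backward(); opt.step()
```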

Any information will be greatly appreciated. Thanks!
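And for the model-selection step from the quoted comment, my understanding of the nearest-neighbor validation is roughly the following (again just a sketch; whether you use class-mean prototypes and cosine similarity is my assumption):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def nn_val_accuracy(encoder, support_x, support_y, query_x, query_y):
    """Accuracy of a nearest-neighbor classifier (to class-mean embeddings)
    on one validation episode drawn from the 16 validation classes."""
    encoder.eval()
    s = F.normalize(encoder(support_x), dim=-1)   # support embeddings
    q = F.normalize(encoder(query_x), dim=-1)     # query embeddings
    classes = support_y.unique()
    # One prototype per class: the mean of its support embeddings.
    protos = torch.stack([s[support_y == c].mean(dim=0) for c in classes])
    # Assign each query to the class with the most similar prototype.
    pred = classes[(q @ protos.t()).argmax(dim=-1)]
    return (pred == query_y).float().mean().item()
```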