zhouhuiwang opened this issue 3 years ago
Hey!
Thanks for your positive comments about my code! I'm really glad you like it.
The retraining is actually an optional/additional step.
Once you run training.py, you have a trained model. But the authors who proposed Prototypical Networks decided to introduce this extra refinement to the model. The idea is to train the model on T* (the training and validation sets merged into a new training set) for n + p more epochs, where n is the best epoch of the initial training and p is the patience of the early-stopping mechanism employed in the training process.
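In code, the retraining step boils down to something like the minimal sketch below (assuming PyTorch; `train_one_epoch` and the dataset names are placeholders, not the actual code in this repository):

```python
# Minimal sketch of the retraining step, assuming PyTorch.
# train_one_epoch, train_set and val_set are illustrative placeholders.
from torch.utils.data import ConcatDataset

def retrain(model, train_set, val_set, best_epoch, patience, train_one_epoch):
    # T*: merge the training and validation sets into a new training set.
    t_star = ConcatDataset([train_set, val_set])
    # Train for n + p more epochs: n is the best epoch found during the
    # initial training, p is the early-stopping patience.
    for _ in range(best_epoch + patience):
        train_one_epoch(model, t_star)
    return model
```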
You see, as I wanted to replicate their experiments, I had to create retraining.py.
Once again, thanks for your feedback!
Thank you for your reply! In fact, I only started studying few-shot learning recently, and the prototypical network is a classic model, so your code helps me a lot.
I noticed you updated the protonet_weighted model, but I have not found how the weights are obtained. I have also read the paper you recommended; there, the weights are obtained by concatenating features and passing them through a linear network. I tried to do the same, but the results were not good.
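For context, what I tried looks roughly like the sketch below (assuming PyTorch; the class name, shapes, and scoring scheme are only illustrative, not taken from the paper or this repository):

```python
# Rough sketch of weighted prototypes: a linear layer scores each support
# embedding concatenated with the class mean, and the scores become weights.
import torch
import torch.nn as nn

class WeightedPrototype(nn.Module):
    def __init__(self, feature_dim: int):
        super().__init__()
        # Scores one support embedding against the unweighted class mean.
        self.scorer = nn.Linear(2 * feature_dim, 1)

    def forward(self, support: torch.Tensor) -> torch.Tensor:
        # support: (n_shot, feature_dim) embeddings of one class.
        mean = support.mean(dim=0, keepdim=True).expand_as(support)
        # Concatenate each embedding with the class mean and score it.
        scores = self.scorer(torch.cat([support, mean], dim=-1))  # (n_shot, 1)
        weights = torch.softmax(scores, dim=0)
        # Weighted prototype: convex combination of the support embeddings.
        return (weights * support).sum(dim=0)
```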
Besides, I noticed that the Omniglot dataset is not used in recent papers. In contrast, the CUB dataset is another popular dataset in few-shot learning, and I have not found results for the prototypical network on CUB. Would you like to add this to your GitHub repository?
Thank you very much again.
Hi! Sorry for the late reply. Not sure I'll be able to do that now. Feel free to fork the code and add your changes.
Hello! Thanks for your code. I am a student and I downloaded your code last month. It is great and the results are good. But I want to know: after I run training.py, why do I still need to run retraining.py? You know, sometimes it takes extra time. Thank you.