For few-shot learning, the test accuracy has some variance depending on which test episodes are selected. One suggestion is to early-stop the training using the validation set, e.g., evaluate the model with the checkpoint that has the best validation accuracy.
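For example, something along these lines (a minimal TF1-style sketch; `train_op`, `num_train_episodes`, `val_interval`, `evaluate_on_val`, and `evaluate_on_test` are placeholder names, not from this repo):

```python
# Sketch of best-val-checkpoint selection for episodic training.
# The training op, episode counts, and evaluate_* helpers are placeholders
# for whatever your training script already defines.
import tensorflow as tf

saver = tf.train.Saver(max_to_keep=1)
best_val_acc = 0.0

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for episode in range(num_train_episodes):
        sess.run(train_op)                      # one training episode
        if episode % val_interval == 0:
            val_acc = evaluate_on_val(sess)     # mean accuracy over val episodes
            if val_acc > best_val_acc:          # keep only the best-val checkpoint
                best_val_acc = val_acc
                saver.save(sess, "checkpoints/best_val")

    # At test time, restore the best-val checkpoint instead of the last one.
    saver.restore(sess, "checkpoints/best_val")
    test_acc = evaluate_on_test(sess)
```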
Thanks. I've tried testing the model with the best val checkpoint. However, the issue still exists. Btw, there is an accuracy gap of 2%~5% between the val and test results. Is that normal? Thanks for helping.
The gap between val and test results is normal. Can you leave your email? I can send you my TF code for the semi-supervised experiment.
Ok, thanks! My email is 1159516387bs at gmail dot com
Sent.
Thank you kindly!
Hello! Sorry to bother you on a closed issue. Could I have the TPN-semi code as well? I am not sure how to modify the TPN code into TPN-semi. Thank you!
Hi @Hugo101, can you leave your email?
Hi @csyanbin , my email is lichangbin101@gmail.com Thank you so much!
Hello @csyanbin,
Could you please send me the TPN-semi code as well?
My email: abdoh86@gmail.com
Thanks!
Hello @csyanbin, I am very interested in your work on TPN-Semi and would like to learn more about it. Would it be possible for you to share the source code with me? I would greatly appreciate it. My email is jingmingxu97@gmail.com
Best regards.
Hi, congrats on the work! I ran into a problem when running TPN in the semi-supervised setting on miniImagenet. I split each class 40% : 60% into labeled and unlabeled data, and train with 5 extra unlabeled examples per support class (the same setting as in https://arxiv.org/abs/1803.00676). However, I only get an accuracy of ~51.3%, which is about 1.5 points lower than the number in your paper. Do you know why this might happen?
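In case it helps, here is roughly how I build the semi-supervised episodes (a numpy sketch; the pool variables and sampler names are mine, not from the repo):

```python
# Per class: 40% of images go to the labeled pool, 60% to the unlabeled pool.
# Each episode draws n_shot support + n_query query images from the labeled
# pool and 5 extra unlabeled images per support class, following the
# Ren et al. (2018) setting.
import numpy as np

def split_class(images, labeled_ratio=0.4, rng=np.random):
    idx = rng.permutation(len(images))
    n_labeled = int(labeled_ratio * len(images))
    return images[idx[:n_labeled]], images[idx[n_labeled:]]

def sample_episode(labeled_pool, unlabeled_pool, n_way=5, n_shot=1,
                   n_query=15, n_unlabeled=5, rng=np.random):
    # labeled_pool / unlabeled_pool: lists of per-class image arrays
    classes = rng.choice(len(labeled_pool), n_way, replace=False)
    support, query, unlabeled = [], [], []
    for c in classes:
        lab_idx = rng.choice(len(labeled_pool[c]), n_shot + n_query, replace=False)
        support.append(labeled_pool[c][lab_idx[:n_shot]])
        query.append(labeled_pool[c][lab_idx[n_shot:]])
        unl_idx = rng.choice(len(unlabeled_pool[c]), n_unlabeled, replace=False)
        unlabeled.append(unlabeled_pool[c][unl_idx])
    return (np.concatenate(support), np.concatenate(query),
            np.concatenate(unlabeled))
```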