Hello, first great work!
I am trying to re-implement this paper in PyTorch! I see that this codebase is based on the official meta-transfer learning implementation (https://github.com/yaoyao-liu/meta-transfer-learning), since it's basically the same group of authors.
I'd like to ask for a few clarifications.
Is the total number of tasks during meta-training equal to the number of meta-train iterations, i.e. 15000? And is the total number of tasks during meta-testing and meta-validation still 600, both the same as in the MTL implementation?
Moreover, during meta-training in the 1-shot setting, the number of instances per class is 1 (or 5 for 5-shot) for the support set and 15 for the query set, right? While during meta-validation and meta-testing, the number of instances per class is uniformly 1 (or 5 for 5-shot) for both the support and query sets?
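To make sure I'm reading the setup correctly, here is a minimal sketch of the episode configuration I'm assuming in my PyTorch re-implementation. The config values and the `sample_episode` helper are placeholders reflecting my understanding of the questions above, not code taken from this repository:

```python
import random

# Episode settings I am assuming (placeholders, please correct me if wrong).
meta_train_cfg = {
    "num_tasks": 15000,     # one task per meta-train iteration?
    "n_way": 5,
    "k_shot": 1,            # 1 for 1-shot, 5 for 5-shot
    "query_per_class": 15,  # 15 query images per class during meta-training?
}

meta_eval_cfg = {            # used for both meta-validation and meta-testing
    "num_tasks": 600,
    "n_way": 5,
    "k_shot": 1,             # 1 for 1-shot, 5 for 5-shot
    "query_per_class": 1,    # uniformly 1 (or 5), same as k_shot?
}

def sample_episode(class_to_images, n_way, k_shot, query_per_class):
    """Sample one N-way episode and return (support, query) lists of
    (image, label) pairs. Placeholder logic, not from the official code."""
    classes = random.sample(list(class_to_images.keys()), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        imgs = random.sample(class_to_images[cls], k_shot + query_per_class)
        support += [(img, label) for img in imgs[:k_shot]]
        query += [(img, label) for img in imgs[k_shot:]]
    return support, query
```

Is this how the tasks are sampled in your experiments, or does the evaluation protocol differ in any other way?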
Thank you for your time.