yaoyao-liu / meta-transfer-learning

TensorFlow and PyTorch implementation of "Meta-Transfer Learning for Few-Shot Learning" (CVPR2019)
https://lyy.mpi-inf.mpg.de/mtl/
MIT License
747 stars · 149 forks

About the correspondence between the phases proposed in the paper and the phases in the code #62

Open luhexin opened 1 year ago

luhexin commented 1 year ago

I appreciate your contribution. I've been studying this paper recently, but I have a few questions.

Question 1: In the paper, the pipeline of the proposed few-shot learning method includes three phases: (a) DNN training on large-scale data, (b) meta-transfer learning, and (c) meta-test.

[figure: framework]

In README.md you mention the pre-train phase, the meta-train phase, and the meta-test phase. Is the pre-train phase equivalent to DNN training on large-scale data? Does the meta-train phase correspond to meta-transfer learning, and the meta-test phase to meta-test?

Question 2: https://github.com/yaoyao-liu/meta-transfer-learning/blob/835b6bbac9fb81ce2ce7de89cbe367aaf7bfd42c/pytorch/trainer/meta.py#L239-L294

Does the above code correspond to the "classifier fine-tuning" step in the (c) meta-test phase? Line 258 of this code sets the model to eval mode, so it does not appear to fine-tune the base-learner.
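One point worth noting about the eval-mode confusion: in PyTorch, `model.eval()` only switches layers such as BatchNorm and Dropout to inference behavior; it does not freeze parameters or disable autograd. A minimal sketch (not the repo's code) showing that a model in eval mode can still receive gradients:

```python
import torch
import torch.nn as nn

# model.eval() freezes BatchNorm running statistics and disables Dropout,
# but parameters remain trainable and autograd still works.
model = nn.Sequential(nn.Linear(4, 8), nn.BatchNorm1d(8), nn.Linear(8, 2))
model.eval()

x = torch.randn(5, 4)
loss = model(x).sum()
loss.backward()

# Every parameter received a gradient despite eval mode.
assert all(p.grad is not None for p in model.parameters())
```

So eval mode by itself does not rule out fine-tuning the base-learner; whether fine-tuning happens depends on whether a gradient step is actually taken.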

Question 3: The dataset in the code is divided into three parts: a training set, a validation set, and a test set. Are the train samples in (a), the meta-batches in (b), and the train samples in (c) all sampled from the training set? Are the test samples in (c) sampled from the test set?
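For context on how such splits usually work in few-shot learning: it is the class set, not the individual images, that is partitioned. A hypothetical sketch (the 64/16/20 split follows the common miniImageNet convention, not necessarily this repo's loader) of sampling an N-way K-shot task:

```python
import random

# The classes (not images) are partitioned into meta-train / meta-val / meta-test.
# Pre-training (a) and meta-training (b) draw from the meta-train classes;
# meta-test tasks (c) are built entirely from the held-out meta-test classes,
# so both the "train" (support) and "test" (query) samples of a meta-test task
# come from classes unseen during training.
all_classes = list(range(100))            # e.g. 100 classes, as in miniImageNet
random.seed(0)
random.shuffle(all_classes)
meta_train, meta_val, meta_test = all_classes[:64], all_classes[64:80], all_classes[80:]

def sample_episode(class_pool, n_way=5, k_shot=1, q_query=15):
    """Sample one N-way K-shot task: support and query share classes, not images."""
    classes = random.sample(class_pool, n_way)
    support = [(c, f"img_{c}_{i}") for c in classes for i in range(k_shot)]
    query = [(c, f"img_{c}_{k_shot + i}") for c in classes for i in range(q_query)]
    return support, query

support, query = sample_episode(meta_test)               # one meta-test task
assert {c for c, _ in support} == {c for c, _ in query}  # same unseen classes
```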

yaoyao-liu commented 1 year ago

Hello @luhexin

Thanks for your interest in our work. The answers to your questions are as follows:

If you have any further questions, please feel free to contact me.

Best,

Yaoyao

luhexin commented 1 year ago

@yaoyao-liu Thanks for your reply. I'm still confused about some things.

Q1. In the classifier fine-tuning phase, is the base-learner fine-tuned with the test set?

Q2. For the classifier fine-tuning phase, I located this function, but I still don't understand how it updates the parameters of the base-learner. Are the parameters optimized according to the loss in this code? https://github.com/yaoyao-liu/meta-transfer-learning/blob/835b6bbac9fb81ce2ce7de89cbe367aaf7bfd42c/pytorch/models/mtl.py#L84
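A likely source of confusion here is that MAML-style fine-tuning does not update the stored parameters in place with an optimizer. Instead, gradients of the support-set loss are taken with `torch.autograd.grad` and used to build "fast weights" that are applied functionally in the next forward pass. A hedged sketch of the pattern (not the repo's exact implementation):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
# Base-learner: a single linear classifier (hypothetical, for illustration).
w = torch.randn(2, 4, requires_grad=True)
b = torch.zeros(2, requires_grad=True)

x_support = torch.randn(6, 4)
y_support = torch.randint(0, 2, (6,))

# Support-set loss; note there is no optimizer and no .backward().
loss = F.cross_entropy(x_support @ w.t() + b, y_support)
grads = torch.autograd.grad(loss, [w, b])

lr = 0.01
# "Fast weights": new tensors derived from the gradients. The original
# w and b are left untouched, so the meta-parameters are preserved.
fast_w, fast_b = w - lr * grads[0], b - lr * grads[1]

# Subsequent forward passes use the fast weights functionally.
logits_adapted = x_support @ fast_w.t() + fast_b
assert not torch.equal(fast_w, w)  # adaptation happened without in-place updates
```

Because the update is expressed this way, you will not see `optimizer.step()` in the inner loop; the "update" is the construction of the fast weights themselves.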

Q3. The final evaluation phase does not update the parameters of the meta-learner or the base-learner. Which part of the code does this phase correspond to?
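For reference, a typical meta-test evaluation loop (a hypothetical sketch, not the repo's code) scores the query set under `torch.no_grad()` after the per-task adaptation, so neither the meta-learner nor the base-learner's stored parameters change during the final evaluation:

```python
import torch
import torch.nn as nn

# Stand-in for an already-adapted classifier (hypothetical).
model = nn.Linear(4, 5)
x_query = torch.randn(10, 4)
y_query = torch.randint(0, 5, (10,))

before = [p.clone() for p in model.parameters()]
with torch.no_grad():  # evaluation only: no gradients, no parameter updates
    logits = model(x_query)
    acc = (logits.argmax(dim=1) == y_query).float().mean().item()

# The stored parameters are bit-for-bit unchanged after evaluation.
assert all(torch.equal(a, b) for a, b in zip(before, model.parameters()))
```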