[Closed] AristotelisPap closed this issue 6 years ago
Hi Chelsea,
Thank you very much for your detailed response. I think I understood your explanations, but I have one more question about the supervised classification setting. Suppose you want to do N-way K-shot classification. In your experiments, is the batch size of the inner gradient N x K? (I ask because when main.py is called, the batch size is set to 1, but later the tensor is defined as batch size x number of classes; given what you told me counts as a task in the classification setting, I think N x K is the correct answer.) Moreover, at test time, do you still use the same batch size, i.e. N x K? Last, at test time, you simply use gradient descent, right?
Thank you in advance.
Aris Papadopoulos
Yes, that is correct.
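For concreteness, a minimal sketch of the batching confirmed above, for 5-way 1-shot classification (the feature size is a hypothetical placeholder, not the repo's actual tensor shapes):

```python
import numpy as np

# Illustrative support set for a single 5-way 1-shot task:
# the inner-gradient batch contains N * K examples.
N, K = 5, 1                 # N-way, K-shot
feature_dim = 64            # hypothetical feature size, for illustration only

support_x = np.random.randn(N * K, feature_dim)   # inner-gradient batch
support_y = np.repeat(np.arange(N), K)            # K labels per class

print(support_x.shape)      # (5, 64): inner-gradient batch size is N * K = 5
print(support_y.tolist())   # [0, 1, 2, 3, 4]
```

So the outer "batch size = 1" counts tasks per meta-batch, while each task contributes its own N * K examples to the inner gradient.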
Both the questions and the answers are very good; they helped me understand the paper better. Many thanks!
Hi Chelsea,
I am new to the field and, after reading your paper, I have a few questions about it.
1) In Section 2.2, at the point where the algorithm computes the adapted parameters with gradient descent, the paper states that it is possible to use multiple gradient updates rather than just one. How can someone extend it to multiple gradient updates? If I understand correctly, you mean doing several updates (more than one) to calculate theta_i' with gradient descent and then proceeding to the meta-update stage? If yes, then won't the meta-update stage need to calculate gradients of 3rd order or more, right?
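For what it's worth, on a toy 1-D quadratic task loss (an illustrative stand-in, not the paper's model) the multi-step meta-gradient works out to a product of second-derivative factors rather than derivatives of higher order: each inner step contributes one Jacobian factor (1 - alpha * L''). A minimal numpy sketch with a finite-difference check:

```python
# Hedged sketch: several inner gradient steps on a toy quadratic task loss
# L(theta) = 0.5 * (theta - c)**2, whose gradient is (theta - c) and whose
# second derivative is 1. With S inner steps, the chain rule gives a
# meta-gradient of L'(theta_S) * (1 - alpha * L'')**S -- repeated *second*
# derivatives, chained, not third-order terms.
alpha, S, c = 0.1, 3, 2.0

def adapt(theta):
    for _ in range(S):
        theta = theta - alpha * (theta - c)   # one inner gradient step
    return theta

def meta_loss(theta):
    return 0.5 * (adapt(theta) - c) ** 2      # outer loss at adapted params

theta0 = 0.5
# analytic meta-gradient: L'(theta_S) * prod over steps of (1 - alpha * 1)
analytic = (adapt(theta0) - c) * (1 - alpha) ** S

# finite-difference check of the same quantity
eps = 1e-6
numeric = (meta_loss(theta0 + eps) - meta_loss(theta0 - eps)) / (2 * eps)
print(abs(analytic - numeric) < 1e-6)   # True: the two gradients agree
```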
2) What do you consider to be a task in the classification setting? Is it the case that each class represents a task? In the regression setting, does the distribution over tasks correspond to the joint distribution of amplitude and phase?
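In the paper's sinusoid setting, a task corresponds to one (amplitude, phase) pair; a small sketch of sampling one such task (the amplitude and phase ranges follow the paper's stated [0.1, 5.0] and [0, pi]; the value of K here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# One regression "task" = one (amplitude, phase) pair drawn from the
# task distribution; K input points are then sampled for that task.
amplitude = rng.uniform(0.1, 5.0)
phase = rng.uniform(0.0, np.pi)
K = 10                                   # shots per task (arbitrary here)
x = rng.uniform(-5.0, 5.0, size=(K, 1))
y = amplitude * np.sin(x + phase)        # targets for this specific task

print(x.shape, y.shape)                  # (10, 1) (10, 1)
```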
3) How "similar" should the tasks in the distribution be for the MAML algorithm to be effective? Is there a measure of similarity?
4) How did you apply the first-order approximation of MAML? The paper states that in that case "the second derivatives are omitted. Note that the resulting method still computes the meta-gradient at the post-update parameter values theta_i', which provides for effective meta-learning." So, what is the difference with the original algorithm? Which step changes?
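A hedged 1-D sketch of the difference, on a toy quadratic loss rather than the paper's network: the only step that changes is the meta-update, where the Jacobian of the inner update is replaced by the identity, while the gradient is still evaluated at the post-update parameters:

```python
# Illustrative toy loss L(theta) = 0.5 * (theta - c)**2; gradient (theta - c),
# second derivative 1. Not the paper's model -- just to show which term drops.
alpha, c, theta0 = 0.1, 2.0, 0.5
theta_adapted = theta0 - alpha * (theta0 - c)   # one inner gradient step

# Full MAML meta-gradient: chain rule through the inner update,
# d theta'/d theta = (1 - alpha * L'') with L'' = 1 here.
full_grad = (theta_adapted - c) * (1 - alpha)

# First-order approximation: omit the second-derivative term, i.e. treat
# d theta'/d theta as the identity; the gradient is still evaluated at the
# *post-update* parameters theta', as the quoted passage says.
fo_grad = theta_adapted - c

print(full_grad, fo_grad)
```

The two gradients differ only by the (1 - alpha * L'') Jacobian factor, which is why the approximation is cheap yet still points in a useful direction.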
5) I see your algorithm as trying to predict (using a gradient through a gradient) the best directions to move in now so that the model ends up close to the optimal parameters in the future (where optimality depends on which tasks are shown at test time). Is this a correct intuition?
Thank you in advance for your time.
Aris Papadopoulos