cbfinn / maml

Code for "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
MIT License

A few questions regarding the paper #40

Closed AristotelisPap closed 6 years ago

AristotelisPap commented 6 years ago

Hi Chelsea,

I am new to the field, and after reading your paper I have a few questions regarding it.

1) In section 2.2, at the point where the algorithm computes the adapted parameters with gradient descent, the paper states that it is possible to use multiple gradient updates rather than just one. How can someone extend it to multiple gradient updates? If I understand correctly, you mean doing several updates (more than one) to calculate theta_i' with gradient descent and then going to the meta-update stage? If so, then the meta-update stage will need to calculate gradients of 3rd order or higher, right?

2) What do you consider as a task in the classification setting? Is it the case that each class represents a task? In the regression setting, does the distribution over tasks correspond to the joint distribution of amplitude and phase?

3) How "similar" the tasks into the distribution should be in order for the MAML algorithm to be effective? Is there a measure of similarity?

4) How did you apply the first-order approximation of MAML? The paper states that in that case "the second derivatives are omitted. Note that the resulting method still computes the meta-gradient at the post-update parameter values theta_i', which provides for effective meta-learning." So, what is the difference from the original algorithm? Which step changes?

5) I see your algorithm as a way of trying to predict (using the gradient through a gradient) which directions are best to move in now so that the model ends up close to the optimal parameters in the future (where optimality depends on which tasks are shown at test time). Is this a correct intuition?

Thank you in advance for your time.

Aris Papadopoulos

cbfinn commented 6 years ago
  1. No, if you go through the math, only 2nd-order terms arise, even when using multiple gradient steps. This is because the gradient steps after the first step are not w.r.t. theta. (See the sketch after this list.)
  2. In the few-shot supervised setting, a task is a set of N classes. Hence, if a dataset has 1200 classes, then there are 1200 choose N tasks total.
  3. In general, there is no known measure of similarity. The test tasks and the training tasks should be sampled from the same distribution.
  4. See the stop_grad flag in this repo. We stop the gradient through the gradient term, cutting the dependency of grad_theta on theta. (This corresponds to the stop_grad branch in the sketch below.)
  5. Roughly, yes. The directions come from the gradient, so the algorithm is trying to find an initialization such that the gradient points in the correct directions. In practice, there is not just one optimum, but a whole space of optima, particularly with large, deep, overparameterized neural networks. In this way, Figure 1 can be misleading; the optimization is more flexible than it looks.
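
For concreteness, here is a minimal sketch of points 1 and 4, written in JAX rather than the TensorFlow code of this repo. `loss_fn`, `inner_lr`, and `num_inner_steps` are placeholder names, and the `stop_grad` argument here only plays the role of the repo's flag of the same name.

```python
import jax
import jax.numpy as jnp

def inner_adapt(theta, x_support, y_support, loss_fn,
                inner_lr=0.01, num_inner_steps=5, stop_grad=False):
    """Compute the task-adapted parameters theta_i' with several gradient steps."""
    for _ in range(num_inner_steps):
        grads = jax.grad(loss_fn)(theta, x_support, y_support)
        if stop_grad:
            # First-order variant: treat the inner gradient as a constant,
            # cutting the dependency of grad_theta on theta.
            grads = jax.lax.stop_gradient(grads)
        # Each step differentiates the loss at the *current* parameters, so the
        # meta-gradient below only ever picks up second-order terms.
        theta = jax.tree_util.tree_map(lambda p, g: p - inner_lr * g, theta, grads)
    return theta

def meta_loss(theta, tasks, loss_fn, **inner_kwargs):
    """Outer objective: post-update loss on each task's query set, averaged."""
    losses = []
    for x_s, y_s, x_q, y_q in tasks:
        theta_prime = inner_adapt(theta, x_s, y_s, loss_fn, **inner_kwargs)
        losses.append(loss_fn(theta_prime, x_q, y_q))
    return jnp.mean(jnp.stack(losses))

# Meta-update: differentiate the post-update loss w.r.t. the initial theta.
# meta_grads = jax.grad(meta_loss)(theta, tasks, loss_fn)
```

Differentiating `meta_loss` w.r.t. `theta` unrolls the inner loop, so each inner step contributes at most a second derivative of the loss evaluated at that step's parameters; no third-order terms appear.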
AristotelisPap commented 6 years ago

Hi Chelsea,

Thank you very much for your detailed response. I think I understood your explanations, but I have one more question regarding the supervised classification setting. Suppose you want to do N-way K-shot classification. In your experiments, is the batch size of the inner gradient N x K? (I ask because when you call main.py you define batch size = 1, but later you define the tensor as batch size x number of classes; given what you told me a task is in the classification setting, I think N x K is the correct answer.) Moreover, during test time, do you still use the same batch size, i.e. N x K? Last, during test time, you simply use gradient descent, right?

Thank you in advance.

Aris Papadopoulos

cbfinn commented 6 years ago

Yes, that is correct.
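
For concreteness, a rough sketch (not the repo's actual data pipeline) of how one N-way K-shot task ends up with an inner batch of N x K support examples, and of test-time adaptation with plain gradient descent. `sample_classes`, `sample_examples`, and `loss_fn` are hypothetical helpers standing in for whatever data loading and model you use.

```python
import jax
import jax.numpy as jnp

def make_task(dataset, n_way=5, k_shot=1, k_query=15, rng=None):
    """Sample N classes, then K support and k_query query examples per class."""
    classes = sample_classes(dataset, n_way, rng)  # hypothetical helper
    x_s, y_s, x_q, y_q = [], [], [], []
    for label, cls in enumerate(classes):
        xs = sample_examples(dataset, cls, k_shot + k_query, rng)  # hypothetical helper
        x_s.append(xs[:k_shot]);  y_s += [label] * k_shot
        x_q.append(xs[k_shot:]);  y_q += [label] * k_query
    # The support set has n_way * k_shot examples: the inner-loop batch size.
    return (jnp.concatenate(x_s), jnp.array(y_s),
            jnp.concatenate(x_q), jnp.array(y_q))

def test_time_adapt(theta, x_support, y_support, loss_fn,
                    inner_lr=0.01, num_steps=5):
    """Meta-test time: no outer loop, just gradient descent on the N * K
    support examples, then evaluate on the held-out query examples."""
    for _ in range(num_steps):
        grads = jax.grad(loss_fn)(theta, x_support, y_support)
        theta = jax.tree_util.tree_map(lambda p, g: p - inner_lr * g, theta, grads)
    return theta
```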

hoangcuong2011 commented 5 years ago

Both the questions and answers are very good; they helped me understand the paper better. Many thanks!