Open WHQ1111 opened 3 years ago
Sorry for the confusion. The naming "maml" here indicates that we want to enable the gradient back-propagation in the learning-to-generalize (or learning-to-learn) training process.
So why do we use the Feature Wise Transformation module when we want to enable gradient back-propagation in the learning-to-generalize (or learning-to-learn) training process? Is this back-propagation similar to computing the gradient along the inner loop in MAML?
Yeah, you are right. The code is adapted from the MAML implementation, as is the naming. :-)
As we can see, lines 280 and 283 of the file 'methods/backbone.py' indicate that the Feature Wise Transformation module is used in MAML, not in the metric-based models. But this contradicts the paper, doesn't it?
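For context on the module under discussion, here is a minimal sketch of a feature-wise transformation layer of the kind described in the paper: per-channel affine parameters (gamma, beta) are sampled from learnable hyper-parameters and used to modulate intermediate feature maps. All names here (`FeatureWiseTransformation`, `theta_gamma`, `theta_beta`) are illustrative assumptions, not the repo's actual API in `methods/backbone.py`.

```python
import numpy as np

class FeatureWiseTransformation:
    """Illustrative sketch (not the repo's code): sample per-channel
    affine parameters from learnable hyper-parameters and apply them
    to a feature map, as in learned feature-wise transformation."""

    def __init__(self, num_channels, rng=None):
        self.rng = rng or np.random.default_rng(0)
        # Learnable hyper-parameters controlling the sampling scale
        # (values here are arbitrary placeholders).
        self.theta_gamma = np.full(num_channels, 0.3)
        self.theta_beta = np.full(num_channels, 0.5)

    def __call__(self, x):
        # x: feature map of shape (batch, channels, height, width)
        softplus = lambda t: np.log1p(np.exp(t))
        # gamma ~ 1 + N(0, 1) * softplus(theta_gamma), one value per channel
        gamma = 1.0 + self.rng.normal(size=x.shape[1]) * softplus(self.theta_gamma)
        # beta ~ N(0, 1) * softplus(theta_beta), one value per channel
        beta = self.rng.normal(size=x.shape[1]) * softplus(self.theta_beta)
        # Broadcast the per-channel affine transform over spatial dimensions.
        return x * gamma[None, :, None, None] + beta[None, :, None, None]

x = np.ones((2, 4, 3, 3))
ft = FeatureWiseTransformation(num_channels=4)
y = ft(x)
```

In the learning-to-learn setup discussed above, `theta_gamma` and `theta_beta` would be the quantities updated by back-propagating through this sampling-and-modulation step, which is why the MAML-style gradient machinery is needed.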