Closed: valencebond closed this issue 4 years ago
Another question is about "inheriting a good metric". The paper says Meta-Baseline benefits when it uses a metric consistent with a good one in Classifier-Baseline, but the metrics in Classifier-Baseline and Meta-Baseline are nn.Linear() and cosine distance respectively. Although there is some relevance between them, I do not think nn.Linear() and cosine distance are consistent. Maybe your point is that cosine distance is more consistent than L2 or Euclidean distance?
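For concreteness, here is a minimal sketch of the two heads being contrasted (the names, sizes, and temperature are illustrative, not the repository's code). It shows that a cosine head is essentially a bias-free linear layer applied to L2-normalized features with L2-normalized weight rows, which is why the two are related but not identical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, n_base_classes = 512, 64  # illustrative sizes

# Classifier-Baseline training head: plain inner product plus bias.
linear_head = nn.Linear(feat_dim, n_base_classes)

def cosine_logits(features, weight, tau=10.0):
    """Cosine-similarity head: a bias-free linear layer over L2-normalized
    features and L2-normalized weight rows, so each raw logit lies in
    [-1, 1] before the temperature tau scales it."""
    features = F.normalize(features, dim=-1)
    weight = F.normalize(weight, dim=-1)
    return tau * features @ weight.t()

x = torch.randn(4, feat_dim)                          # a batch of embeddings
logits_linear = linear_head(x)                        # unnormalized inner products
logits_cosine = cosine_logits(x, linear_head.weight)  # normalized similarities
```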
Hi, thanks for your interest in our work.
Does this mean that there is no need to continue training after the Classifier-Baseline stage? No. The meta-learning stage should help, as already shown in the experimental results (did you get the same results as those reported in the paper?). Novel-class generalization reaches its peak at different speeds in different settings; sometimes the peak is at epoch 1 because we chose a large epoch size in order to study the long-term trend.
"Inheriting metric" refers to inheriting a evaluation metric of Classifier-Baseline. Linear is the training metric.
Thanks for your explanation.
Thanks for your good paper and code. I have a little confusion about base- and novel-class generalization in Figure 3 and Figure 1b. If I understand correctly, the gap between base- and novel-class generalization arises in the Meta-Baseline stage, after the Classifier-Baseline stage, and novel-class generalization, which is also the performance we care about most, reaches its peak at the first epoch. So does this mean that there is no need to continue training after the Classifier-Baseline stage? I also find that the meta-learning stage in Meta-Baseline only has an effect on miniImageNet, i.e. increasing the training epochs improves performance, but it has no effect on tieredImageNet and ImageNet-800.
If there is something wrong, please correct me.