TL;DR
They propose SKD (Self-supervised Knowledge Distillation), a two-generation training scheme that incorporates knowledge distillation (KD) into few-shot learning. The first-generation model is trained with a rotation-prediction loss in addition to cross-entropy (CE). The second-generation model is then distilled from the first so that augmented images produce the same output. The first-generation model alone already outperforms MAML.
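The two losses described above can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' code: the tiny `Net` backbone, the loss weights `alpha` and temperature `T`, and the specific augmentation are all placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """Hypothetical tiny backbone with a class head and a rotation head
    (the paper uses a much larger network)."""
    def __init__(self, num_classes=5, num_rotations=4):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 32), nn.ReLU())
        self.cls_head = nn.Linear(32, num_classes)    # class logits
        self.rot_head = nn.Linear(32, num_rotations)  # rotation logits (0/90/180/270 deg)

    def forward(self, x):
        z = self.features(x)
        return self.cls_head(z), self.rot_head(z)

def gen1_loss(model, x, y, alpha=1.0):
    """Generation 1: cross-entropy plus self-supervised rotation prediction."""
    b = x.size(0)
    # Build the four rotated copies and the corresponding rotation labels.
    rots = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    rot_labels = torch.arange(4).repeat_interleave(b)
    cls_logits, rot_logits = model(rots)
    ce = F.cross_entropy(cls_logits[:b], y)        # CE on the unrotated images
    rot = F.cross_entropy(rot_logits, rot_labels)  # rotation-prediction loss
    return ce + alpha * rot

def gen2_kd_loss(student, teacher, x, x_aug, T=4.0):
    """Generation 2: distill so augmented views match the teacher's output."""
    with torch.no_grad():
        t_logits, _ = teacher(x)
    s_logits, _ = student(x_aug)
    return F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
```

In a training loop, one would first minimize `gen1_loss`, then freeze the trained model as `teacher` and train a fresh `student` with `gen2_kd_loss` on augmented inputs.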
Why it matters:
Paper URL
https://arxiv.org/abs/2006.09785
Submission Dates(yyyy/mm/dd)
Authors and institutions
Methods
Results
Comments