
Self-supervised Knowledge Distillation for Few-shot Learning #93

Open AkiraTOSEI opened 3 years ago

AkiraTOSEI commented 3 years ago

TL;DR

They propose SKD (Self-supervised Knowledge Distillation), a two-stage method that incorporates knowledge distillation (KD) into few-shot learning. In the first stage, a rotation-prediction loss is added on top of the standard cross-entropy (CE) loss. In the second stage, KD is applied so that the model produces the same output for augmented versions of an image. The first-stage model alone already outperforms MAML.
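Below is a minimal PyTorch-style sketch of the two stages as described above. The rotation-based augmentation, the loss weights (`alpha`, `beta`), the temperature `T`, and the model/head names are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def rotate_batch(x):
    """Make 4 rotated copies (0/90/180/270 degrees) of a batch of NCHW images,
    together with the rotation class labels (0-3)."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return torch.cat(rotations, dim=0), labels

def stage1_loss(backbone, cls_head, rot_head, x, y, alpha=1.0):
    """Stage 1: supervised cross-entropy plus a self-supervised
    rotation-prediction loss (assumed loss weighting)."""
    x_rot, rot_labels = rotate_batch(x)
    feats = backbone(x_rot)
    ce = F.cross_entropy(cls_head(feats[: x.size(0)]), y)   # CE on the unrotated copies
    rot = F.cross_entropy(rot_head(feats), rot_labels)      # predict which rotation was applied
    return ce + alpha * rot

def stage2_loss(student, teacher, x, y, beta=1.0, T=4.0):
    """Stage 2: distill the frozen stage-1 model (teacher) into the student so
    that augmented views of an image yield matching outputs."""
    x_aug, _ = rotate_batch(x)                               # augmented views (rotations here)
    with torch.no_grad():
        t_prob = F.softmax(teacher(x_aug) / T, dim=1)        # teacher targets
    s_logp = F.log_softmax(student(x_aug) / T, dim=1)
    kd = F.kl_div(s_logp, t_prob, reduction="batchmean")     # match teacher on augmented views
    ce = F.cross_entropy(student(x), y)                      # keep the supervised signal
    return ce + beta * kd
```

In use, the stage-1 model would first be trained with `stage1_loss`, then frozen and passed as `teacher` to `stage2_loss`, with the student typically initialized from it.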

Why it matters:

Paper URL

https://arxiv.org/abs/2006.09785

Submission Dates (yyyy/mm/dd)

Authors and institutions

Methods

Results

Comments