TL;DR
They propose SimCLRv2, a semi-supervised method that matches or outperforms fully supervised learning while using only a small fraction of the labels. It consists of three stages: task-agnostic unsupervised pretraining, supervised fine-tuning on the few labeled examples, and self-training (distillation) on unlabeled data. A consistent finding is that larger models are more label-efficient, so bigger is better for this paradigm.
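Of the three stages, the distillation step is the one that exploits unlabeled data: the fine-tuned teacher produces temperature-softened pseudo-labels that a student is trained to match. A minimal sketch of that loss follows; the function names and the temperature value are illustrative assumptions, not the paper's code:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's distribution on an unlabeled example. temperature=2.0 is
    # an illustrative choice, not a value taken from the paper.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))
```

A student whose logits agree with the teacher's incurs a lower loss than one that contradicts them, which is what drives the student toward the teacher's predictions on unlabeled data.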
Why it matters:
Paper URL
https://arxiv.org/abs/2006.10029
Submission Dates (yyyy/mm/dd)
Authors and institutions
Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, Geoffrey Hinton (Google Research, Brain Team)
Methods
Results
Comments