I think it's an interesting paper. We can either discuss basic one-shot learning from another paper or from the Deep Learning book by Goodfellow. In the chapter about Representation Learning, there's a bunch about zero-shot & one-shot learning.
We can also use one of these papers for further input on one-shot learning:
http://vision.stanford.edu/documents/Fei-FeiFergusPerona2006.pdf
https://arxiv.org/pdf/1707.05562.pdf
Just briefly skimming the paper, I found the content very interesting.
I'd like to hear what the foundations and supporting points (e.g. experimental design) for their arguments are.
PDF: https://arxiv.org/abs/1703.03400
Code: https://github.com/cbfinn/maml
Results: https://sites.google.com/view/maml
Reason: Interesting approach to meta-learning that provides fast adaptation to different tasks, presented at ICML 2017. A rough sketch of the inner/outer loop idea follows below.
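To make the "fast adaptation" idea concrete before the discussion: the paper's core structure is an inner loop that adapts the parameters to each task with a few gradient steps and an outer loop that updates the shared initialization. Here is a minimal, first-order sketch of that structure in plain NumPy on a toy 1-D regression problem. The task distribution, learning rates, and single-weight model are made up for illustration; this is not the authors' TensorFlow implementation from the repo above, and it uses the first-order approximation rather than the full second-order meta-gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each 'task' is a linear regression problem y = a * x with its own slope a (toy example)."""
    return rng.uniform(-2.0, 2.0)

def loss_and_grad(w, a, n=10):
    """MSE of the model y_hat = w * x on task `a`, and its gradient w.r.t. w."""
    x = rng.uniform(-1.0, 1.0, size=n)
    y = a * x
    err = w * x - y
    loss = np.mean(err ** 2)
    grad = np.mean(2 * err * x)
    return loss, grad

w = 0.0                       # meta-parameters (a single scalar weight here)
inner_lr, outer_lr = 0.1, 0.01

for step in range(1000):
    meta_grad = 0.0
    tasks = [sample_task() for _ in range(5)]
    for a in tasks:
        # Inner loop: one gradient step adapts w to this task.
        _, g = loss_and_grad(w, a)
        w_adapted = w - inner_lr * g
        # Outer loop (first-order approximation): evaluate the adapted
        # parameters on fresh task data and accumulate that gradient.
        _, g_adapted = loss_and_grad(w_adapted, a)
        meta_grad += g_adapted
    # Meta-update: move the initialization so one inner step works well across tasks.
    w -= outer_lr * meta_grad / len(tasks)
```

The point of the sketch is only the two nested loops; everything that makes the paper interesting (the second-order term, the architectures, the RL experiments) is what we'd want to dig into in the discussion.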
For a valuable discussion, some background in one-shot learning seems useful. I therefore suggest reading a paper on one-shot learning first and then moving on to this paper's approach.