LMU-AIJC / lmu-aijc.github.io

Website through which we share materials created by members of the AI Journal Club at Ludwig Maximilian University of Munich
https://lmu-aijc.github.io

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks #2

Closed georgiaolympia closed 6 years ago

georgiaolympia commented 6 years ago

PDF: https://arxiv.org/abs/1703.03400
Code: https://github.com/cbfinn/maml
Results: https://sites.google.com/view/maml
Reason: Interesting approach to meta-learning that enables fast adaptation to different tasks; presented at ICML 2017.

For a valuable discussion, some background in one-shot learning seems useful. I therefore suggest first reading a paper on one-shot learning and then moving on to this paper's approach.

EmCity commented 6 years ago

I think it's an interesting paper. We can discuss the basics of one-shot learning either from another paper or from the Deep Learning book by Goodfellow; the chapter on Representation Learning covers zero-shot and one-shot learning.

georgiaolympia commented 6 years ago

We can also use one of these papers for further input on one-shot learning:

http://vision.stanford.edu/documents/Fei-FeiFergusPerona2006.pdf
https://arxiv.org/pdf/1707.05562.pdf

changkun commented 6 years ago

After briefly skimming the paper, I found the content very interesting:

  1. it makes no assumptions about the model architecture;
  2. it introduces no extra parameters for meta-learning;
  3. it uses known "knowledge" (the optimization process itself) to optimize the learning process, similar to how humans learn.

I'd like to hear about the foundations and supporting evidence (e.g. the experimental design) behind these claims.
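To make point 3 concrete before the meeting: here is a minimal sketch of the MAML idea on a toy family of linear-regression tasks. Everything here is invented for illustration (the random-line task family, the tiny linear model, the step sizes), and it uses the first-order approximation (the second-derivative term of the full MAML meta-gradient is dropped); the paper trains neural networks with the full update.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    # Tiny linear model y_hat = w[0]*x + w[1], chosen only to keep the
    # sketch self-contained; the paper uses neural networks.
    return w[0] * x + w[1]

def grad_mse(w, x, y):
    # Gradient of mean squared error with respect to w.
    err = predict(w, x) - y
    return np.array([np.mean(2 * err * x), np.mean(2 * err)])

def sample_task():
    # Hypothetical task family: lines y = a*x + b with random slope/intercept.
    a, b = rng.uniform(-1, 1, size=2)
    def data(n):
        x = rng.uniform(-2, 2, size=n)
        return x, a * x + b
    return data

def maml_step(w, alpha=0.05, beta=0.01, n_tasks=8):
    # One meta-update. For each task: take one inner gradient step from the
    # shared initialization w, then accumulate the query-set loss gradient
    # at the adapted parameters (first-order approximation).
    meta_grad = np.zeros_like(w)
    for _ in range(n_tasks):
        data = sample_task()
        x_tr, y_tr = data(10)
        w_adapted = w - alpha * grad_mse(w, x_tr, y_tr)  # inner adaptation
        x_q, y_q = data(10)                              # query set, same task
        meta_grad += grad_mse(w_adapted, x_q, y_q)       # outer gradient
    return w - beta * meta_grad / n_tasks                # meta-update

# Meta-training: the shared initialization drifts toward parameters from
# which a single gradient step adapts well to any task in the family.
w = np.array([2.0, -1.5])
for _ in range(300):
    w = maml_step(w)
```

Note that no extra parameters are introduced (point 2): the only thing being meta-learned is the initialization `w` itself, and the learning signal flows through the tasks' own gradient steps (point 3).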