The original code for the paper "How to train your MAML", along with a replication of the original "Model-Agnostic Meta-Learning" (MAML) paper, in PyTorch.
The previous code used a plain dictionary to store the current parameters (names_weights).
However, apply_inner_loop_update depends heavily on the iteration order of that dictionary's keys and values.
This caused issues #9 and #11, because the order could differ between training time and test time.
The dictionary is therefore replaced with an OrderedDict to preserve its key/value order.
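A minimal sketch of the failure mode (the function name and signature below are illustrative, not the repo's exact API): an inner-loop update that pairs weights with gradients positionally breaks silently if the dictionary's key order is not stable, and OrderedDict pins that order at insertion time.

```python
from collections import OrderedDict

def apply_inner_loop_update(names_weights, names_grads, lr=0.01):
    """Hypothetical inner-loop SGD step: w <- w - lr * g.

    zip() pairs entries by position, so correctness depends on both
    dicts iterating their keys in the same, stable order.
    """
    return OrderedDict(
        (name, w - lr * g)
        for (name, w), g in zip(names_weights.items(), names_grads.values())
    )

# OrderedDict fixes the iteration order at insertion time, so the
# positional pairing above stays consistent between train and test.
weights = OrderedDict([("layer1.weight", 1.0), ("layer1.bias", 0.0)])
grads = OrderedDict([("layer1.weight", 0.5), ("layer1.bias", 0.1)])

updated = apply_inner_loop_update(weights, grads)
assert list(updated.keys()) == list(weights.keys())
```

Note that on Python 3.7+ plain dicts also preserve insertion order, but OrderedDict makes the ordering requirement explicit and keeps the fix valid on older interpreters.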
In my local version, the inner loop optimizer actually uses keys for robustness against this problem. I'll have a look at your proposal and potentially integrate it with my own.