AntreasAntoniou / HowToTrainYourMAMLPytorch

The original code for the paper "How to train your MAML", along with a replication of the original "Model-Agnostic Meta-Learning" (MAML) paper, in PyTorch.
https://arxiv.org/abs/1810.09502

Getting lower accuracy on the Omniglot dataset than reported in the original paper. #20

Open devil10 opened 5 years ago

devil10 commented 5 years ago

When I trained the MAML++ model on the Omniglot dataset (1-shot, 20-way) with seed 0, I got 96.59% accuracy on the test data, which is more than 1% below the accuracy claimed in the original paper. What might be the issue?

AntreasAntoniou commented 5 years ago

The paper results represent the mean and standard deviation of the test accuracy performance over 3 seeds. You might have to do the same to replicate the results.

devil10 commented 5 years ago

I get 96.65 as my mean across three seeds and 0.0587 as my std across three seeds.
PyTorch version: 1.0.1.post2
CUDA version: 9.0.176
cuDNN version: 7402
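For reference, the mean ± std aggregation over seeds can be sketched as below. Only seed 0's 96.59 and the aggregates (96.65, 0.0587) were stated in this thread; the other per-seed values here are hypothetical placeholders for illustration:

```python
import statistics

# Test accuracy per seed. Seed 0's value comes from the thread; the
# values for seeds 1 and 2 are hypothetical placeholders.
per_seed_accuracy = {0: 96.59, 1: 96.68, 2: 96.68}

values = list(per_seed_accuracy.values())
mean_acc = statistics.mean(values)
# Sample standard deviation, as commonly reported alongside the mean.
std_acc = statistics.stdev(values)

print(f"test accuracy: {mean_acc:.2f} +/- {std_acc:.4f}")
```

Whether the paper reports a sample or population standard deviation (or a confidence interval) changes the second number, so it is worth checking which convention the evaluation script uses before comparing.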

AntreasAntoniou commented 5 years ago

I wonder if there are any minor seed differences between different PyTorch versions. I'll need to have a more thorough look.
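Cross-version differences aside, pinning every RNG that touches a training run narrows down what can vary. A minimal sketch (`set_seed` is a hypothetical helper, not part of this repo; the `torch` calls are the standard PyTorch seeding APIs):

```python
import random

def set_seed(seed: int) -> None:
    """Seed the RNGs that typically affect a PyTorch training run."""
    random.seed(seed)
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)            # CPU RNG
        torch.cuda.manual_seed_all(seed)   # all GPU RNGs
        # cuDNN autotuning selects kernels non-deterministically;
        # disabling it trades speed for run-to-run reproducibility.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass

# Re-seeding makes the stream repeat.
set_seed(0)
a = random.random()
set_seed(0)
b = random.random()
assert a == b
```

Even with identical seeds, different PyTorch/cuDNN versions can dispatch different kernels, so exact bit-for-bit replication across versions is not guaranteed.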
