Open devil10 opened 5 years ago
The paper's results are the mean and standard deviation of test accuracy over 3 seeds. You may need to do the same to replicate them.
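For concreteness, a minimal sketch of how the mean and standard deviation over seeds could be computed; the per-seed accuracy values here are hypothetical placeholders, not numbers from the paper or the repo:

```python
from statistics import mean, stdev

# Hypothetical test accuracies (%) from three independent runs,
# keyed by the random seed used for each run.
seed_accuracies = {0: 96.59, 1: 96.70, 2: 96.66}

accs = list(seed_accuracies.values())
print(f"mean = {mean(accs):.2f}")   # mean = 96.65
print(f"std  = {stdev(accs):.4f}")  # sample standard deviation
```

Note that `stdev` is the sample standard deviation (n−1 denominator); whether a paper reports sample or population std can change the second decimal, so it is worth checking which convention was used.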
I get 96.65 as my mean and 0.0587 as my std across three seeds. PyTorch version: 1.0.1.post2, CUDA version: 9.0.176, cuDNN version: 7402.
I wonder whether there are minor seed-related differences between PyTorch versions. I'll need to take a more thorough look.
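On the seed question: even with every RNG source fixed, results can still differ across PyTorch/CUDA/cuDNN versions because kernel implementations change. A minimal sketch of a seed-fixing helper (not the repo's own code, just the standard PyTorch knobs) that removes everything except those version-level differences:

```python
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Fix all common RNG sources for a reproducible run.

    Note: this makes runs repeatable *within* one software stack;
    identical seeds can still produce different numbers across
    PyTorch/CUDA/cuDNN versions, since kernels themselves change.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines
    # Force deterministic cuDNN kernels (may be slower) and disable
    # the autotuner, which can pick different algorithms per run.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(0)
```

With this in place, two runs on the same machine and library versions should match bit-for-bit on CPU; GPU runs can still have small nondeterminism from some ops unless deterministic algorithms are enforced.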
On Wed, 4 Sep 2019 at 18:13, Asad Karim wrote:
When I trained the MAML++ model on the Omniglot dataset (1-shot, 20-way) with seed 0, I got 96.59% accuracy on the test data, which is more than 1% below the accuracy claimed in the original paper. What might be the issue?