yxchng opened this issue 2 years ago
I have the same request. Thanks.
@yxchng did you get an answer? Do you have a requirements.txt or something similar to share?
Do you mind sharing your system environment used to reproduce the results in this repo?
My 5 complete runs (from pretraining through finetuning, following the instructions in PRETRAIN.md and FINETUNE.md) of the base, large, and huge models give the following results:
- average-base: 83.454
- average-large: 85.637
- average-huge: 86.882
These fall noticeably short of the reported numbers (base: 83.6 | large: 85.9 | huge: 86.9), especially for base and large.
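For concreteness, the per-model gaps between the reported numbers and the run averages above can be computed directly (a quick sketch; the values are just the figures quoted in this thread):

```python
# Reported results from the repo vs. averages over my 5 runs (values from this thread).
reported = {"base": 83.6, "large": 85.9, "huge": 86.9}
observed = {"base": 83.454, "large": 85.637, "huge": 86.882}

for model in reported:
    gap = reported[model] - observed[model]
    print(f"{model}: reported {reported[model]} vs observed {observed[model]} (gap {gap:+.3f})")
```

The gaps for base (~0.15) and large (~0.26) are an order of magnitude larger than for huge (~0.02), which is why base and large stand out.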
Do you have any idea what might have caused the differences?
Based on this thread https://github.com/facebookresearch/mae/issues/30, this appears to be a widespread problem. Perhaps specific versions of certain libraries or frameworks are needed to reproduce the results.
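If anyone wants to share their environment, one simple way is to record the versions of the packages that are most likely to matter. A minimal sketch (the package list here is my guess at the relevant ones, not something the repo specifies):

```python
import importlib.metadata as md

# Packages most likely to affect reproducibility of MAE training results.
# This list is an assumption; extend it (or just use `pip freeze`) as needed.
packages = ["torch", "torchvision", "timm", "numpy"]

for pkg in packages:
    try:
        print(f"{pkg}=={md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed")
```

Alternatively, `pip freeze > requirements.txt` captures the full environment, though it includes many packages irrelevant to the results.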
The instructions in FINETUNE.md differ from the hyperparameters used in the paper, though.
Which should I use? The results above were reproduced by following the instructions in PRETRAIN.md and FINETUNE.md exactly.