Closed brando90 closed 1 year ago
Reproducibility is not broken: shuffling the classes the way we do closely follows the official MAML implementation.
If you want exactly the same splits across runs, you can set the state of the RNG before instantiating the benchmark, or implement your own benchmark with the splits you want.
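For illustration, here is a minimal sketch of the "set the RNG state first" approach. It assumes the class shuffle uses Python's global `random` module (as `random.shuffle` does in the linked `omniglot_benchmark.py` line); the function name and defaults here are my own, not part of learn2learn's API.

```python
import random

def reproducible_class_split(num_classes=1623, seed=42):
    """Sketch: seed Python's global RNG before shuffling class indices,
    so the resulting split is identical across runs. 1623 is the number
    of Omniglot character classes; the seed value is arbitrary."""
    random.seed(seed)  # pin the global RNG that random.shuffle draws from
    classes = list(range(num_classes))
    random.shuffle(classes)
    return classes
```

Calling `random.seed(...)` immediately before instantiating the benchmark should pin the shuffle in the same way, since both draw from the same global RNG state.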
@seba-1511 I don't think this is a MAML thing, as far as I understand. It means we aren't using the same dataset on each run/experiment. In mini-ImageNet, for example, the splits for the images are fixed in advance. Why would we do this seemingly (to me) arbitrary randomization? Doesn't it make the experiments incomparable?
Since I don't know whether anyone will respond, I also asked here: https://stats.stackexchange.com/questions/592229/why-does-the-maml-split-the-omniglot-data-set-randomly-on-every-run and here: https://www.reddit.com/r/pytorch/comments/y3ftqz/is_the_reproducibility_of_omniglot_completely/
Doesn't this make experiments across papers incomparable? It could be that the original MAML paper did something wrong, so quoting it doesn't really help (me).
I'm commenting that line out on my end until this is explained.
I saw this line of code:
https://github.com/learnables/learn2learn/blob/0b9d3a3d540646307ca5debf8ad9c79ffe975e1c/learn2learn/vision/benchmarks/omniglot_benchmark.py#L37
when I was unioning my datasets, and I noticed that the labels were not consecutive in Omniglot... is this an accident, or how are we guaranteed that Omniglot's reproducibility is not broken? (Since this code doesn't set the seeds, it must be up to the user... perhaps I'm totally wrong, though.)
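For what it's worth, here is the quick check I mean by "not consecutive": whether the distinct labels form an unbroken range 0..K-1. This is a standalone helper I wrote for illustration, not a learn2learn function.

```python
def labels_are_consecutive(labels):
    """Return True if the distinct labels are exactly 0, 1, ..., K-1
    with no gaps (e.g. after a shuffled class split, label IDs may
    skip values, which breaks naive dataset unioning)."""
    distinct = sorted(set(labels))
    return distinct == list(range(len(distinct)))
```

Running this on the labels of a unioned dataset makes the gap problem visible: for example, `[0, 1, 1, 2]` passes but `[0, 2, 3]` fails because label 1 is missing.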