This behavior cannot be controlled through manual seeding, which is what makes me uncomfortable. I think everything should be in a deterministic order before it reaches e.g. torch.DataLoader, or at least be repeatable under the same seed.
A simple solution is to change this line to
self.seq_dict = OrderedDict(sorted(self._cache_meta().items()))
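For illustration, here is a minimal sketch of why sorting pins the order. The dict contents below are made up, and _cache_meta() is assumed to return a plain dict mapping sequence names to metadata:

```python
from collections import OrderedDict

# Made-up stand-in for self._cache_meta(): a plain dict whose
# iteration order is not guaranteed on Python <= 3.5.
cache_meta = {
    "ILSVRC2015_train_00000002": {"frames": 120},
    "ILSVRC2015_train_00000001": {"frames": 96},
}

# Sorting the items before building the OrderedDict pins the
# iteration order regardless of the interpreter's dict behavior.
seq_dict = OrderedDict(sorted(cache_meta.items()))

print(list(seq_dict))
# ['ILSVRC2015_train_00000001', 'ILSVRC2015_train_00000002'] on every run
```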
Thanks for reporting the randomness issue and proposing a solution. Using OrderedDict is a great idea for reproducibility. We'll add this in a later revision.
Hi @ZhouYzzz, the seq_dict is already an OrderedDict in the old implementation. We have fixed the randomness by replacing line 73 with
seq_dict = json.load(f, object_pairs_hook=OrderedDict)
You can get the revision using pip install --upgrade git+https://github.com/got-10k/toolkit.git@master.
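For illustration, a minimal sketch of what object_pairs_hook does, using an inline JSON string instead of the toolkit's cache file:

```python
import json
from collections import OrderedDict

# Inline JSON standing in for the cached metadata file.
doc = '{"b": 1, "a": 2, "c": 3}'

plain = json.loads(doc)                                   # plain dict: order not guaranteed on Python <= 3.5
ordered = json.loads(doc, object_pairs_hook=OrderedDict)  # keys kept in file order

print(list(ordered))  # ['b', 'a', 'c'], stable across runs and Python versions
```

json.load(f, object_pairs_hook=OrderedDict) applies the same hook while reading from a file object.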
That is great, thank you!
I have been troubled by the randomness of ImageNetVID, and finally found the reason. In some versions of Python, e.g. Python 3.5, the cache files in JSON format are loaded in an order that changes between runs. This does not happen under Python 2.7 or Python 3.6. It kept me from reproducing my experiments, since a different order of the training data between runs leads to different gradients in the early epochs when training SiamFC. I suggest caching the dataset in a more stable way, e.g. using numpy or cPickle, and using OrderedDict or similar.

Details: running the same code gives ILSVRC2015_train_00000000.0 twice when using Python 3.6 (completely in order), gives ILSVRC2015_train_00646001.0 twice when using Python 2.7 (not in order, but repeatable), and gives ILSVRC2015_train_00053009.1 and ILSVRC2015_train_00047000.2 when using Python 3.5 (different between runs).
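For illustration, a minimal sketch of the version dependence (not the toolkit's code):

```python
import json

# Inline JSON standing in for the cache file.
doc = '{"ILSVRC2015_train_00000000": 1, "ILSVRC2015_train_00646001": 2}'

# json.loads returns a plain dict. On CPython 3.3-3.5, the iteration
# order of str keys depends on hash randomization, which changes per
# process unless the PYTHONHASHSEED environment variable is fixed;
# seeding inside the program (e.g. torch.manual_seed) has no effect
# on it. On CPython 3.6+ insertion order is preserved, and on
# CPython 2.7 the order is scrambled but deterministic, since hash
# randomization is off by default there.
print(next(iter(json.loads(doc))))  # may differ between runs on Python 3.5
```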