Closed chandramouli-sastry closed 5 months ago
Hello! Thanks so much for publicly sharing the checksums. It's incredibly helpful to be able to check that datasets were correctly downloaded.
I've run the `checksumdir` command on my directories, and I get completely different values (incl. ogbg, etc.). Assuming not all my datasets were downloaded incorrectly, I'm curious whether there could be another reason for the difference in checksum values. Do you have any idea what could be causing this? Thanks!!
Hi! Thanks for giving this a try! Yes, there is certainly something else going on -- but I don't know what could be causing it! Did you check the output of the tree commands? Could you share your checksums of a couple of simple ones like ogbg or criteo1tb? I don't know how to debug this, but perhaps having your checksum values might help in some way :)
Hi @chandramouli-sastry, thanks a lot for adding this. I think this could be very beneficial. However, I am also getting different checksums, while the tree output (and file sizes and numbers) are identical. Below, I am posting my checksums, perhaps they are identical to @tfaod ?
* 4808a6652bc4d129c1638ac55b219bfe
* 3764a73cdc19d7572c042ce19e59c74b (but this is after generating the wmt_sentencepiece_model)
* cd8c6452d9fa5fe89d050df969e98f70
* `checksumdir imagenet_v2`: a7f24a2250469706827eb2dff360590d
* aeb5217d11610ab6c679df572faadc7e. But here I also get a different output when running `tree --filelimit 30`: it reports that 885 (not 347) entries exceed the filelimit, which also matches the total of 885 files I find via `find -type f | wc -l`.
* 071e7582d63c92e51797f3f11967fb74 (but this is after generating spm_model.vocab). The `tree` output is identical to what you reported, minus the additional spm_model.vocab file.

@tfaod did you check the number of files (e.g. via `find -type f | wc -l`) and the total file size (e.g. via `du -sch --apparent-size librispeech/`) vs. what we report? This was intended as a first check of whether all the data downloading worked.
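The file-count and apparent-size checks mentioned above can also be scripted instead of run by hand. A minimal sketch in Python (the function name and structure are my own, mirroring `find -type f | wc -l` and `du -s --apparent-size`):

```python
import os

def file_stats(root):
    """Count regular files under `root` and sum their apparent sizes,
    mirroring `find -type f | wc -l` and `du -s --apparent-size`."""
    count, total = 0, 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            count += 1
            total += os.path.getsize(os.path.join(dirpath, name))
    return count, total
```

If these counts and sizes already disagree with the README, the download is incomplete and checksum differences are expected anyway.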
Thanks for the quick reply!
EDIT: re: @fsschneider -- I have the same checksum for criteo1tb, but not for ogbg or wmt (with the generated model file). I'll generate the rest of the checksums to compare.
I've checked all our 1/ file counts (`find -type f | wc -l`) and 2/ directory sizes (`du -sch --apparent-size wmt/`), and they are all consistent with the MLCommons-provided values.
I've included my outputs for the tree and checksumdir commands for ogbg, wmt, and criteo1tb below.
There are 2 differences in the tree results:
1) criteo1tb displays `885 entries exceeds filelimit` rather than the `347` from the README
2) wmt has an additional file, `wmt_sentencepiece_model`, which the README lists in the final directory but not in its tree output
## ogbg
* `tree $DATA_DIR/ogbg --filelimit 30`
```
└── ogbg_molpcba
    └── 0.1.3
        ├── dataset_info.json
        ├── features.json
        ├── metadata.json
        ├── ogbg_molpcba-test.tfrecord-00000-of-00001
        ├── ogbg_molpcba-train.tfrecord-00000-of-00008
        ├── ogbg_molpcba-train.tfrecord-00001-of-00008
        ├── ogbg_molpcba-train.tfrecord-00002-of-00008
        ├── ogbg_molpcba-train.tfrecord-00003-of-00008
        ├── ogbg_molpcba-train.tfrecord-00004-of-00008
        ├── ogbg_molpcba-train.tfrecord-00005-of-00008
        ├── ogbg_molpcba-train.tfrecord-00006-of-00008
        ├── ogbg_molpcba-train.tfrecord-00007-of-00008
        └── ogbg_molpcba-validation.tfrecord-00000-of-00001

2 directories, 13 files
```
* `checksumdir $DATA_DIR/ogbg: 88420b94329a574d9308360dacf0778f`
## criteo1tb
* `tree $DATA_DIR/criteo1tb --filelimit 30`
```
criteo1tb [885 entries exceeds filelimit, not opening dir]

0 directories, 0 files
```
* `checksumdir $DATA_DIR/criteo1tb: aeb5217d11610ab6c679df572faadc7e`
## wmt
* `tree wmt --filelimit 30`
```
wmt
├── wmt14_translate
│ └── de-en
│ └── 1.0.0
│ ├── dataset_info.json
│ ├── features.json
│ ├── wmt14_translate-test.tfrecord-00000-of-00001
│ ├── wmt14_translate-train.tfrecord-00000-of-00016
│ ├── wmt14_translate-train.tfrecord-00001-of-00016
│ ├── wmt14_translate-train.tfrecord-00002-of-00016
│ ├── wmt14_translate-train.tfrecord-00003-of-00016
│ ├── wmt14_translate-train.tfrecord-00004-of-00016
│ ├── wmt14_translate-train.tfrecord-00005-of-00016
│ ├── wmt14_translate-train.tfrecord-00006-of-00016
│ ├── wmt14_translate-train.tfrecord-00007-of-00016
│ ├── wmt14_translate-train.tfrecord-00008-of-00016
│ ├── wmt14_translate-train.tfrecord-00009-of-00016
│ ├── wmt14_translate-train.tfrecord-00010-of-00016
│ ├── wmt14_translate-train.tfrecord-00011-of-00016
│ ├── wmt14_translate-train.tfrecord-00012-of-00016
│ ├── wmt14_translate-train.tfrecord-00013-of-00016
│ ├── wmt14_translate-train.tfrecord-00014-of-00016
│ ├── wmt14_translate-train.tfrecord-00015-of-00016
│ └── wmt14_translate-validation.tfrecord-00000-of-00001
├── wmt17_translate
│ └── de-en
│ └── 1.0.0
│ ├── dataset_info.json
│ ├── features.json
│ ├── wmt17_translate-test.tfrecord-00000-of-00001
│ ├── wmt17_translate-train.tfrecord-00000-of-00016
│ ├── wmt17_translate-train.tfrecord-00001-of-00016
│ ├── wmt17_translate-train.tfrecord-00002-of-00016
│ ├── wmt17_translate-train.tfrecord-00003-of-00016
│ ├── wmt17_translate-train.tfrecord-00004-of-00016
│ ├── wmt17_translate-train.tfrecord-00005-of-00016
│ ├── wmt17_translate-train.tfrecord-00006-of-00016
│ ├── wmt17_translate-train.tfrecord-00007-of-00016
│ ├── wmt17_translate-train.tfrecord-00008-of-00016
│ ├── wmt17_translate-train.tfrecord-00009-of-00016
│ ├── wmt17_translate-train.tfrecord-00010-of-00016
│ ├── wmt17_translate-train.tfrecord-00011-of-00016
│ ├── wmt17_translate-train.tfrecord-00012-of-00016
│ ├── wmt17_translate-train.tfrecord-00013-of-00016
│ ├── wmt17_translate-train.tfrecord-00014-of-00016
│ ├── wmt17_translate-train.tfrecord-00015-of-00016
│ └── wmt17_translate-validation.tfrecord-00000-of-00001
└── wmt_sentencepiece_model
6 directories, 41 files
```
* `checksumdir $DATA_DIR/wmt: 5921e54f13a9968d31dc2e3eec4f9f34`
Thanks @fsschneider for generating the checksums! I think all of this suggests that the data downloaded on kasimbeg-8 in /home/kasimbeg/data is incomplete/corrupted -- I had to re-download and extract fastmri, and that one matches the checksum generated by Frank, so that's good! It's also good that the checksum obtained by @tfaod on criteo matches Frank's! I wanted to avoid including the vocab files in the checksum because the serialized data might not be consistent across runs -- but I'm not sure!
I mainly only wrote this script and copy-pasted the outputs into the README:
```python
import glob
import os

for dirname in glob.glob("data/*"):
    os.system(f"tree {dirname} --filelimit 30")
    os.system(f"checksumdir {dirname}")
```
I think that we could then append the output of this code on the data directory we believe is correctly downloaded at the end of the README?
@chandramouli-sastry did you detect any differences on kasimbeg-8 between the directory structure and file sizes that @fsschneider reported in the README before running your script? If so, can you please document them in this thread?
To resolve this I think we should use @fsschneider's data setup as the source of truth. @fsschneider could you start a new PR that contains just the hash commands and results?
I just downloaded ogbg twice on the same computer (within seconds). Even without any apparent differences, the checksum provided by `checksumdir` doesn't match. So I am assuming (judging by the fact that we (mainly) see differences for TFDS datasets) that TFDS uses a timestamp (from downloading) or something similarly non-deterministic within the `.tfrecord` files.

As a result, I don't think the checksums from `checksumdir` provide meaningful information. I would suggest closing this PR and not providing checksums.
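For anyone who wants to localize where two otherwise-identical downloads diverge, one option is to hash each file individually rather than the whole directory, then diff the two mappings. A minimal sketch (the function name and chunk size are my own, not part of checksumdir):

```python
import hashlib
import os

def per_file_md5(root):
    """Map relative path -> md5 of file contents, so two downloads of the
    same dataset can be diffed file-by-file to find the differing bytes."""
    out = {}
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            out[os.path.relpath(path, root)] = h.hexdigest()
    return out
```

Comparing `per_file_md5("download1")` against `per_file_md5("download2")` would show whether the non-determinism is confined to the `.tfrecord` files, as hypothesized above.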
@priyakasimbeg I closed this PR. Feel free to reopen if you disagree.
Addresses https://github.com/mlcommons/algorithmic-efficiency/issues/647