mlcommons / algorithmic-efficiency

MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
https://mlcommons.org/en/groups/research-algorithms/
Apache License 2.0
321 stars 62 forks

Dataset Checksums #708

Closed chandramouli-sastry closed 5 months ago

chandramouli-sastry commented 5 months ago

Addresses https://github.com/mlcommons/algorithmic-efficiency/issues/647

github-actions[bot] commented 5 months ago

MLCommons CLA bot All contributors have signed the MLCommons CLA ✍️ ✅

tfaod commented 5 months ago

Hello! Thanks so much for publicly sharing the checksums. It's incredibly helpful to be able to check that datasets were correctly downloaded.

I've run the checksumdir command on my directories and get completely different values (including for ogbg and others). Assuming not all my datasets were downloaded incorrectly, I'm curious whether there could be another reason for the difference in checksum values. Do you have any idea what could be causing this? Thanks!!

chandramouli-sastry commented 5 months ago

Hi! Thanks for giving this a try! Yes, there is certainly something else going on -- but I don't know what could be causing it! Did you check the output of the tree commands? Could you share your checksums for a couple of simple ones, like ogbg or criteo1tb? I don't know how to debug this, but perhaps having your checksum values might help in some way :)
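For context, what checksumdir computes can be approximated with a short stdlib-only sketch (this is an approximation of its default behavior, not the tool's actual code): an md5 over the sorted per-file md5 digests, so the result is independent of traversal order but sensitive to any byte-level change.

```python
import hashlib
import os

def dir_checksum(root):
    """Approximate a checksumdir-style hash: md5 over the sorted
    md5 digests of every file's contents under root."""
    file_hashes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.md5()
            with open(path, "rb") as f:
                # Hash in chunks so large dataset files fit in memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            file_hashes.append(h.hexdigest())
    combined = hashlib.md5()
    for digest in sorted(file_hashes):
        combined.update(digest.encode("utf-8"))
    return combined.hexdigest()
```

The upshot for debugging: even a single differing byte anywhere in the tree produces a completely different top-level hash, which is why a mismatch alone doesn't tell you which file is responsible.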

fsschneider commented 5 months ago

Hi @chandramouli-sastry, thanks a lot for adding this. I think this could be very beneficial. However, I am also getting different checksums, while the tree output (and file sizes and numbers) are identical. Below, I am posting my checksums, perhaps they are identical to @tfaod ?

@tfaod did you check the number of files (e.g. via `find -type f | wc -l`) and the total file size (e.g. via `du -sch --apparent-size librispeech/`) vs. what we report? This was intended as a first check of whether all the data downloading worked.
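Those two sanity checks can be wrapped in a small shell sketch (`DATA_DIR` is a placeholder here, and `--apparent-size` assumes GNU du):

```shell
#!/bin/sh
# Quick sanity checks on a dataset directory before comparing checksums.
# DATA_DIR is a placeholder; point it at your dataset root.
DATA_DIR="${DATA_DIR:-.}"

# 1) Count regular files under the dataset directory.
find "$DATA_DIR" -type f | wc -l

# 2) Total apparent size (sums file sizes rather than allocated disk
#    blocks, so the number is comparable across filesystems).
du -sch --apparent-size "$DATA_DIR"
```

If either number disagrees with the values in the README, the download is incomplete and checksum comparison is moot.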

tfaod commented 5 months ago

Thanks for the quick reply!

EDIT: re: @fsschneider - I get the same checksum on criteo1tb, but not on ogbg or wmt (with the generated model file). I'll generate the rest of the checksums to compare.

I've checked all our 1) file counts (`find -type f | wc -l`) and 2) directory sizes (`du -sch --apparent-size wmt/`), and they are all consistent with the MLCommons-provided values.

I've included my outputs for the tree and checksumdir commands for ogbg, wmt, and criteo1tb. There are two differences in the tree results: 1) criteo1tb displays "885 entries exceeds filelimit" rather than the 347 from the README; 2) wmt has an additional file, wmt_sentencepiece_model, which the README has in the final directory but not in the tree output.

## ogbg

2 directories, 13 files

* `checksumdir $DATA_DIR/ogbg: 88420b94329a574d9308360dacf0778f`

## criteo1tb
* `tree $DATA_DIR/criteo1tb --filelimit 30`

criteo1tb [885 entries exceeds filelimit, not opening dir]

0 directories, 0 files

* `checksumdir $DATA_DIR/criteo1tb: aeb5217d11610ab6c679df572faadc7e`

## wmt
* `tree wmt --filelimit 30`
```
wmt
├── wmt14_translate
│   └── de-en
│       └── 1.0.0
│           ├── dataset_info.json
│           ├── features.json
│           ├── wmt14_translate-test.tfrecord-00000-of-00001
│           ├── wmt14_translate-train.tfrecord-00000-of-00016
│           ├── wmt14_translate-train.tfrecord-00001-of-00016
│           ├── wmt14_translate-train.tfrecord-00002-of-00016
│           ├── wmt14_translate-train.tfrecord-00003-of-00016
│           ├── wmt14_translate-train.tfrecord-00004-of-00016
│           ├── wmt14_translate-train.tfrecord-00005-of-00016
│           ├── wmt14_translate-train.tfrecord-00006-of-00016
│           ├── wmt14_translate-train.tfrecord-00007-of-00016
│           ├── wmt14_translate-train.tfrecord-00008-of-00016
│           ├── wmt14_translate-train.tfrecord-00009-of-00016
│           ├── wmt14_translate-train.tfrecord-00010-of-00016
│           ├── wmt14_translate-train.tfrecord-00011-of-00016
│           ├── wmt14_translate-train.tfrecord-00012-of-00016
│           ├── wmt14_translate-train.tfrecord-00013-of-00016
│           ├── wmt14_translate-train.tfrecord-00014-of-00016
│           ├── wmt14_translate-train.tfrecord-00015-of-00016
│           └── wmt14_translate-validation.tfrecord-00000-of-00001
├── wmt17_translate
│   └── de-en
│       └── 1.0.0
│           ├── dataset_info.json
│           ├── features.json
│           ├── wmt17_translate-test.tfrecord-00000-of-00001
│           ├── wmt17_translate-train.tfrecord-00000-of-00016
│           ├── wmt17_translate-train.tfrecord-00001-of-00016
│           ├── wmt17_translate-train.tfrecord-00002-of-00016
│           ├── wmt17_translate-train.tfrecord-00003-of-00016
│           ├── wmt17_translate-train.tfrecord-00004-of-00016
│           ├── wmt17_translate-train.tfrecord-00005-of-00016
│           ├── wmt17_translate-train.tfrecord-00006-of-00016
│           ├── wmt17_translate-train.tfrecord-00007-of-00016
│           ├── wmt17_translate-train.tfrecord-00008-of-00016
│           ├── wmt17_translate-train.tfrecord-00009-of-00016
│           ├── wmt17_translate-train.tfrecord-00010-of-00016
│           ├── wmt17_translate-train.tfrecord-00011-of-00016
│           ├── wmt17_translate-train.tfrecord-00012-of-00016
│           ├── wmt17_translate-train.tfrecord-00013-of-00016
│           ├── wmt17_translate-train.tfrecord-00014-of-00016
│           ├── wmt17_translate-train.tfrecord-00015-of-00016
│           └── wmt17_translate-validation.tfrecord-00000-of-00001
└── wmt_sentencepiece_model

6 directories, 41 files
```
chandramouli-sastry commented 5 months ago

Thanks @fsschneider for generating the checksums! I think all of this suggests that the data downloaded on kasimbeg-8 in /home/kasimbeg/data is incomplete or corrupted -- I had to re-download and extract fastmri, and that one matches the checksum generated by Frank, so that's good! It's also good that the checksum obtained by @tfaod on criteo matches Frank's! I wanted to avoid including the vocab files in the checksum because the serialized data might not be consistent across runs -- but I'm not sure!

I essentially just ran this script and copy-pasted the outputs into the README:

```python
import glob
import os

# Print the directory tree and checksum for every dataset under data/.
for dirname in glob.glob("data/*"):
    os.system(f"tree {dirname} --filelimit 30")
    os.system(f"checksumdir {dirname}")
```

I think we could then append the output of this script, run on a data directory we believe is correctly downloaded, to the end of the README?
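To localize which files actually differ between two machines, a per-file manifest is more informative than a single directory hash. A stdlib-only sketch (the function names here are illustrative, not part of the repo):

```python
import hashlib
import os

def file_manifest(root):
    """Map each file's path (relative to root) to the md5 of its bytes."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                manifest[rel] = hashlib.md5(f.read()).hexdigest()
    return manifest

def diff_manifests(ours, theirs):
    """Report files missing on one side or hashing differently."""
    missing = sorted(set(theirs) - set(ours))   # in theirs, not in ours
    extra = sorted(set(ours) - set(theirs))     # in ours, not in theirs
    changed = sorted(k for k in ours.keys() & theirs.keys()
                     if ours[k] != theirs[k])
    return {"missing": missing, "extra": extra, "changed": changed}
```

Comparing two such manifests (e.g. one posted in the thread, one generated locally) would immediately show whether the mismatch comes from a handful of files or from every .tfrecord.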

priyakasimbeg commented 5 months ago

@chandramouli-sastry did you detect any differences between the data on kasimbeg-8 and the directory structure and file sizes that @fsschneider reported in the README before running your script? If so, can you please document them in this thread.

To resolve this I think we should use @fsschneider's data setup as the source of truth. @fsschneider could you start a new PR that contains just the hash commands and results?

fsschneider commented 5 months ago

I just downloaded ogbg twice on the same computer, within seconds of each other. Even without any apparent differences, the checksums provided by checksumdir don't match. Since we (mainly) see differences for TFDS datasets, I assume that TFDS embeds a download timestamp, or something similarly non-deterministic, within the .tfrecord files.
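The timestamp hypothesis is easy to illustrate: two byte streams that are identical except for an embedded creation time hash differently even though the payload is the same. A toy demonstration (the record layout below is invented for illustration; it is not the real TFRecord format):

```python
import hashlib

def make_record(payload, created_at):
    # Toy record: a header line with a creation timestamp, then the
    # payload. Real .tfrecord files are framed protobufs; this only
    # shows why embedded write-time metadata breaks byte-level hashes.
    return f"created_at={created_at}\n".encode() + payload

payload = b"identical training examples"
rec_a = make_record(payload, "2024-01-01T00:00:00")
rec_b = make_record(payload, "2024-01-01T00:00:05")

# Byte-level hashes differ even though the payload is identical...
print(hashlib.md5(rec_a).hexdigest() == hashlib.md5(rec_b).hexdigest())  # False

# ...while hashing only the payload (skipping the header line) is stable.
strip = lambda rec: rec.split(b"\n", 1)[1]
print(hashlib.md5(strip(rec_a)).hexdigest() == hashlib.md5(strip(rec_b)).hexdigest())  # True
```

This is why any nondeterminism in serialization, however small, makes whole-directory checksums unreliable as a download check.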

As a result, I don't think that the checksums by checksumdir provide meaningful information. I would suggest closing this PR and not providing checksums.

fsschneider commented 5 months ago

@priyakasimbeg I closed this PR. Feel free to reopen if you disagree.