EleutherAI / lm-evaluation-harness

A framework for few-shot evaluation of language models.
https://www.eleuther.ai
MIT License

Pile tasks on big-refactor use dataset_names from old dataset loader that don't exist on HF #731

Open yeoedward opened 1 year ago

yeoedward commented 1 year ago

Task example: https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/lm_eval/tasks/pile/pile_arxiv.yaml#L7

HF dataset: https://huggingface.co/datasets/EleutherAI/pile

Original dataset loader prior to big-refactor: https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/datasets/pile/pile.py

@haileyschoelkopf mentioned that using this loading script should work if we upload it to HF and point the Pile tasks to that new dataset.
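For a minimal reproduction of the failure, the following sketch assumes the `dataset_name` on line 7 is one of the old loader's config names (e.g. `pile_arxiv`; the exact value may differ):

```python
# Repro sketch; "pile_arxiv" is assumed from the old loading script's
# config names and may not match line 7 of the YAML exactly.
from datasets import load_dataset

# The big-refactor task YAML asks the hub dataset for a config that only
# the old loading script defines, so this raises a ValueError about an
# unknown BuilderConfig:
load_dataset("EleutherAI/pile", "pile_arxiv")
```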

pratyushmaini commented 1 year ago

Adding the old loading script ("pile.py") at "lm-evaluation-harness/EleutherAI/the_pile/the_pile.py" does indeed fix the issue. Additionally, the test split in pile_arxiv.yaml (line 9) needs to be changed to "test".
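If I understand the mechanism correctly, this works because `datasets` resolves a relative directory containing a same-named loading script before falling back to the hub, so the task's `dataset_path` picks up the local script when the harness is run from the repo root. A hedged sketch of the equivalent direct call:

```python
# Hedged sketch: with the old script saved as
# EleutherAI/the_pile/the_pile.py relative to the working directory,
# datasets loads it locally instead of querying the hub. The config name
# "pile_arxiv" is assumed from the old loader.
from datasets import load_dataset

ds = load_dataset("EleutherAI/the_pile", "pile_arxiv", split="test")
print(len(ds))
```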

This recipe runs pretty fast, but I observe a strange trend: the first few samples are processed slowly (which is understandable), the middle samples are processed extremely fast, and then the last few samples again take a long time. When using "accelerate launch" the run hangs almost indefinitely at the end (I eventually killed the process after waiting a few minutes), whereas a single GPU does let me get the final output.

pratyushmaini commented 1 year ago

Just an update to the above: since the Pile is no longer publicly hosted, you may want to point `_URLS` at a local copy of the Pile. This is line 44 of the current pile.py:

```python
_URLS = {
    "validation": "/data/the_pile/val.jsonl.zst",
    "test": "/data/the_pile/test.jsonl.zst",
}
```
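For context, a typical `datasets` loading script consumes `_URLS` in `_split_generators` roughly like this. This is a generic sketch, not the actual pile.py; note that `dl_manager.download_and_extract` accepts local paths and just decompresses the `.zst` archives, so no network access is needed:

```python
import datasets

_URLS = {
    "validation": "/data/the_pile/val.jsonl.zst",
    "test": "/data/the_pile/test.jsonl.zst",
}

class Pile(datasets.GeneratorBasedBuilder):
    # Generic sketch of how a loading script consumes _URLS; the real
    # pile.py also defines configs, _info, and _generate_examples.
    def _split_generators(self, dl_manager):
        # Local paths are passed through (and .zst archives decompressed)
        # rather than downloaded.
        files = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={"filepath": files["validation"]},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={"filepath": files["test"]},
            ),
        ]
```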

Also, there have been some changes to the repo since the last comment; the file should now be placed at "lm-evaluation-harness/EleutherAI/pile/pile.py".