Open LHB-kk opened 5 months ago
Hi!
You should be able to forgo `--check_integrity` and things will run fine (scores won't be affected). That flag runs some of our test files, which may have since been updated or moved.
I will look into fixing the root cause of this problem!
Hi, I did the following:

```shell
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .

lm_eval --model hf \
    --model_args pretrained=~/llama-2/ \
    --tasks minerva_math,gsm8k \
    --gen_kwargs top_k=1 \
    --batch_size auto:4 \
    --output_path result/ \
    --device cuda:0 \
    --limit 10 \
    --check_integrity \
    --log_samples \
    --use_cache cache_db/ \
    --verbosity DEBUG \
    --trust_remote_code \
    --seed 42
```

but got this error:

```
ERROR: file or directory not found: /data/liuhuanbin/code/assessments/lm-evaluation-harness/tests/test_version_stable.py
```
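Following the suggestion above, a sketch of the same invocation with `--check_integrity` dropped (all other flags kept exactly as in the original command; paths like `~/llama-2/` are the reporter's own and are assumed unchanged):

```shell
# Same run as before, minus --check_integrity, which only executes the
# harness's internal test files and does not affect evaluation scores.
lm_eval --model hf \
    --model_args pretrained=~/llama-2/ \
    --tasks minerva_math,gsm8k \
    --gen_kwargs top_k=1 \
    --batch_size auto:4 \
    --output_path result/ \
    --device cuda:0 \
    --limit 10 \
    --log_samples \
    --use_cache cache_db/ \
    --verbosity DEBUG \
    --trust_remote_code \
    --seed 42
```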
Can you help me? Looking forward to your reply. Thanks!