OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
Do you mind providing the exact command or script used to produce the evaluation results? #60
Closed
yxchng closed 1 year ago
Please follow the serving and evaluation documentation of EasyLM to run the evaluation we did. A much simpler way is to use lm-eval-harness; just remember to turn off the fast tokenizer, since the transformers fast tokenizer has a bug.
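A minimal sketch of the lm-eval-harness route, with the fast tokenizer disabled. The repo id `openlm-research/open_llama_7b`, the task list, and the exact flag for disabling the fast tokenizer are assumptions not confirmed in this thread; flag names vary across harness versions, so check your installed version's README before running.

```shell
# Sketch only: run from a clone of lm-evaluation-harness.
# Assumptions: HF repo id openlm-research/open_llama_7b, and that this
# harness version forwards use_fast=False to the tokenizer (verify for
# your version -- newer releases use a different flag name).
python main.py \
  --model hf-causal \
  --model_args pretrained=openlm-research/open_llama_7b,use_fast=False \
  --tasks hellaswag,arc_easy \
  --device cuda:0
```

Disabling the fast tokenizer matters because the fast LLaMA tokenizer in transformers had a bug at the time, which skews tokenization and therefore the reported scores.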