OpenLMLab / LOMO

LOMO: LOw-Memory Optimization
MIT License

Instructions for evaluation datasets #64

Closed KaiLv69 closed 1 year ago

KaiLv69 commented 1 year ago

Hi,

Thanks for your help. I can run finetuning with the Alpaca dataset now; this should be the AlpacaFarm setup from the paper, right?

By the way, would you mind providing the instructions for the other datasets, such as MMLU, BBH, GSM8K, and HumanEval?

Thanks, Xuan

Originally posted by @shawnricecake in https://github.com/OpenLMLab/LOMO/issues/63#issuecomment-1785545543

KaiLv69 commented 1 year ago

@shawnricecake The instructions can be found in the configs/datasets folder, e.g. https://github.com/KaiLv69/opencompass/blob/main/configs/datasets/gsm8k/gsm8k_gen_collie.py

KaiLv69 commented 1 year ago

We only train the model on the Alpaca dataset and evaluate the trained model on various benchmarks.

shawnricecake commented 1 year ago

> We only train the model on the Alpaca dataset and evaluate the trained model on various benchmarks.

lol, awesome!

And I will try to follow the instructions to evaluate on the other datasets.