allenai / OLMo

Modeling, training, eval, and inference code for OLMo
https://allenai.org/olmo
Apache License 2.0

Finetuning config file #609

Open joellliu opened 4 weeks ago

joellliu commented 4 weeks ago

❓ The question

Hi, I am wondering if you can provide your config file for finetuning on the Tulu V2 dataset? It would be helpful for reproducing the finetuning results. In addition, have you tried SFT on the 1B model? If so, how were the results? Thank you!

2015aroras commented 3 weeks ago

Have you looked at https://github.com/allenai/open-instruct? It might have what you're looking for.

joellliu commented 2 weeks ago

> Have you looked at https://github.com/allenai/open-instruct? It might have what you're looking for.

@2015aroras Thanks for your reply! It seems the open-instruct repo is mainly for DPO. I wonder if you can provide the supervised finetuning config for training on the Tulu V2 dataset. Thank you!