epfLLM / Megatron-LLM

distributed trainer for LLMs

Can you send me the complete parameters related to training llama2 using finetune.py? #8

Closed · brewswang closed this issue 1 year ago

brewswang commented 1 year ago

Can you send me the complete parameters related to training llama2 using finetune.py?

andreaskoepf commented 1 year ago

You can find the parameters in finetune.sh#L67-L104. In general the process seems to be weights2megatron -> parallelize -> finetune (you need an indexed dataset).
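
For anyone landing here later, below is a rough shell sketch of that three-step pipeline. The script paths, flag names, and checkpoint/data paths are assumptions inferred from this thread, not copied from the repo; the authoritative argument list is in finetune.sh#L67-L104, so verify every flag against the repo docs before running.

```bash
# Hypothetical sketch of the weights2megatron -> parallelize -> finetune pipeline.
# Script names and flags are assumptions -- check finetune.sh#L67-L104 and the
# repo documentation for the exact arguments.

# 1) Convert the original llama2 checkpoint to Megatron format.
python weights2megatron/weights2megatron.py llama2 \
    --size=7 \
    --out=/checkpoints/llama2-7b-megatron \
    --cache-dir=/checkpoints/llama2-7b

# 2) Reshard ("parallelize") the checkpoint for your tensor/pipeline layout.
python tools/checkpoint_util.py \
    --model_type=llama2 \
    --load_dir=/checkpoints/llama2-7b-megatron \
    --save_dir=/checkpoints/llama2-7b-tp2-pp1 \
    --target_tensor_parallel_size=2 \
    --target_pipeline_parallel_size=1

# 3) Build the indexed dataset that finetune.py expects.
python tools/preprocess_data.py \
    --input=/data/train.jsonl \
    --output_prefix=/data/train \
    --tokenizer_type=SentencePieceTokenizer \
    --vocab_file=/checkpoints/llama2-7b/tokenizer.model

# 4) Launch finetuning (one node, 2 GPUs in this example).
torchrun --nproc_per_node=2 finetune.py \
    --model_name=llama2 \
    --load=/checkpoints/llama2-7b-tp2-pp1 \
    --save=/checkpoints/llama2-7b-finetuned \
    --data_path=/data/train \
    --tensor_model_parallel_size=2 \
    --pipeline_model_parallel_size=1
```

The tensor/pipeline parallel sizes in steps 2 and 4 must match, and the product of the two must divide the number of GPUs you launch with.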