hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Do you support full-parameter pre-training? #5459

Closed lingchensanwen closed 2 months ago

lingchensanwen commented 2 months ago

Hi team,

I'm wondering if you support full-parameter pre-training. I've checked the files but could only find support for LoRA pre-training and full-parameter SFT. If you do support it, could you point me to where? Thank you very much!

hiyouga commented 2 months ago

stage: pt
finetuning_type: full
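
For context, these two keys go into an ordinary LLaMA-Factory training config and are launched with `llamafactory-cli train <config>.yaml`. Below is a minimal sketch of such a config, assuming a Llama-2 7B base model and the repo's `c4_demo` pre-training corpus; the model path, dataset name, output directory, and hyperparameters are placeholders to adapt to your own setup.

```yaml
### model
model_name_or_path: meta-llama/Llama-2-7b-hf   # placeholder base model

### method
stage: pt                  # pre-training objective (next-token prediction)
do_train: true
finetuning_type: full      # update all parameters instead of LoRA adapters

### dataset
dataset: c4_demo           # any corpus registered in data/dataset_info.json
cutoff_len: 2048

### output
output_dir: saves/llama2-7b/full/pt
logging_steps: 10
save_steps: 500

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-5
num_train_epochs: 1.0
bf16: true
```

Note that full-parameter training of a 7B model will not fit on a single consumer GPU; in practice you would add a DeepSpeed ZeRO config (e.g. a `deepspeed:` key pointing at one of the JSON files under `examples/deepspeed/`) and run across multiple GPUs.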