Open
GuodongQi opened this issue 6 months ago
Thanks for the nice work! Do the authors plan to release code for fine-tuning or LoRA?

For fine-tuning from a pretrained checkpoint, all you need to do is add `--pretrained_checkpoint`.

We currently have no plans to incorporate LoRA, but it should be relatively straightforward to add LoRA training to the code using Hugging Face's PEFT library.
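As a rough sketch of what that could look like, the snippet below wraps a model in LoRA adapters using PEFT's `LoraConfig` and `get_peft_model`. The base model (`facebook/opt-125m`), the `target_modules` names, and the optimizer setup are stand-ins for illustration, not this repo's actual API; adapt them to the project's own model loader and training loop.

```python
# Minimal sketch of LoRA fine-tuning via Hugging Face's PEFT library.
# "facebook/opt-125m" is only a stand-in for this repo's pretrained
# checkpoint, and the target_modules names are assumptions; adapt both
# to the project's actual model definition.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank updates
    lora_alpha=32,                        # scaling applied to the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights require grad

# Train as usual; the optimizer only needs the trainable (LoRA) parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

After training, `model.save_pretrained(...)` writes only the small adapter weights, which can later be merged back into the base model with `merge_and_unload()`.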