Hi everyone,
I recently fine-tuned a large language model (LLM) using Unsloth, and I now want to further fine-tune the same model with additional data.
How can I do that?
@HARISHSENTHIL You can reload the saved model, skip `get_peft_model`, and continue training! See https://github.com/unslothai/unsloth/wiki#loading-lora-adapters-for-continued-finetuning
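A minimal sketch of what that looks like, assuming your first run saved the LoRA adapters with `model.save_pretrained("lora_model")` — the folder name, dataset file, and hyperparameters below are placeholders, and the exact `SFTTrainer` arguments depend on your `trl` version:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Reload the previously fine-tuned model: pointing model_name at the saved
# LoRA folder loads the base model together with the trained adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lora_model",   # folder from the earlier save_pretrained call (placeholder)
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Do NOT call FastLanguageModel.get_peft_model again — the LoRA adapters are
# already attached to the reloaded model.

# Continue training on the additional data (placeholder file and settings).
new_dataset = load_dataset("json", data_files = "new_data.jsonl", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = new_dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        num_train_epochs = 1,
        learning_rate = 2e-4,
        output_dir = "outputs_continued",
    ),
)
trainer.train()
```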
Thank you!