unslothai / unsloth

Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0
12.57k stars 817 forks

Further finetuning #721

Open rezzie-rich opened 1 week ago

rezzie-rich commented 1 week ago

If I finetune an unsloth model and push it to Hugging Face, can I further finetune the trained model later using unsloth? If so, how? The examples only load models using `FastLanguageModel`. Can you please give an example?

Also, unsloth seems to be the best tool available for training with the least memory. It would be great if you guys made a Udemy course for unsloth. I'm confident it would be greatly appreciated by the community.

danielhanchen commented 1 week ago

Interesting idea on Udemy!! You're looking for https://github.com/unslothai/unsloth/wiki#loading-lora-adapters-for-continued-finetuning :)
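For anyone else landing here, a minimal sketch of what that wiki section describes, assuming you previously pushed your LoRA adapter to a Hugging Face repo (the repo name below is a placeholder): passing the adapter repo directly to `FastLanguageModel.from_pretrained` reloads both the base model and the adapter, so you can run another training pass on top of it.

```python
# Sketch of continued finetuning from a pushed LoRA adapter.
# "your-username/your-lora-adapter" is a placeholder for the repo
# you created with model.push_to_hub(...) after the first run.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="your-username/your-lora-adapter",  # LoRA adapter repo, not the base model
    max_seq_length=2048,
    load_in_4bit=True,
)

# The returned model already has the adapter attached, so you can
# hand it to your trainer (e.g. TRL's SFTTrainer) exactly as in the
# first finetuning run and training continues from the saved weights.
```

The key point is that you do not call `FastLanguageModel.get_peft_model` again; the adapter loaded from the repo is reused as-is.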

rezzie-rich commented 1 week ago

Thank you, but it's still kinda confusing 😭.

Having a detailed tutorial, hence the course idea, would be great for newcomers.

danielhanchen commented 6 days ago

Oh so sorry - hmm, we might make some YouTube videos :)

rezzie-rich commented 6 days ago

Will be much appreciated by many 🫶 lol