unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Does unsloth work on non-LoRA cases? #458

Open rangehow opened 6 months ago

rangehow commented 6 months ago

I have tried unsloth + QLoRA; it's cool and brings a considerable speedup and VRAM reduction. But after searching the repo, website, and benchmarks, I'm not sure whether this repo is also useful for full fine-tuning. Could someone familiar with unsloth offer a suggestion about this? Thanks for your wonderful work : )
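For reference, a minimal sketch of the QLoRA setup being described, using unsloth's `FastLanguageModel` API; the specific checkpoint and hyperparameter values here are illustrative assumptions, not a recommendation from the maintainers:

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model (the "Q" in QLoRA).
# The checkpoint name is an assumed example.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters; only these low-rank matrices are trained,
# which is what yields the speedup and VRAM reduction.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                      # LoRA rank (assumed value)
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
)
```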

danielhanchen commented 6 months ago

Sadly, full finetuning isn't yet supported. Some Unsloth community members have tried doing it, and it does converge, albeit with the layernorms not trained.
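For context, here is a rough PyTorch sketch of what that community setup amounts to: every weight trainable except the normalization layers. This is plain PyTorch, not an unsloth API, and the `"norm"` name matching is an assumption that happens to hold for Llama-style module naming (`input_layernorm`, `post_attention_layernorm`, `norm`):

```python
import torch

def mark_full_finetune_except_norms(model: torch.nn.Module) -> None:
    """Enable gradients on all parameters except normalization weights,
    mirroring the community setup where layernorms stay frozen."""
    for name, param in model.named_parameters():
        # Assumed heuristic: layernorm/RMSNorm parameter names contain
        # "norm" in Llama-style models; adjust for other architectures.
        param.requires_grad = "norm" not in name.lower()
```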