unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

[Feature Request] Mixtral training support #31

Open · epicfilemcnulty opened 11 months ago

epicfilemcnulty commented 11 months ago

For reference, LLaMA-Factory claims their toolkit can QLoRA fine-tune Mixtral with 28 GB of VRAM.
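
For anyone landing here in the meantime, a bare-bones QLoRA setup for Mixtral with plain transformers, peft, and bitsandbytes looks roughly like the sketch below. This is a minimal sketch, not LLaMA-Factory's recipe and not Unsloth's API; the model id, LoRA rank, and target modules are assumptions, and actual VRAM usage depends on sequence length, batch size, and which modules get adapters.

```python
# Minimal QLoRA sketch for Mixtral (assumed model id and LoRA settings).
# Not LLaMA-Factory's exact configuration; VRAM will vary with your settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mixtral-8x7B-v0.1"

# 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA on the attention projections only; adding the MoE expert layers
# increases trainable parameters and memory use.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```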

danielhanchen commented 11 months ago

@epicfilemcnulty We're working on it for a later release!!