unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Unsloth currently does not support multi GPU setups in unsloth-2024.8 #859

Open songkq opened 3 months ago

songkq commented 3 months ago

@danielhanchen Hi, I remember that unsloth-2024.1 supported multi-GPU training of Llama with DeepSpeed, without raising this RuntimeError('Unsloth currently does not support multi GPU setups - but we are working on it!'). Am I misremembering, or did I use it incorrectly before?

Which is the latest version of Unsloth that supports multi-GPU training?
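For context, the RuntimeError quoted above is typically raised by a guard that refuses to run once more than one worker is detected. A minimal sketch of such a guard (the function name and the exact condition are hypothetical, not Unsloth's actual code — real frameworks usually also inspect `torch.cuda.device_count()`):

```python
import os

def check_single_gpu() -> None:
    """Raise if this process appears to be part of a multi-GPU run.

    Hypothetical sketch: WORLD_SIZE is the environment variable that
    distributed launchers such as deepspeed and torchrun set for each
    worker process; a single-process run leaves it unset.
    """
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    if world_size > 1:
        raise RuntimeError(
            "Unsloth currently does not support multi GPU setups - "
            "but we are working on it!"
        )

# Single-process run: no error is raised.
check_single_gpu()
```

This would explain why the same code appears to work when launched as a plain single-process script but fails under a multi-GPU launcher.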

danielhanchen commented 3 months ago

@songkq Oh sorry, multi GPU is currently still in beta - we're trying it out with a few Unsloth community members. No, I don't think DeepSpeed ever worked - the results might have been incorrect (but I'm unsure), since Unsloth never had multi GPU support. Tbh I'm not certain, since it was a while back now. Sorry about the issue!

songkq commented 3 months ago

unsloth-2024.1-site-packages.tar.zip

@danielhanchen Thanks for the explanation. Here is the site-packages of my unsloth-2024.1 Python environment. I did successfully use LLaMA-Factory, Unsloth, and DeepSpeed ZeRO-2 to train Llama on multiple GPUs. Could you please check it out?
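For reference, the ZeRO stage-2 part of such a setup is just a DeepSpeed JSON config file passed to the launcher. A minimal sketch of generating one from Python (the file name is illustrative, and the "auto" placeholders assume a trainer such as the HF Trainer that LLaMA-Factory builds on, which fills those values in itself):

```python
import json

# Minimal DeepSpeed ZeRO stage-2 config. "auto" defers batch size,
# accumulation, and precision choices to the surrounding trainer.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "zero_optimization": {
        "stage": 2,              # shard optimizer state + gradients
        "overlap_comm": True,    # overlap reduce with backward pass
        "contiguous_gradients": True,
    },
    "bf16": {"enabled": "auto"},
}

with open("ds_zero2.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

Whether Unsloth's kernels actually produced correct gradients under this sharding is exactly the open question in this thread.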

thusinh1969 commented 1 month ago


Does Unsloth with LLaMA-Factory really support multi-GPU now, @danielhanchen? That would be great. Are you releasing multi-GPU support now? It would help a lot, and we are happy to pay for it.

Thanks, Steve