unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0
16.51k stars 1.14k forks

multi-gpu error #724

Open beginerJSM opened 3 months ago

beginerJSM commented 3 months ago

I ran your provided llama-3-8b code with only one GPU, but a multi-GPU error occurs. The error info is as follows:

```
RuntimeError                              Traceback (most recent call last)
Cell In[8], line 1
----> 1 trainer_stats = trainer.train()

File :36, in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)

RuntimeError: Unsloth currently does not work on multi GPU setups - sadly we are a 2 brother team so enabling it will require much more work, so we have to prioritize. Please understand! We do have a separate beta version, which you can contact us about! Thank you for your understanding and we appreciate it immensely!
```

danielhanchen commented 3 months ago

Sorry, as the message states, we haven't supported multi GPU setups yet!
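A common cause of this error on a machine that physically has several GPUs is that all of them are visible to the process, even if the training itself only needs one. A workaround (not confirmed by the maintainers in this thread, just a standard CUDA practice) is to expose a single GPU via `CUDA_VISIBLE_DEVICES` before torch/unsloth are imported:

```python
import os

# Must be set BEFORE torch / unsloth are imported, i.e. before CUDA
# initializes, otherwise it has no effect on device visibility.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose only GPU 0 to this process

# Then import and use Unsloth as usual, e.g.:
# from unsloth import FastLanguageModel
# model, tokenizer = FastLanguageModel.from_pretrained(...)
```

With only one device visible, `torch.cuda.device_count()` reports 1 and Unsloth's single-GPU check should pass. The same can be done from the shell: `CUDA_VISIBLE_DEVICES=0 python train.py`.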

songkq commented 2 months ago

@danielhanchen Hi, I remember that unsloth-2024.1 supported multi-GPU training of llama with DeepSpeed without raising this RuntimeError('Unsloth currently does not support multi GPU setups - but we are working on it!'). Am I misremembering, or was I using it incorrectly before?

danielhanchen commented 2 months ago

Sorry - oh yep, I saw you made another issue. To be honest, I don't think DeepSpeed's results are correct, since I haven't verified them myself. In the meantime, multi GPU is still in a beta mode with some community members, so it's still unstable!