unslothai / unsloth

Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0
15.76k stars 1.07k forks

Any solution for MultiGPU #107

Open gotzmann opened 8 months ago

gotzmann commented 8 months ago

I'm new to the land of LLM fine-tuning, and after trying LLaMA-Factory and Axolotl I've started adopting Unsloth for its better performance on memory-limited cards like the RTX A6000 with 48 GB.

But now I've reached the point where a single GPU is too limiting for me, and I need to find a way to use Unsloth with multiple GPUs or move elsewhere.

Could you share whether it's possible to use Unsloth with Accelerate or DeepSpeed in a multi-GPU configuration?

I'm happy to wait for the native multi-GPU support release; I just want to start with any other workable solution right now.
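For context, this is the kind of launch setup I mean. It's a generic Hugging Face Accelerate multi-GPU invocation, not anything Unsloth-specific (the script name `train.py` is just a placeholder), and per the replies below it isn't expected to work with Unsloth yet:

```shell
# One-time interactive setup: pick multi-GPU, number of processes,
# and optionally a DeepSpeed config.
accelerate config

# Launch the same training script across 2 GPUs via DDP.
# train.py is a placeholder for an ordinary TRL/transformers training script.
accelerate launch --multi_gpu --num_processes 2 train.py
```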

danielhanchen commented 8 months ago

@gotzmann Thanks for using Unsloth again!! :) Sadly, multi-GPU isn't supported yet - we're working on it for a future release in the OSS version.

gotzmann commented 8 months ago

@danielhanchen

Do you think it's possible with LLaMA-Factory? Seems like there's some hope there:

https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-Comparison#nvidia-a100--2

danielhanchen commented 8 months ago

@gotzmann We're actively working with the LLaMA-Factory team. For now it's in experimental mode: there are intermittent random segfaults, and I haven't confirmed myself that the training losses match exactly, i.e. whether the results are even accurate/correct. So please wait patiently - news is coming very soon :))

MUZAMMILPERVAIZ commented 7 months ago

Is multi-gpu available or not?

danielhanchen commented 7 months ago

@MUZAMMILPERVAIZ It's under active development. So currently no.

m626zNq commented 5 months ago

@danielhanchen Any news yet?

kdcyberdude commented 4 months ago

Do you have any projected timelines for this?

danielhanchen commented 4 months ago

No, sorry, not currently - there are so many new model releases and bugs that I can't keep up, so I have to prioritize. Sadly, Unsloth's team is just 2 people (me and my bro), and I primarily focus on algos - so please be patient! Apologies again!

chintan-ushur commented 4 months ago

@danielhanchen -- Since one of Unsloth's primary strengths lies in fine-tuning, and multi-GPU is one of its most important missing features, I think we should prioritize this.

I can contribute to / assist with the multi-GPU backlog, thanks.