gotzmann opened this issue 8 months ago
@gotzmann Thanks for using Unsloth again!! :) Sadly multi-GPU is not yet supported - we're working on it for a future release in the OSS version
@danielhanchen
Do you think it's possible with LLaMA-Factory? Seems like there's some hope there:
https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-Comparison#nvidia-a100--2
@gotzmann We're actively working with the Llama Factory team - for now it's in experimental mode - there will be intermittent seg faults randomly for now, and I haven't confirmed myself whether the training losses match exactly, i.e. whether the results are even accurate / correct - so please wait patiently for news coming very soon :))
Is multi-gpu available or not?
@MUZAMMILPERVAIZ It's under active development. So currently no.
@danielhanchen Any news yet?
Do you have any projected timelines for this?
No, sorry, currently not - there are so many new model releases and bugs that I can't keep up, so I have to prioritize. Sadly Unsloth's team is just 2 people (me and my bro), and I primarily focus on algos - so please be patient! Apologies again!
@danielhanchen -- As one of Unsloth's primary strengths lies in fine-tuning, and multi-GPU is one of its most important missing features, I think we should prioritize this.
I can contribute to / assist with the multi-GPU backlog, thanks.
I'm new to the land of LLM fine-tuning, and after trying LLaMA-Factory and Axolotl I've started adopting Unsloth for its better performance on memory-limited cards like the RTX A6000 with 48 GB.
But now I've reached the point where a single GPU is too limiting, and I need to find a way to use Unsloth with multiple GPUs or move elsewhere.
Could you share whether it's possible to use Unsloth with Accelerate or DeepSpeed in a multi-GPU configuration?
While I'm waiting for the native multi-GPU support release, I just want to start using any workable solution right now.
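For reference, this is the standard Hugging Face Accelerate multi-GPU launch pattern in general - it comes from Accelerate's own CLI, not from anything Unsloth-specific, and whether Unsloth's patched models train correctly under it is exactly what's unconfirmed in this thread. The script name `train.py` and the GPU count are placeholders:

```shell
# Generate a default distributed config interactively
# (written to the Accelerate cache as default_config.yaml):
accelerate config

# Or launch directly, overriding the config on the command line.
# --multi_gpu enables PyTorch distributed data parallelism;
# --num_processes should match the number of GPUs on the machine.
accelerate launch --multi_gpu --num_processes 2 train.py
```

A run that starts but later hits the intermittent seg faults mentioned above, or produces different losses than single-GPU training, would match the "experimental, unverified" status described earlier in this thread.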