unslothai / unsloth

Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

[Feature Request] DDP #127

Open nivibilla opened 7 months ago

nivibilla commented 7 months ago

Wanted to make an issue for this instead of constantly asking in Discord.

I saw the other ticket for multi-GPU fp16 training, which is also nice. But DDP would let users scale training that already runs on a single GPU out to multiple GPUs for near-linear speedup.
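For reference, this is roughly what a plain PyTorch DDP training loop looks like when launched with `torchrun`. It's a minimal sketch with a toy linear model and random data standing in for a real finetuning setup, not Unsloth's API:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE for each spawned process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy stand-ins; in practice this would be the model being finetuned
    model = torch.nn.Linear(128, 2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 2, (1024,)))
    sampler = DistributedSampler(dataset)  # shards the data across ranks
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()   # DDP all-reduces gradients across ranks here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with e.g. `torchrun --nproc_per_node=4 train_ddp.py` (filename hypothetical), each GPU gets its own process and data shard, and gradients are averaged during `backward()`, which is where the near-linear scaling comes from.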

danielhanchen commented 7 months ago

@nivibilla We're actively prepping a later release that will most likely bring DDP to the OSS version :) We're still figuring out licensing and distribution methods, so we're sorting those out first :)