unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0
18.64k stars 1.31k forks

how to use multiGPU? #1332

Open Zuozhuo opened 3 days ago

Zuozhuo commented 3 days ago

I didn’t see this option in the official Jupyter notebook provided.

Cirr0e commented 2 days ago

Hi there! Thank you for asking about multiGPU support. Let me clarify the current situation:

Currently, Unsloth officially supports only single GPU operations. While multiGPU support is a highly requested feature, it's not yet available in the public release. Here's what you need to know:

  1. **Current status:**
     - Unsloth is optimized for single-GPU usage
     - The library will raise a `RuntimeError` if it detects a multi-GPU setup
  2. **Future plans:**
     - MultiGPU support is actively being developed
     - It's planned for release later this year
     - A beta version exists but is not publicly available yet
  3. **Available options:** for now, if you need to train with multiple GPUs, you have a few alternatives:
     - Use a single GPU with Unsloth's optimized implementation
     - Consider other frameworks that currently support multiGPU training
     - Wait for the upcoming official multiGPU release
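If you're on a multi-GPU machine and just want Unsloth to run on one card, a common workaround is to restrict which devices the process can see via `CUDA_VISIBLE_DEVICES` before any CUDA library is imported. This is a minimal sketch (not an official Unsloth recipe); the device index `"0"` is an example, pick whichever GPU you want:

```python
# Sketch: pin this process to a single GPU so only one device is visible.
# Must run before torch (or unsloth) is imported, since CUDA device
# enumeration happens at import/initialization time.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # example index; use the GPU you want

# Later imports will then see at most one GPU, e.g.:
# import torch
# torch.cuda.device_count()  # at most 1 with the setting above
```

Equivalently, you can set it from the shell (`CUDA_VISIBLE_DEVICES=0 python train.py`) and leave your script untouched.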

The development team is prioritizing making Unsloth the best single-GPU finetuning library before expanding to multiGPU support, since single-GPU training covers the majority of users' needs.

You can follow issue #543 for updates on the multiGPU feature development.

Please let me know if you have any other questions about the current capabilities or would like recommendations for your specific use case!