Great work! But I've noticed that the current implementation seems to only support single-GPU training. Is that correct? If so, do you have any plans to extend support for multi-GPU training in the future? Looking forward to your response. Thanks!
Yes, we currently only support a single computing device (such as a GPU or accelerator). We do plan to integrate multi-device training techniques such as FSDP from PyTorch and LoRAPP (from our m-LoRA).
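For reference, here is a minimal sketch of what FSDP-based multi-GPU training typically looks like in PyTorch; it is illustrative only, not this project's implementation, and `build_model` is a hypothetical placeholder for whatever constructs the model:

```python
# Minimal FSDP sketch, for illustration only -- not this project's code.
# Assumes launch via `torchrun --nproc_per_node=NUM_GPUS train.py`.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    # `build_model` is a hypothetical placeholder.
    model = build_model().to(local_rank)

    # Wrapping with FSDP shards parameters, gradients, and optimizer
    # state across all participating GPUs.
    model = FSDP(model)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    # ... training loop proceeds as in the single-GPU case ...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```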