[X] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
[ ] My own task or dataset (give details below)
Reproduction
I'm training a customized MoE language model using 8 GPUs on one node, and it works fine with Accelerate alone (without DeepSpeed). However, when I enable DeepSpeed (both ZeRO stage 1 and stage 2), training hangs at the first gradient synchronization and eventually fails with an NCCL timeout after a few minutes.
The MoE is built on the XGLM architecture, and the task is language modeling.
DeepSpeed works fine when I train a dense model (not MoE).
I also set `deepspeed_moe_layer_cls_names` to my MoE block class, but it doesn't seem to help.
I use the `accelerate launch` command to run my experiments.
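For reference, this is roughly the Accelerate config I'm launching with (a sketch; the class name `MyMoEBlock` and the exact ZeRO settings are placeholders for my actual setup):

```yaml
# accelerate config excerpt -- hypothetical values
distributed_type: DEEPSPEED
num_processes: 8
deepspeed_config:
  zero_stage: 2                                # also tried stage 1
  gradient_accumulation_steps: 1
  deepspeed_moe_layer_cls_names: MyMoEBlock    # my custom MoE block class
```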
My guess is that with MoE, not all GPUs touch the same parameters (experts) in a given forward pass, so the ranks end up waiting for each other's gradients during synchronization.
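To make the hypothesis concrete, here is a toy illustration (plain Python, no real NCCL involved, and the router is entirely made up): each "rank" routes its tokens to a subset of experts, so the set of parameters that receive gradients differs across ranks. A synchronous allreduce that expects a gradient for every parameter on every rank would then block on the experts some ranks never used.

```python
# Toy illustration of the suspected MoE gradient-sync mismatch.
# Each rank only produces gradients for the experts its tokens were
# routed to, so the gradient sets can differ across ranks.

NUM_EXPERTS = 4

def used_experts(rank, token_ids):
    # Hypothetical top-1 router: expert index depends on token and rank.
    return {(tok + rank) % NUM_EXPERTS for tok in token_ids}

rank0 = used_experts(0, [0, 1])   # experts with gradients on rank 0
rank1 = used_experts(1, [0, 1])   # experts with gradients on rank 1

# Experts for which only one rank has gradients -> a blocking collective
# over all parameters would wait on the other rank indefinitely.
mismatch = rank0 ^ rank1
print(sorted(mismatch))  # -> [0, 2]
```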
Expected behavior
Gradient synchronization and training proceed normally, and we are able to train an MoE (sparse network) with DeepSpeed.
System Info
transformers version: 4.39.3

Who can help?
@pacman100