huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Training hangs at the first gradient syncing of an MoE model while using deepspeed #30911

Open negar-foroutan opened 1 month ago

negar-foroutan commented 1 month ago

System Info

Who can help?

@pacman100

Information

Tasks

Reproduction

I'm training a customized MoE language model on 8 GPUs on a single node, and it works fine with Accelerate alone (without DeepSpeed). However, when I enable DeepSpeed (both ZeRO stage 1 and stage 2), training hangs at the first gradient synchronization and ends with an NCCL timeout after a few minutes. The MoE is built on the XGLM architecture and the task is language modeling. DeepSpeed works fine when I train a dense (non-MoE) model.
I also set deepspeed_moe_layer_cls_names to my MoE block class, but it doesn't seem to have any effect. I launch my experiments with the accelerate launch command. My guess is that the problem with MoE is that not all GPUs use the same parameters (experts) in a given forward pass, so the GPUs end up waiting for each other to receive all gradients.
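One commonly suggested workaround for this kind of hang in custom MoE blocks, sketched below as an illustration of the hypothesis above rather than something from the original report, is to give every expert a zero-weighted contribution on every step, so that each rank produces a gradient for every expert parameter and the collective gradient reduction never stalls waiting for tensors that were never produced. The `ToyMoE` class and its sizes are hypothetical stand-ins, not the reporter's model:

```python
import torch
import torch.nn as nn


class ToyMoE(nn.Module):
    """Toy top-1 routed MoE block, used only to illustrate the workaround."""

    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(hidden_size, num_experts)
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_size, hidden_size) for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, hidden_size)
        top1 = self.router(x).argmax(dim=-1)  # chosen expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i
            if mask.any():
                out[mask] = expert(x[mask])
            else:
                # No token on this rank picked expert i in this step. Touch the
                # expert with a zero-weighted dummy pass so it still receives
                # (zero) gradients and the cross-rank gradient reduction sees
                # the same set of gradient tensors on every rank.
                out = out + 0.0 * expert(x[:1]).sum()
        return out
```

For plain DDP the analogous knob is `find_unused_parameters=True`; whether the DeepSpeed ZeRO stages need a similar hint for a custom MoE block is exactly the question raised above, and neither approach is confirmed to resolve this particular hang.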

Expected behavior

Gradient syncing and training should continue properly, so that an MoE (sparse) model can be trained with DeepSpeed.

amyeroberts commented 2 weeks ago

cc @SunMarc @muellerzr