microsoft / DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
https://www.deepspeed.ai/
Apache License 2.0

[BUG] [ERROR] [launch.py:321:sigkill_handler] [xxx] exits with return code = -9 #4890

Open xinbingzhe opened 6 months ago

xinbingzhe commented 6 months ago

Describe the bug

[ERROR] [launch.py:321:sigkill_handler] [xxx] exits with return code = -9

My script trains a Hugging Face Transformers LLaMA model without any problem, but it fails once I replace the MLP layers with DeepSpeed MoE layers.

The error occurs after training for 5 or 6 steps.
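
For context, the MLP-to-MoE swap looks roughly like the sketch below (simplified; the module name MoEBlock, the hidden size, and the expert counts are placeholders, not my exact code). It wraps the original dense MLP as the expert inside deepspeed.moe.layer.MoE:

import torch.nn as nn
from deepspeed.moe.layer import MoE

class MoEBlock(nn.Module):
    # Simplified sketch: hidden size and expert counts are placeholders.
    def __init__(self, hidden_size=4096, num_experts=8, ep_size=4):
        super().__init__()
        expert = nn.Sequential(            # the original dense MLP becomes the expert
            nn.Linear(hidden_size, 4 * hidden_size),
            nn.SiLU(),
            nn.Linear(4 * hidden_size, hidden_size),
        )
        self.moe = MoE(hidden_size=hidden_size,
                       expert=expert,
                       num_experts=num_experts,
                       ep_size=ep_size,    # expert-parallel group size
                       k=1)                # top-1 gating

    def forward(self, hidden_states):
        # DeepSpeed's MoE layer returns (output, aux_loss, exp_counts)
        output, _aux_loss, _exp_counts = self.moe(hidden_states)
        return output

# Each transformer layer's MLP is then replaced, e.g.:
# for layer in model.model.layers:
#     layer.mlp = MoEBlock(hidden_size=model.config.hidden_size)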

Error info

[I ProcessGroupWrapper.cpp:562] [Rank 0] Running collective: CollectiveFingerPrint(SequenceNumber=1277, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 1] Running collective: CollectiveFingerPrint(SequenceNumber=1275, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 0] Running collective: CollectiveFingerPrint(SequenceNumber=1275, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 3] Running collective: CollectiveFingerPrint(SequenceNumber=1275, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 2] Running collective: CollectiveFingerPrint(SequenceNumber=1278, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 3] Running collective: CollectiveFingerPrint(SequenceNumber=1278, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 1] Running collective: CollectiveFingerPrint(SequenceNumber=1278, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 2] Running collective: CollectiveFingerPrint(SequenceNumber=1275, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 0] Running collective: CollectiveFingerPrint(SequenceNumber=1278, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 3] Running collective: CollectiveFingerPrint(SequenceNumber=1279, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 1] Running collective: CollectiveFingerPrint(SequenceNumber=1279, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 2] Running collective: CollectiveFingerPrint(SequenceNumber=1279, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 0] Running collective: CollectiveFingerPrint(SequenceNumber=1279, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 0] Running collective: CollectiveFingerPrint(SequenceNumber=1276, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 1] Running collective: CollectiveFingerPrint(SequenceNumber=1276, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 3] Running collective: CollectiveFingerPrint(SequenceNumber=1276, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 2] Running collective: CollectiveFingerPrint(SequenceNumber=1276, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 0] Running collective: CollectiveFingerPrint(SequenceNumber=1277, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 1] Running collective: CollectiveFingerPrint(SequenceNumber=1277, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 3] Running collective: CollectiveFingerPrint(SequenceNumber=1277, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 2] Running collective: CollectiveFingerPrint(SequenceNumber=1277, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 1] Running collective: CollectiveFingerPrint(SequenceNumber=1278, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 0] Running collective: CollectiveFingerPrint(SequenceNumber=1278, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 3] Running collective: CollectiveFingerPrint(SequenceNumber=1278, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 2] Running collective: CollectiveFingerPrint(SequenceNumber=1278, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 1] Running collective: CollectiveFingerPrint(SequenceNumber=1279, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 0] Running collective: CollectiveFingerPrint(SequenceNumber=1279, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 3] Running collective: CollectiveFingerPrint(SequenceNumber=1279, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 2] Running collective: CollectiveFingerPrint(SequenceNumber=1279, OpType=ALLREDUCE, TensorShape=[], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[2024-01-02 20:23:21,458] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 118161
[I ProcessGroupWrapper.cpp:562] [Rank 7] Running collective: CollectiveFingerPrint(SequenceNumber=584, OpType=ALLGATHER, TensorShape=[216596992], TensorDtypes=Half, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[I ProcessGroupWrapper.cpp:562] [Rank 4] Running collective: CollectiveFingerPrint(SequenceNumber=584, OpType=ALLGATHER, TensorShape=[216596992], TensorDtypes=Half, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
[E ProcessGroupGloo.cpp:138] Rank 4 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[E ProcessGroupGloo.cpp:138] Rank 7 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
Traceback (most recent call last):
  File "/mnt/bn/mods-llm/code/lumen_train/lumen_train/minitrainer_moe_deepspeed.py", line 243, in <module>
    main()
  File "/mnt/bn/mods-llm/code/lumen_train/lumen_train/minitrainer_moe_deepspeed.py", line 240, in main
    trainer.train()
  File "/mnt/bn/mods-llm/code/lumen_train/lumen_train/minitrainer_moe_deepspeed.py", line 135, in train
    self.model_engine.step()
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/engine.py", line 2116, in step
    self._take_model_step(lr_kwargs)
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/engine.py", line 2022, in _take_model_step
    self.optimizer.step()
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1807, in step
    all_gather_dp_groups(partitioned_param_groups=self.parallel_partitioned_bit16_groups,
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/utils.py", line 972, in all_gather_dp_groups
    dist.all_gather(shard_list, shard_list[partition_id], dp_process_group[group_id])
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/comm/comm.py", line 117, in log_wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/comm/comm.py", line 236, in all_gather
    return cdb.all_gather(tensor_list=tensor_list, tensor=tensor, group=group, async_op=async_op)
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/comm/torch.py", line 200, in all_gather
    return torch.distributed.all_gather(tensor_list=tensor_list, tensor=tensor, group=group, async_op=async_op)
  File "/usr/local/lib/python3.9/dist-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/distributed/distributed_c10d.py", line 2808, in all_gather
    work = group.allgather([tensor_list], [tensor])
RuntimeError: Rank 4 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
 Original exception: 
[../third_party/gloo/gloo/transport/tcp/pair.cc:525] Read error [fdbd:dc02:16:642:4e00::fc]:14012: Connection reset by peer
Traceback (most recent call last):
  File "/mnt/bn/mods-llm/code/lumen_train/lumen_train/minitrainer_moe_deepspeed.py", line 243, in <module>
    main()
  File "/mnt/bn/mods-llm/code/lumen_train/lumen_train/minitrainer_moe_deepspeed.py", line 240, in main
    trainer.train()
  File "/mnt/bn/mods-llm/code/lumen_train/lumen_train/minitrainer_moe_deepspeed.py", line 135, in train
    self.model_engine.step()
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/engine.py", line 2116, in step
    self._take_model_step(lr_kwargs)
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/engine.py", line 2022, in _take_model_step
    self.optimizer.step()
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1807, in step
    all_gather_dp_groups(partitioned_param_groups=self.parallel_partitioned_bit16_groups,
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/runtime/utils.py", line 972, in all_gather_dp_groups
    dist.all_gather(shard_list, shard_list[partition_id], dp_process_group[group_id])
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/comm/comm.py", line 117, in log_wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/comm/comm.py", line 236, in all_gather
    return cdb.all_gather(tensor_list=tensor_list, tensor=tensor, group=group, async_op=async_op)
  File "/usr/local/lib/python3.9/dist-packages/deepspeed/comm/torch.py", line 200, in all_gather
    return torch.distributed.all_gather(tensor_list=tensor_list, tensor=tensor, group=group, async_op=async_op)
  File "/usr/local/lib/python3.9/dist-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/distributed/distributed_c10d.py", line 2808, in all_gather
    work = group.allgather([tensor_list], [tensor])
RuntimeError: Rank 7 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
 Original exception: 
[../third_party/gloo/gloo/transport/tcp/pair.cc:525] Read error [fdbd:dc02:16:642:4e00::fc]:13993: Connection reset by peer
[2024-01-02 20:23:26,326] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 118162
[2024-01-02 20:23:26,327] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 118163
[2024-01-02 20:23:26,327] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 118164
[2024-01-02 20:23:26,328] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 118165
[I ProcessGroupNCCL.cpp:875] [Rank 7] Destroyed 1communicators on CUDA device 7
[2024-01-02 20:23:36,022] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 118166
[2024-01-02 20:23:36,025] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 118167
[2024-01-02 20:23:36,026] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 118168
[2024-01-02 20:23:36,027] [ERROR] [launch.py:321:sigkill_handler][] exits with return code = -9

My training script (a simplified sketch of the engine setup inside minitrainer_moe_deepspeed.py is shown after it)

# Verbose c10d / torch.distributed logging for debugging
export TORCH_CPP_LOG_LEVEL=INFO
export TORCH_DISTRIBUTED_DEBUG=DETAIL

# WORKER_GPU, WORKER_NUM, WORKER_0_HOST and WORKER_0_PORT are set by the launch environment
export LAUNCHER="deepspeed \
    --num_gpus ${WORKER_GPU} \
    --num_nodes ${WORKER_NUM} \
    --master_addr ${WORKER_0_HOST} \
    --master_port ${WORKER_0_PORT}"

export CMD="$LAUNCHER minitrainer_moe_deepspeed.py \
    --model /model \
    --tokenizer xxx \
    --data xxx \
    --output_dir xxx \
    --evaluation_strategy no \
    --moe true \
    --seq_length 2048 \
    --gradient_checkpointing true \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --save_steps 2 \
    --learning_rate 2e-6 \
    --num_train_epochs 2
    "

echo $CMD
$CMD
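
Inside minitrainer_moe_deepspeed.py, the engine is set up roughly as in the sketch below (the function name build_engine and the config values are illustrative, not the exact ones); the ZeRO stage and the MoE parameter grouping are the parts relevant to the failing optimizer step:

import deepspeed
from deepspeed.moe.utils import split_params_into_different_moe_groups_for_optimizer

def build_engine(model):
    # Illustrative config only; the real run uses fp16 with ZeRO stage 1/2
    # (the traceback goes through runtime/zero/stage_1_and_2.py).
    ds_config = {
        "train_micro_batch_size_per_gpu": 1,
        "gradient_accumulation_steps": 1,
        "fp16": {"enabled": True},
        "zero_optimization": {"stage": 2},
    }

    # Split expert parameters into separate optimizer groups so ZeRO partitions
    # them within their expert-data-parallel groups (as in the DeepSpeed MoE examples).
    param_groups = split_params_into_different_moe_groups_for_optimizer(
        {"params": [p for p in model.parameters() if p.requires_grad], "name": "parameters"}
    )

    engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        model_parameters=param_groups,
        config=ds_config,
    )
    return engine, optimizer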

ds_report output

--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.1
 [WARNING]  using untested triton version (2.1.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/usr/local/lib/python3.9/dist-packages/torch']
torch version .................... 2.1.1+cu118
deepspeed install path ........... ['/usr/local/lib/python3.9/dist-packages/deepspeed']
deepspeed info ................... 0.10.3, unknown, unknown
torch cuda version ............... 11.8
torch hip version ................ None
nvcc version ..................... 11.8
deepspeed wheel compiled w. ...... torch 2.1, cuda 11.8
shared memory (/dev/shm) size .... 168.00 GB

System info:

shangzyu commented 5 months ago

I am running into the same problem too.

boolmriver commented 3 months ago

Has this been solved?