NVIDIA / nccl

Optimized primitives for collective multi-GPU communication

NCCL2.21 hangs at cudaLaunchKernelExC() #1317

Open leiyi666 opened 4 months ago

leiyi666 commented 4 months ago

Hello, while I was running the nccl-tests program, a network card suddenly went down, causing the program to hang at cudaLaunchKernelExC(). What is the reason, and how can I solve it? The NCCL version is 2.21.5.

Network card down information: (screenshot)

Stack information: (screenshot)
sjeaugey commented 4 months ago

It should not hang. An error should be generated by the network operation and reported to ncclCommGetAsyncError(), which should be checked by the application.

Edit: the application is actually the NCCL tests. Interesting. How did you launch the NCCL tests?
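
Roughly speaking, the kind of check I mean looks like this (a minimal sketch, not the actual nccl-tests code; the helper name and the abort-on-error policy are just illustrative): poll the stream instead of blocking in cudaStreamSynchronize(), and consult ncclCommGetAsyncError() so a network failure can be turned into an abort.

```cpp
#include <nccl.h>
#include <cuda_runtime.h>

/* Wait for all work on `stream` to finish, while watching the communicator
 * for asynchronous errors reported by the network layer. */
ncclResult_t waitForCompletion(ncclComm_t comm, cudaStream_t stream) {
  while (1) {
    cudaError_t cerr = cudaStreamQuery(stream);
    if (cerr == cudaSuccess) return ncclSuccess;           /* all work done */
    if (cerr != cudaErrorNotReady) return ncclUnhandledCudaError;

    ncclResult_t asyncErr;
    ncclResult_t ret = ncclCommGetAsyncError(comm, &asyncErr);
    if (ret != ncclSuccess) return ret;
    if (asyncErr != ncclSuccess && asyncErr != ncclInProgress) {
      ncclCommAbort(comm);   /* tear down the communicator on a network error */
      return asyncErr;
    }
    /* Optionally sleep/yield here to avoid a hot spin. */
  }
}
```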

leiyi666 commented 4 months ago

> It should not hang. An error should be generated by the network operation and reported to ncclCommGetAsyncError(), which should be checked by the application.
>
> Edit: the application is actually the NCCL tests. Interesting. How did you launch the NCCL tests?

Thank you for your answer. My nccl-tests was downloaded from https://github.com/NVIDIA/nccl-tests and launched through mpirun. The command is:

```shell
unset CUDA_VISIBLE_DEVICES && \
mpirun -np 8 -H 0.0.0.0:8 -v \
--allow-run-as-root --bind-to none --map-by slot \
--mca btl_tcp_if_include bond1 --mca oob_tcp_if_include bond1 \
-x NCCL_SOCKET_IFNAME=bond1 -x UCX_NET_DEVICES=bond1 \
-x NCCL_IB_DISABLE=0 -x NCCL_IB_GID_INDEX=3 -x NCCL_IB_CUDA_SUPPORT=1 \
-x NCCL_MIN_CTAS=4 -x NCCL_P2P_DISABLE=1 -x NCCL_SHM_DISABLE=1 -x _BOOT_STORAGE=0 \
-x NCCL_IB_HCA=mlx5_bond_1,mlx5_bond_2,mlx5_bond_3,mlx5_bond_4,mlx5_bond_5,mlx5_bond_6,mlx5_bond_7,mlx5_bond_8 \
-x NCCL_COLLNET_ENABLE=0 -x LD_LIBRARY_PATH=$LD_LIBRARY_PATH \
-x CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 -x SHARP_COLL_ENABLE_SAT=0 \
-x NCCL_NET_GDR_LEVEL=2 -x NCCL_IB_QPS_PER_CONNECTION=4 \
-x NCCL_IB_TC=160 -x NCCL_PXN_DISABLE=0 \
-mca plm_rsh_args "-p 12345" all_reduce_perf -b 2G -e 2G -f 2 -g 1 -n 2000 -z 0
```

The ncclCommGetAsyncError() call you mentioned does exist in testStreamSynchronize() in nccl-tests, but since I did not make the NCCL collectives blocking, nccl-tests only enters testStreamSynchronize() after it has issued the collective operations (ncclAllReduce) for all iterations. Right now the program is stuck while issuing those collectives, so it never reaches testStreamSynchronize().
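
Roughly, the structure looks like this (a simplified sketch, not the actual nccl-tests source; buffer setup and the waitForCompletion() helper from the sketch in the earlier comment are assumed):

```cpp
#include <nccl.h>
#include <cuda_runtime.h>

ncclResult_t waitForCompletion(ncclComm_t comm, cudaStream_t stream); /* earlier sketch */

ncclResult_t runIterations(const void *sendbuff, void *recvbuff, size_t count,
                           int niters, ncclComm_t comm, cudaStream_t stream) {
  for (int iter = 0; iter < niters; iter++) {
    /* With -n 2000, 2000 allreduces are enqueued back to back. Once the GPU's
     * launch queue is full, the kernel launch inside ncclAllReduce
     * (cudaLaunchKernelExC) blocks right here, on the enqueue side. */
    ncclResult_t res = ncclAllReduce(sendbuff, recvbuff, count, ncclFloat,
                                     ncclSum, comm, stream);
    if (res != ncclSuccess && res != ncclInProgress) return res;
  }
  /* Only after the loop does the test wait for completion (testStreamSynchronize
   * in nccl-tests), which is where ncclCommGetAsyncError() gets checked. */
  return waitForCompletion(comm, stream);
}
```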

sjeaugey commented 4 months ago

Ah, I see. You're running with -n 2000. That's why NCCL eventually gets stuck trying to enqueue a kernel to the GPU (the GPU queue is full) and this is a blocking call; we can't do much about it. We would need cudaLaunchKernelExC to have a non-blocking variant which would allow us to retry later.
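
To illustrate the mechanism (a standalone sketch unrelated to NCCL itself; the exact launch-queue depth is driver-dependent): block a stream behind a spinning kernel and keep launching. The launch calls return immediately until the queue fills, after which the launching host thread itself blocks, which is what happens inside cudaLaunchKernelExC here.

```cpp
#include <cstdio>
#include <chrono>
#include <thread>
#include <cuda_runtime.h>

__global__ void spin_kernel(volatile int *flag) {
  while (*flag == 0) { }          /* hold the stream until the host releases us */
}

__global__ void noop_kernel() { }

int main() {
  int *h_flag = NULL, *d_flag = NULL;
  cudaHostAlloc((void **)&h_flag, sizeof(int), cudaHostAllocMapped);
  *h_flag = 0;
  cudaHostGetDevicePointer((void **)&d_flag, h_flag, 0);

  cudaStream_t s;
  cudaStreamCreate(&s);

  /* The first kernel blocks the stream until the host sets the flag. */
  spin_kernel<<<1, 1, 0, s>>>(d_flag);

  /* Release it from another thread after a few seconds so the demo terminates. */
  std::thread releaser([h_flag] {
    std::this_thread::sleep_for(std::chrono::seconds(5));
    *h_flag = 1;
  });

  /* Each launch returns immediately while the launch queue has room; once it
   * is full, the launch call itself blocks the host thread. */
  for (int i = 0; i < 8192; i++) {
    noop_kernel<<<1, 1, 0, s>>>();
    if (i % 256 == 0) printf("launched %d kernels\n", i);
  }

  releaser.join();
  cudaStreamSynchronize(s);
  cudaFreeHost(h_flag);
  printf("done\n");
  return 0;
}
```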

leiyi666 commented 4 months ago

> Ah, I see. You're running with -n 2000. That's why NCCL eventually gets stuck trying to enqueue a kernel to the GPU (the GPU queue is full) and this is a blocking call; we can't do much about it. We would need cudaLaunchKernelExC to have a non-blocking variant which would allow us to retry later.

Oh, I get it. Now I am trying to resolve this fault by following an external reference (link).

I created a separate thread to monitor the status of the network card. Once an exception is detected, I call ncclCommAbort from that thread, but it does not solve the problem; instead, the program now hangs in another function (cudaStreamSynchronize). Is it not possible to call abort this way?

(stack trace screenshot)
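
The monitoring thread looks roughly like this (a simplified sketch; the NIC check is a placeholder for how the failure is actually detected, and the names are illustrative):

```cpp
#include <atomic>
#include <chrono>
#include <thread>
#include <nccl.h>

static std::atomic<bool> stopWatchdog{false};

/* Placeholder: the real monitor would read something like
 * /sys/class/net/bond1/operstate or query the RDMA device state. */
static bool nicIsDown() { return false; }

/* Separate thread that aborts the communicator when the NIC goes down.
 * Note: as reported above, calling ncclCommAbort() this way did not unblock
 * the run; the main thread ended up stuck in cudaStreamSynchronize(). */
static void watchdog(ncclComm_t comm) {
  while (!stopWatchdog.load()) {
    if (nicIsDown()) {
      ncclCommAbort(comm);
      return;
    }
    std::this_thread::sleep_for(std::chrono::seconds(1));
  }
}

/* Usage (illustrative): std::thread t(watchdog, comm); ... stopWatchdog = true; t.join(); */
```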
sjeaugey commented 4 months ago

In theory, setting the communicator in non-blocking mode should make NCCL not block in the ncclAllReduce call and return ncclInProgress instead. But I'm not sure we currently handle the case of cudaLaunchKernelExC blocking.
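
For reference, a minimal sketch of creating the communicator in non-blocking mode (error handling trimmed; the ncclUniqueId exchange is assumed to happen elsewhere). With config.blocking = 0, calls such as ncclAllReduce() may return ncclInProgress instead of blocking, and errors are picked up through ncclCommGetAsyncError(); whether this also covers a blocking cudaLaunchKernelExC is exactly the part I'm not sure about.

```cpp
#include <nccl.h>

ncclResult_t initNonBlockingComm(ncclComm_t *comm, int nranks,
                                 ncclUniqueId id, int rank) {
  ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
  config.blocking = 0;                  /* put the communicator in non-blocking mode */

  ncclResult_t res = ncclCommInitRankConfig(comm, nranks, id, rank, &config);
  if (res == ncclInProgress) {
    /* Initialization continues in the background; poll until it finishes. */
    ncclResult_t state;
    do {
      ncclCommGetAsyncError(*comm, &state);
    } while (state == ncclInProgress);
    res = state;
  }
  return res;
}
```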

leiyi666 commented 4 months ago

> In theory, setting the communicator in non-blocking mode should make NCCL not block in the ncclAllReduce call and return ncclInProgress instead. But I'm not sure we currently handle the case of cudaLaunchKernelExC blocking.

Thank you for your answer. Can I avoid this problem by using ncclCommAbort? I call this function now, but it hangs in cudaStreamSynchronize; the stack information is shown in my previous comment.