Open leiyi666 opened 4 months ago
Hello, when I was running the nccl-tests program, a network card suddenly went down, causing the program to hang in cudaLaunchKernelExC(). What is the reason, and how can I solve it? The NCCL version is 2.21.5.
Network card down information:
Stack information:
It should not hang. An error should be generated by the network operation and reported to ncclCommGetAsyncError(), which should be checked by the application.
Edit: the application is actually the NCCL tests. Interesting. How did you launch the NCCL tests?
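To be concrete, here is a minimal sketch of the kind of check I mean (waitWithErrorCheck is just an illustrative helper written for this issue, not the actual nccl-tests code):

#include <stdio.h>
#include <cuda_runtime.h>
#include <nccl.h>

/* Poll the stream and the communicator together: return as soon as either
   the stream completes or NCCL reports an asynchronous (e.g. network) error. */
ncclResult_t waitWithErrorCheck(ncclComm_t comm, cudaStream_t stream) {
  while (1) {
    cudaError_t cerr = cudaStreamQuery(stream);      /* non-blocking check */
    if (cerr == cudaSuccess) return ncclSuccess;
    if (cerr != cudaErrorNotReady) return ncclUnhandledCudaError;

    ncclResult_t aerr;
    ncclCommGetAsyncError(comm, &aerr);              /* async error check */
    if (aerr != ncclSuccess && aerr != ncclInProgress) {
      fprintf(stderr, "NCCL async error: %s\n", ncclGetErrorString(aerr));
      return aerr;                  /* caller can then call ncclCommAbort() */
    }
  }
}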
Thank you for your answer. I downloaded nccl-tests from https://github.com/NVIDIA/nccl-tests and launched it through mpirun. The command is:
unset CUDA_VISIBLE_DEVICES && \
mpirun -np 8 -H 0.0.0.0:8 -v \
--allow-run-as-root --bind-to none --map-by slot \
--mca btl_tcp_if_include bond1 --mca oob_tcp_if_include bond1 \
-x NCCL_SOCKET_IFNAME=bond1 -x UCX_NET_DEVICES=bond1 \
-x NCCL_IB_DISABLE=0 -x NCCL_IB_GID_INDEX=3 -x NCCL_IB_CUDA_SUPPORT=1 \
-x NCCL_MIN_CTAS=4 -x NCCL_P2P_DISABLE=1 -x NCCL_SHM_DISABLE=1 -x _BOOT_STORAGE=0 \
-x NCCL_IB_HCA=mlx5_bond_1,mlx5_bond_2,mlx5_bond_3,mlx5_bond_4,mlx5_bond_5,mlx5_bond_6,mlx5_bond_7,mlx5_bond_8 \
-x NCCL_COLLNET_ENABLE=0 -x LD_LIBRARY_PATH=$LD_LIBRARY_PATH \
-x CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 -x SHARP_COLL_ENABLE_SAT=0 \
-x NCCL_NET_GDR_LEVEL=2 -x NCCL_IB_QPS_PER_CONNECTION=4 \
-x NCCL_IB_TC=160 -x NCCL_PXN_DISABLE=0 \
-mca plm_rsh_args "-p 12345" all_reduce_perf -b 2G -e 2G -f 2 -g 1 -n 2000 -z 0
The ncclCommGetAsyncError() you mentioned is called in testStreamSynchronize() in nccl-tests. But since the NCCL collectives are not blocking, nccl-tests only enters testStreamSynchronize() after enqueuing the collective operations (ncclAllReduce) for all iterations. Right now the program is stuck while enqueuing a collective, so it never reaches testStreamSynchronize().
Ah, I see. You're running with -n 2000. That's why NCCL eventually gets stuck trying to enqueue a kernel to the GPU (the GPU queue is full); this is a blocking call, so we can't do much about it. We would need cudaLaunchKernelExC to have a non-blocking variant that would allow us to retry later.
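To illustrate the failure mode with a standalone sketch (this is not NCCL code, just the general pattern): a host loop that keeps enqueuing kernels on a stream that never drains will eventually fill the driver's pending-launch queue, at which point the launch call itself blocks on the host.

#include <cuda_runtime.h>

/* A kernel that effectively never finishes, so the stream never drains. */
__global__ void spinForever() {
  while (clock64() >= 0) { }
}

int main() {
  cudaStream_t s;
  cudaStreamCreate(&s);
  for (int i = 0; i < 100000; i++) {
    /* Each launch is asynchronous at first, but once the driver's internal
       launch queue is full, this call blocks the host thread -- the same
       situation the NCCL tests hit inside cudaLaunchKernelExC. */
    spinForever<<<1, 1, 0, s>>>();
  }
  return 0;  /* in practice never reached */
}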
Oh, I see. I tried to resolve this fault as follows: I created a separate thread to monitor the status of the network card, and once an exception is detected, that thread calls ncclCommAbort. But this does not solve the problem; instead, the program hangs in another function (cudaStreamSynchronize). Is it not possible to call abort this way?
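Roughly, my monitor thread looks like the following sketch (nicIsDown() is a placeholder for my actual NIC health check):

#include <pthread.h>
#include <unistd.h>
#include <nccl.h>

extern int nicIsDown(void);   /* placeholder: the real NIC health probe */

/* Monitor thread: poll the NIC and abort the communicator on failure. */
static void* nicWatchdog(void* arg) {
  ncclComm_t comm = (ncclComm_t)arg;
  while (!nicIsDown())
    sleep(1);                 /* check once per second */
  ncclCommAbort(comm);        /* abort from the monitor thread */
  return NULL;
}

/* Started with: pthread_create(&tid, NULL, nicWatchdog, (void*)comm); */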
In theory, setting the communicator in non-blocking mode should make NCCL not block in the ncclAllReduce call and return ncclInProgress instead. But I'm not sure we currently handle the case of cudaLaunchKernelExC blocking.
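Something along these lines (a minimal sketch; ncclConfig_t and non-blocking mode require NCCL >= 2.14):

#include <nccl.h>

/* Create a communicator with blocking disabled. Calls on it (including init
   itself) may return ncclInProgress, which the application polls for via
   ncclCommGetAsyncError(). */
ncclResult_t initNonBlocking(ncclComm_t* comm, int nranks,
                             ncclUniqueId id, int rank) {
  ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
  config.blocking = 0;                          /* non-blocking mode */
  ncclResult_t res = ncclCommInitRankConfig(comm, nranks, id, rank, &config);
  if (res == ncclInProgress) {
    do {
      ncclCommGetAsyncError(*comm, &res);       /* poll until init is posted */
    } while (res == ncclInProgress);
  }
  return res;
}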
Thank you for your answer. Can I avoid this problem by using ncclCommAbort? I call this function now, but it hangs in cudaStreamSynchronize; the stack trace is shown in my previous reply.