NVIDIA / nccl

Optimized primitives for collective multi-GPU communication

RuntimeError: NCCL Error 1: unhandled cuda error (run with NCCL_DEBUG=INFO for details) when torch._C._broadcast_coalesced #1283

Open zhoulei-biubiu opened 4 months ago

zhoulei-biubiu commented 4 months ago

During execution of the HuggingFace Trainer.train(), I encountered RuntimeError: NCCL Error 1: unhandled cuda error multiple times. The error occurs intermittently, at the last step of each epoch. The training process is also wrapped in a Ray task via @ray.remote(num_cpus=8, num_gpus=4); I don't know whether that matters.
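For context, the setup is roughly the following sketch (the function name, Trainer arguments, and dataset are illustrative placeholders, not the actual code):

```python
import ray
from transformers import Trainer, TrainingArguments

@ray.remote(num_cpus=8, num_gpus=4)  # training runs inside a Ray task
def train_rft(model, train_dataset):
    # Illustrative arguments; the real run uses a fuller TrainingArguments config.
    args = TrainingArguments(output_dir="./out", num_train_epochs=3)
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    # With 4 visible GPUs and no distributed launcher, Trainer wraps the model
    # in torch.nn.DataParallel, whose replicate() step calls
    # torch._C._broadcast_coalesced -- the call that fails below.
    return trainer.train()
```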

stderr:

File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/nn/parallel/data_parallel.py", line 184, in forward replicas = self.replicate(self.module, self.device_ids[:len(inputs)]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/nn/parallel/data_parallel.py", line 189, in replicate return replicate(module, device_ids, not torch.is_grad_enabled()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/nn/parallel/replicate.py", line 110, in replicate param_copies = _broadcast_coalesced_reshape(params, devices, detach) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/nn/parallel/replicate.py", line 83, in _broadcast_coalesced_reshape tensor_copies = Broadcast.apply(devices, tensors) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/autograd/function.py", line 553, in apply return super().apply(args, kwargs) # type: ignore[misc] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/nn/parallel/_functions.py", line 23, in forward outputs = comm.broadcast_coalesced(inputs, ctx.target_gpus) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/nn/parallel/comm.py", line 57, in broadcast_coalesced return torch._C._broadcast_coalesced(tensors, devices, buffer_size) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: NCCL Error 1: unhandled cuda error (run with NCCL_DEBUG=INFO for details)

NCCL debug info:

(train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 01/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 02/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 03/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658230 [1] NCCL INFO P2P Chunksize set to 524288 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 04/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 05/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 06/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 07/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 08/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 09/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 10/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 11/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 12/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 13/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 14/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 15/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 16/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 17/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 18/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 19/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 20/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 21/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 22/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 23/24 : 0 1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1 [4] 1/-1/-1->0->-1 [5] 1/-1/-1->0->-1 [6] -1/-1/-1->0->1 [7] -1/-1/-1->0->1 [8] -1/-1/-1->0->1 [9] -1/-1/-1->0->1 [10] -1/-1/-1->0->1 [11] -1/-1/-1->0->1 [12] 1/-1/-1->0->-1 [13] 1/-1/-1->0->-1 [14] 1/-1/-1->0->-1 [15] 1/-1/-1->0->-1 [16] 1/-1/-1->0->-1 [17] 1/-1/-1->0->-1 [18] -1/-1/-1->0->1 [19] -1/-1/-1->0->1 [20] -1/-1/-1->0->1 [21] -1/-1/-1->0->1 [22] -1/-1/-1->0->1 [23] -1/-1/-1->0->1 (train_rft pid=2657296) 2024-05-11 14:10:51.939 n176-080-198:2657296:2658229 [0] NCCL INFO P2P Chunksize set to 524288 (train_rft pid=2657296) 2024-05-11 14:10:52.140 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 00/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.141 
n176-080-198:2657296:2658229 [0] NCCL INFO Channel 00/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.142 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 01/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.142 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 01/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.143 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 02/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.144 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 02/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.145 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 03/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.145 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 03/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.146 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 04/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.146 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 04/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.147 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 05/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.148 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 05/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.148 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 06/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.149 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 06/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.150 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 07/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.150 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 07/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.151 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 08/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.151 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 08/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.152 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 09/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.153 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 09/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.154 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 10/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.154 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 10/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.155 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 11/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.156 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 11/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.157 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 12/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.157 
n176-080-198:2657296:2658229 [0] NCCL INFO Channel 12/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.158 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 13/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.158 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 13/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.159 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 14/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.160 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 14/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.161 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 15/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.161 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 15/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.162 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 16/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.162 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 16/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.163 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 17/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.174 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 17/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.175 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 18/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.176 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 18/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.176 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 19/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.177 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 19/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.178 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 20/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.178 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 20/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.179 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 21/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.179 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 21/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.180 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 22/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.180 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 22/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.181 n176-080-198:2657296:2658230 [1] NCCL INFO Channel 23/0 : 1[1] -> 0[0] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.181 n176-080-198:2657296:2658229 [0] NCCL INFO Channel 23/0 : 0[0] -> 1[1] via P2P/direct pointer/read (train_rft pid=2657296) 2024-05-11 14:10:52.224 n176-080-198:2657296:2658230 [1] NCCL INFO Connected all rings (train_rft pid=2657296) 2024-05-11 14:10:52.224 n176-080-198:2657296:2658229 [0] NCCL INFO Connected all rings 
(train_rft pid=2657296) 2024-05-11 14:10:52.224 n176-080-198:2657296:2658230 [1] NCCL INFO Connected all trees
(train_rft pid=2657296) 2024-05-11 14:10:52.224 n176-080-198:2657296:2658229 [0] NCCL INFO Connected all trees
(train_rft pid=2657296) 2024-05-11 14:10:52.224 n176-080-198:2657296:2658230 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
(train_rft pid=2657296) 2024-05-11 14:10:52.224 n176-080-198:2657296:2658230 [1] NCCL INFO 24 coll channels, 0 nvls channels, 32 p2p channels, 32 p2p channels per peer
(train_rft pid=2657296) 2024-05-11 14:10:52.224 n176-080-198:2657296:2658229 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
(train_rft pid=2657296) 2024-05-11 14:10:52.224 n176-080-198:2657296:2658229 [0] NCCL INFO 24 coll channels, 0 nvls channels, 32 p2p channels, 32 p2p channels per peer
(train_rft pid=2657296)
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658232 [0] include/alloc.h:102 NCCL WARN Cuda failure 1 'invalid argument'
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658232 [0] NCCL INFO transport/p2p.cc:196 -> 1
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658232 [0] NCCL INFO transport/net.cc:500 -> 1
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658232 [0] NCCL INFO transport/net.cc:551 -> 1
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658229 [0] NCCL INFO init.cc:1224 -> 1
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658229 [0] NCCL INFO init.cc:1396 -> 1
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658229 [0] NCCL INFO group.cc:64 -> 1 [Async thread]
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658230 [1] NCCL INFO misc/socket.cc:47 -> 3
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658230 [1] NCCL INFO misc/socket.cc:750 -> 3
(train_rft pid=2657296)
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658230 [1] proxy.cc:1172 NCCL WARN Socket recv failed while polling for opId=0x7f4620207480
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658230 [1] NCCL INFO init.cc:1224 -> 3
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658230 [1] NCCL INFO init.cc:1396 -> 3
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2658230 [1] NCCL INFO group.cc:64 -> 3 [Async thread]
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2657296 [0] NCCL INFO group.cc:418 -> 1
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2657296 [0] NCCL INFO group.cc:95 -> 1
(train_rft pid=2657296) 2024-05-11 14:10:52.227 n176-080-198:2657296:2657296 [0] NCCL INFO init.cc:1734 -> 1
(train_rft pid=2657296) 2024-05-11 14:10:52.229 n176-080-198:2657296:2658231 [1] NCCL INFO misc/socket.cc:47 -> 3
(train_rft pid=2657296) 2024-05-11 14:10:52.229 n176-080-198:2657296:2658231 [1] NCCL INFO misc/socket.cc:58 -> 3
(train_rft pid=2657296) 2024-05-11 14:10:52.229 n176-080-198:2657296:2658231 [1] NCCL INFO misc/socket.cc:773 -> 3
(train_rft pid=2657296) 2024-05-11 14:10:52.229 n176-080-198:2657296:2658231 [1] NCCL INFO proxy.cc:1374 -> 3
(train_rft pid=2657296) 2024-05-11 14:10:52.229 n176-080-198:2657296:2658231 [1] NCCL INFO proxy.cc:1415 -> 3
(train_rft pid=2657296)
(train_rft pid=2657296) 2024-05-11 14:10:52.229 n176-080-198:2657296:2658231 [1] proxy.cc:1557 NCCL WARN [Proxy Service 1] Failed to execute operation SharedInit from rank 1, retcode 3

torch version & env info

Torch Version: 2.2.1+cu122

torch.utils.collect_env

:128: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
Collecting environment information...

PyTorch version: 2.2.1+cu122
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.36

Python version: 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB

Nvidia driver version: 470.129.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU(s) scaling MHz: 86%
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 108 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] byted-torch==2.2.1+cu122
[pip3] byted_torch_monitor==0.0.1
[pip3] numpy==1.26.4
[pip3] torch==2.2.1+cu122
[pip3] torchaudio==2.2.1
[pip3] torchvision==0.17.1
[pip3] triton==2.2.0
[conda] Could not collect

NCCL Version: (2, 19, 3)
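(The tuple above is presumably the NCCL version bundled with PyTorch, e.g. obtained via:)

```python
import torch
print(torch.cuda.nccl.version())  # -> (2, 19, 3)
```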

nvidia-smi topo -m

    GPU0    GPU1    GPU2    GPU3    mlx5_0  mlx5_1  mlx5_2  mlx5_3  CPU Affinity    NUMA Affinity

GPU0     X      NV12    NV12    NV12    SYS     SYS     PXB     NODE    32-63,96-127    1
GPU1    NV12     X      NV12    NV12    SYS     SYS     PXB     NODE    32-63,96-127    1
GPU2    NV12    NV12     X      NV12    SYS     SYS     NODE    PXB     32-63,96-127    1
GPU3    NV12    NV12    NV12     X      SYS     SYS     NODE    PXB     32-63,96-127    1
mlx5_0  SYS     SYS     SYS     SYS      X      NODE    SYS     SYS
mlx5_1  SYS     SYS     SYS     SYS     NODE     X      SYS     SYS
mlx5_2  PXB     PXB     NODE    NODE    SYS     SYS      X      NODE
mlx5_3  NODE    NODE    PXB     PXB     SYS     SYS     NODE     X

Legend:

X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks

nvcc version

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:02:13_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0

env | grep -E "NCCL"

NCCL_SOCKET_IFNAME=eth0
NCCL_DEBUG=INFO
NCCL_IB_HCA=mlx5
NCCL_IB_GID_INDEX=3
NCCL_IB_TIMEOUT=25
NCCL_IB_DISABLE=0
NCCL_IB_RETRY_CNT=7
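These are set in the driver shell; since training runs inside a Ray task, a sketch of how they could be passed through to the worker explicitly (my assumption, not something I've verified changes the outcome) is:

```python
import ray

# Propagate the NCCL settings to Ray worker processes via runtime_env.
ray.init(runtime_env={"env_vars": {
    "NCCL_SOCKET_IFNAME": "eth0",
    "NCCL_DEBUG": "INFO",
    "NCCL_IB_HCA": "mlx5",
    "NCCL_IB_GID_INDEX": "3",
    "NCCL_IB_TIMEOUT": "25",
    "NCCL_IB_DISABLE": "0",
    "NCCL_IB_RETRY_CNT": "7",
}})
```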

kwen2501 commented 4 months ago

Perhaps unrelated to the NCCL error you encountered: torch.nn.parallel.DistributedDataParallel (or what we call "DDP") is preferred over the torch.nn.parallel.data_parallel used in this issue.

For detailed reason, please see: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#comparison-between-dataparallel-and-distributeddataparallel
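A minimal DDP skeleton (illustrative only, not your training code) looks like the following, launched with e.g. `torchrun --nproc_per_node=4 train.py`; under torchrun the HuggingFace Trainer also selects DDP automatically instead of DataParallel:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")           # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])        # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])   # gradients sync via NCCL all-reduce

    out = ddp_model(torch.randn(8, 1024, device=f"cuda:{local_rank}"))
    out.sum().backward()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```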

kiskra-nvidia commented 4 months ago

With respect to the reported error, it looks like cuMemCreate failed for some reason. I recommend upgrading to the current NCCL version (2.21.5) and retesting. As a workaround, running with NCCL_CUMEM_ENABLE=0 should avoid making those calls.
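For example (a sketch only; where exactly to put it depends on how the Ray task is launched), the variable needs to be in the worker's environment before the first NCCL communicator is created:

```python
import os
os.environ["NCCL_CUMEM_ENABLE"] = "0"  # skip the cuMem*-based allocation path

# ...then start the Ray task / Trainer.train() as before; NCCL reads the
# variable when the communicator is initialized.
```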