zhoulei-biubiu opened 4 months ago
Perhaps unrelated to the NCCL error you encountered: torch.nn.parallel.DistributedDataParallel
(or what we call "DDP") is preferred over the torch.nn.parallel.data_parallel
used in this issue.
For a detailed comparison, see: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#comparison-between-dataparallel-and-distributeddataparallel
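For illustration, here is a minimal single-node sketch of the DDP pattern that tutorial describes (one process per GPU); the toy Linear model, the master port, and the random batch are placeholders, not taken from this issue:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank: int, world_size: int):
    # One process per GPU; NCCL is the recommended backend for CUDA tensors.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")  # placeholder port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(10, 10).to(rank)  # stand-in for the real model
    ddp_model = DDP(model, device_ids=[rank])
    opt = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)

    x = torch.randn(8, 10, device=rank)  # stand-in for a real batch
    ddp_model(x).sum().backward()        # gradients are all-reduced across ranks
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```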
With respect to the reported error, it looks like cuMemCreate
failed for some reason. I recommend upgrading to the current NCCL version (2.21.5) and retesting. As a workaround, running with NCCL_CUMEM_ENABLE=0
should avoid making those calls.
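In practice the workaround just means exporting the variable before the first NCCL communicator is created. A minimal sketch, assuming the process group is initialized from Python rather than by a launcher:

```python
import os

# Must be set before NCCL initializes its first communicator, i.e. before
# torch.distributed.init_process_group("nccl", ...) runs in this process.
# Equivalent shell form: NCCL_CUMEM_ENABLE=0 python train.py
os.environ["NCCL_CUMEM_ENABLE"] = "0"

import torch.distributed as dist
# dist.init_process_group("nccl", ...)  # NCCL now skips the cuMem* allocation path
```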
During the execution of HuggingFace Trainer.train(), I encountered RuntimeError: NCCL Error 1: unhandled cuda error multiple times. The error occurs intermittently, at the last step of an epoch. I also wrapped the training process in a Ray task via @ray.remote(num_cpus=8, num_gpus=4); I don't know whether that matters.
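For reference, a minimal sketch of that wrapping (a hypothetical reconstruction; the actual model, data, and TrainingArguments construction are omitted):

```python
import ray

ray.init()

@ray.remote(num_cpus=8, num_gpus=4)
def train_task():
    # Ray reserves 4 GPUs and sets CUDA_VISIBLE_DEVICES for this task.
    # With several visible GPUs and no distributed launcher, HuggingFace
    # Trainer falls back to torch.nn.DataParallel, which is the usage the
    # reply above suggests replacing with DDP.
    from transformers import Trainer, TrainingArguments
    # trainer = Trainer(model=..., args=TrainingArguments(...), ...)
    # trainer.train()

ray.get(train_task.remote())
```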
stderr:
NCCL debug info:
torch version & env info
PyTorch version: 2.2.1+cu122
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.36
Python version: 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.129.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU(s) scaling MHz: 86%
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 108 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] byted-torch==2.2.1+cu122
[pip3] byted_torch_monitor==0.0.1
[pip3] numpy==1.26.4
[pip3] torch==2.2.1+cu122
[pip3] torchaudio==2.2.1
[pip3] torchvision==0.17.1
[pip3] triton==2.2.0
[conda] Could not collect
NCCL Version: (2, 19, 3)
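(That tuple is what PyTorch reports for its bundled NCCL; for example:)

```python
import torch

# Query the NCCL version PyTorch was built against.
print(torch.cuda.nccl.version())  # -> (2, 19, 3) on this install
```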
nvidia-smi topo -m
        GPU0   GPU1   GPU2   GPU3   mlx5_0 mlx5_1 mlx5_2 mlx5_3 CPU Affinity   NUMA Affinity
GPU0    X      NV12   NV12   NV12   SYS    SYS    PXB    NODE   32-63,96-127   1
GPU1    NV12   X      NV12   NV12   SYS    SYS    PXB    NODE   32-63,96-127   1
GPU2    NV12   NV12   X      NV12   SYS    SYS    NODE   PXB    32-63,96-127   1
GPU3    NV12   NV12   NV12   X      SYS    SYS    NODE   PXB    32-63,96-127   1
mlx5_0  SYS    SYS    SYS    SYS    X      NODE   SYS    SYS
mlx5_1  SYS    SYS    SYS    SYS    NODE   X      SYS    SYS
mlx5_2  PXB    PXB    NODE   NODE   SYS    SYS    X      NODE
mlx5_3  NODE   NODE   PXB    PXB    SYS    SYS    NODE   X
Legend:
X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks
nvcc version