Describe the bug
When I upgrade to DeepSpeed 0.14.3, training does not progress because all gradients and gradient norms are zero. Based on git bisect, I think it's caused by this PR: https://github.com/microsoft/DeepSpeed/pull/5613
This was annoying to debug, because something else in 0.14.3 seems to cause stochastic segfaults during the first training step, which made it hard to trace. (FWIW, here's a log from one of the segfaults; I'm not sure what's going on there, but it always happens during the first step of training.)
[860301:1]:[860dbb3d-01:575573:0:579475] Caught signal 11 (Segmentation fault: invalid permissions for mapped object at address 0x148186426110)
[860301:1]:==== backtrace (tid: 579475) ====
[860301:1]: 0 0x0000000000042520 __sigaction() ???:0
[860301:1]: 1 0x00000000000b2e00 ucp_wireup_get_dst_rsc_indices() /build-result/src/hpcx-v2.17.1-gcc-mlnx_ofed-ubuntu22.04-cuda12-x86_64/ucx-02432d35d8228f44e9a3b809964cccdebc45703a/src/ucp/wireup/wireup.c:1369
[860301:1]:=================================
W0621 08:08:30.041000 22703366347648 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 575572 closing signal SIGTERM
W0621 08:08:30.043000 22703366347648 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 575574 closing signal SIGTERM
W0621 08:08:30.045000 22703366347648 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 575575 closing signal SIGTERM
W0621 08:08:30.047000 22703366347648 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 575576 closing signal SIGTERM
W0621 08:08:30.062000 22703366347648 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 575577 closing signal SIGTERM
W0621 08:08:30.067000 22703366347648 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 575578 closing signal SIGTERM
W0621 08:08:30.068000 22703366347648 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 575579 closing signal SIGTERM
[860301:6]:[rank6]:W0621 08:08:30.113000 22697319532096 torch/_inductor/compile_worker/subproc_pool.py:126] SubprocPool unclean exit
[860301:3]:[rank3]:W0621 08:08:30.105000 22697108514368 torch/_inductor/compile_worker/subproc_pool.py:126] SubprocPool unclean exit
[860301:0]:[rank0]:W0621 08:08:30.102000 22722195838528 torch/_inductor/compile_worker/subproc_pool.py:126] SubprocPool unclean exit
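For completeness, this is roughly how I'm confirming that the gradients really are zero. It's a minimal sketch, assuming `engine` is the model object returned by Accelerate's `prepare()` in my script, and it runs between `backward()` and the optimizer step:

```python
# Minimal sketch of the gradient check (names like `engine` are from my script,
# not a DeepSpeed requirement). Run after backward(), before the optimizer step.
from deepspeed.utils import safe_get_full_grad

def dump_grad_norms(engine, max_printed=10):
    total_sq = 0.0
    printed = 0
    for name, param in engine.named_parameters():
        grad = safe_get_full_grad(param)  # full fp32 grad, even when the param is partitioned
        if grad is None:
            continue
        norm = grad.float().norm().item()
        total_sq += norm * norm
        if printed < max_printed:
            print(f"{name}: grad norm = {norm}")
            printed += 1
    print(f"recomputed global grad norm = {total_sq ** 0.5}")
```

On 0.14.3 every norm printed here is 0.0, matching the zero gradient norms described above.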
To Reproduce
Unfortunately, this may be tricky to reproduce. I'm fine-tuning Llama-3-70B-Instruct on two 8x H100 machines, using DeepSpeed / HF Transformers / Accelerate / PyTorch / NCCL on Ubuntu 22.04. I'm not using torch.compile (too many bugs), but there's a fair amount of Triton code in the pipeline for speed, which might be why torch/_inductor/compile_worker is running (?).
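For reference, the DeepSpeed side is configured through Accelerate, and the config is roughly of this shape. This is a sketch only: the ZeRO stage, dtype, and "auto" placeholders below are illustrative, not my exact settings.

```python
# Rough shape of the DeepSpeed config passed through Accelerate.
# The stage, dtype, and "auto" placeholders are illustrative, not exact values.
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
        "contiguous_gradients": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}
```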
ds_report output
/home/alyssavance/miniforge3/envs/brr/bin/ds_report:4: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
__import__('pkg_resources').require('deepspeed==0.14.3+fbdf0eaf')
[2024-06-21 08:52:55,080] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The default cache directory for DeepSpeed Triton autotune, /home/alyssavance/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2024-06-21 08:52:55,564] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/home/alyssavance/DeepSpeed/deepspeed/runtime/zero/linear.py:49: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
def forward(ctx, input, weight, bias=None):
/home/alyssavance/DeepSpeed/deepspeed/runtime/zero/linear.py:67: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
def backward(ctx, grad_output):
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0+45fff310c8), only 1.0.0 is known to be compatible
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0+45fff310c8), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/alyssavance/miniforge3/envs/brr/lib/python3.10/site-packages/torch']
torch version .................... 2.4.0.dev20240609+cu124
deepspeed install path ........... ['/home/alyssavance/DeepSpeed/deepspeed']
deepspeed info ................... 0.14.3+fbdf0eaf, fbdf0eaf, HEAD
torch cuda version ............... 12.4
torch hip version ................ None
nvcc version ..................... 12.5
deepspeed wheel compiled w. ...... torch 2.4, cuda 12.4
shared memory (/dev/shm) size .... 1007.73 GB
System info (please complete the following information):
OS: Ubuntu 22.04
GPU count and types: Two machines with 8x H100s each
Interconnects (if applicable): Two machines connected with 3.2 Tbps Infiniband over Mellanox Ethernet (RoCE)
Python version: 3.10
Any other relevant info about your setup
Launcher context
I'm using the HF Accelerate launcher
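The training loop follows the standard Accelerate pattern and is started with `accelerate launch` on both machines. Below is a toy-sized sketch of the structure only (the real script fine-tunes Llama-3-70B-Instruct via HF Transformers, and DeepSpeed itself is enabled through the Accelerate config, not in code):

```python
# Toy-sized sketch of the Accelerate training pattern used here; the real script
# fine-tunes Llama-3-70B-Instruct, and DeepSpeed is enabled via `accelerate config`.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up the DeepSpeed plugin from the accelerate config

model = torch.nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loader = DataLoader(TensorDataset(torch.randn(64, 16), torch.randn(64, 16)), batch_size=4)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)  # dispatches to the DeepSpeed engine when DeepSpeed is enabled
    optimizer.step()
    optimizer.zero_grad()
```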
Docker context
N/A
Additional context
There's also stochastic NCCL deadlocking, which is really annoying but I haven't found the cause of it.