huggingface / accelerate

🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
https://huggingface.co/docs/accelerate
Apache License 2.0

Using accelerate launch to initialize sagemaker job doesn't work properly with multiple GPUs #3148

Open BaldPulse opened 1 week ago

BaldPulse commented 1 week ago

System Info

- `Accelerate` version: 0.30.1
- Platform: Linux-6.8.0-1015-aws-x86_64-with-glibc2.31
- `accelerate` bash location: /opt/conda/bin/accelerate
- Python version: 3.11.9
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.3.0 (False)
- PyTorch XPU available: False
- PyTorch NPU available: False
- PyTorch MLU available: False
- System RAM: 30.98 GB
- `Accelerate` default config:
    Not found

Information

Tasks

Reproduction

Please note that the system info above does not reflect the actual environment accelerate runs in on SageMaker; it was generated inside an official SageMaker container.

To reproduce the bug:

  1. Create any training script that invokes accelerator.gather() (a minimal sketch is shown below)
  2. Configure accelerate to run on a SageMaker multi-GPU machine using accelerate config; use 209479262201.dkr.ecr.us-west-2.amazonaws.com/1xgpt-from-sagemaker:2.3.0 as your Docker image
  3. Create a training job with accelerate launch and run the training script
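
For reference, a minimal script of the kind step 1 describes might look like the sketch below. The script name, model, and data are placeholders rather than my actual code; the relevant part is the accelerator.gather_for_metrics() call, which routes through the same gather collective that fails.

```python
# train.py -- minimal sketch of a script that exercises the failing gather path.
# Model, data, and hyperparameters below are placeholders.
import torch
from accelerate import Accelerator


def main():
    accelerator = Accelerator()

    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    dataset = torch.utils.data.TensorDataset(
        torch.randn(64, 10), torch.randint(0, 2, (64,))
    )
    loader = torch.utils.data.DataLoader(dataset, batch_size=8)

    model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

    for inputs, labels in loader:
        outputs = model(inputs)
        loss = torch.nn.functional.cross_entropy(outputs, labels)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()

        # This call goes through accelerate's gather -> _gpu_gather ->
        # all_gather_into_tensor, which is where SMDDP raises the error
        # shown in the traceback.
        preds = accelerator.gather_for_metrics(outputs.argmax(dim=-1))
        if accelerator.is_main_process:
            print(preds.shape)


if __name__ == "__main__":
    main()
```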

Expected behavior

SageMaker returns an error along these lines:

File "/opt/conda/lib/python3.11/site-packages/accelerate/accelerator.py", line 2373, in gather_for_metrics
 data = self.gather(input_data)
 ^^^^^^^^^^^^^^^^^^^^^^^
 File "/opt/conda/lib/python3.11/site-packages/accelerate/accelerator.py", line 2329, in gather
 return gather(tensor)
 ^^^^^^^^^^^^^^
 File "/opt/conda/lib/python3.11/site-packages/accelerate/utils/operations.py", line 380, in wrapper
 return function(*args, **kwargs)
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/opt/conda/lib/python3.11/site-packages/accelerate/utils/operations.py", line 441, in gather
 return _gpu_gather(tensor)
 ^^^^^^^^^^^^^^^^^^^
 File "/opt/conda/lib/python3.11/site-packages/accelerate/utils/operations.py", line 360, in _gpu_gather
 return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/opt/conda/lib/python3.11/site-packages/accelerate/utils/operations.py", line 126, in recursively_apply
 return func(data, *args, **kwargs)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/opt/conda/lib/python3.11/site-packages/accelerate/utils/operations.py", line 350, in _gpu_gather_one
 gather_op(output_tensors, tensor)
 File "/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
 return func(*args, **kwargs)
 ^^^^^^^^^^^^^^^^^^^^^
 File "/opt/conda/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 2948, in all_gather_into_tensor
 work = group._allgather_base(output_tensor, input_tensor, opts)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 RuntimeError: SMDDP does not support: _allgather_base
BaldPulse commented 1 week ago

If accelerate launch is invoked inside of SageMaker instead of being used to create the SageMaker job, the script works fine. I suspect this is because MPI is not well supported by SageMaker, yet accelerate launch uses MPI.
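
For anyone hitting the same thing, one way to do that (a sketch only, not an accelerate-documented recipe; the entry-point name, the use of SM_NUM_GPUS, and the flag choices are assumptions) is to submit the SageMaker job yourself and make the entry point a thin wrapper that re-launches the real training script with accelerate launch inside the container:

```python
# entry.py -- hypothetical SageMaker entry point that invokes accelerate launch
# inside the training container, instead of letting accelerate launch create
# the SageMaker job from the outside.
import os
import subprocess
import sys


def main():
    # SageMaker exposes the instance's GPU count via the SM_NUM_GPUS env var.
    num_gpus = int(os.environ.get("SM_NUM_GPUS", "1"))

    cmd = ["accelerate", "launch", "--num_processes", str(num_gpus)]
    if num_gpus > 1:
        cmd.append("--multi_gpu")
    # Forward any hyperparameters SageMaker passed to this entry point.
    cmd += ["train.py"] + sys.argv[1:]

    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    main()
```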

muellerzr commented 6 days ago

Yes, I'd recommend invoking inside of SageMaker instead in this case. (Though MPI should only be run on CPU, not GPU.)

BaldPulse commented 1 day ago

> Yes, I'd recommend invoking inside of SageMaker instead in this case. (Though MPI should only be run on CPU, not GPU.)

Sorry if I wasn't clear in my original report. This is more of a complaint about the default behavior of accelerate launch when configured to run on SageMaker. When I followed this guide to configure and run accelerate with SageMaker, it defaulted to MPI, which doesn't work with distributed training on SageMaker. accelerate launch should default to NCCL when configured to run distributed training on SageMaker.
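
For reference, this is roughly what submitting the job directly with the SageMaker Python SDK and a torchrun/NCCL-backed launch might look like. It is only a sketch, not a verified fix: the role, instance type, and entry point are placeholders, and the exact distribution key depends on the SageMaker SDK and framework version; the image URI is the one from the repro steps above.

```python
# submit_job.py -- sketch of creating the SageMaker training job yourself and
# requesting a torchrun/NCCL-backed launch instead of MPI. Role, instance type,
# and entry point are placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",
    source_dir=".",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    image_uri="209479262201.dkr.ecr.us-west-2.amazonaws.com/1xgpt-from-sagemaker:2.3.0",
    instance_type="ml.g5.12xlarge",
    instance_count=1,
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE, which Accelerator picks up.
    # The key name ("torch_distributed" here) varies across SDK/framework versions.
    distribution={"torch_distributed": {"enabled": True}},
)
estimator.fit()
```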