huggingface / accelerate

🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
https://huggingface.co/docs/accelerate
Apache License 2.0

Passing 0 for main_process_port for accelerate does not work #2955

Closed · jayxsinha closed this issue 1 month ago

jayxsinha commented 3 months ago

System Info

accelerate Version: 0.30.1
torch Version: 2.3.0
transformers Version: 4.41.2
GPU: 2 × A100 80GB SXM

This happens on a single node; both GPUs are on the same machine.

Reproduction

Please review the attached sweep YAML and Accelerate config YAML for the details.

Expected behavior

- accelerate_2_bf16.txt - Accelerate config file
- slurm_23908547-4.txt - log from the job
- sweep copy.txt - sweep YAML
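
The attached config itself isn't inlined in this issue; for context, the relevant knob in an accelerate config YAML looks roughly like this (the values here are illustrative, not taken from the attachment):

```yaml
# Illustrative excerpt of an accelerate config YAML -- values are assumptions,
# not copied from the attached accelerate_2_bf16.txt.
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_machines: 1
num_processes: 2          # one process per A100
mixed_precision: bf16
main_process_ip: 127.0.0.1
main_process_port: 0      # 0 = "use the next open port", per the error message
```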

As pointed out in the exception:

```
ConnectionError: Tried to launch distributed communication on port 29144, but another process is utilizing it. Please specify a different port (such as using the --main_process_port flag or specifying a different main_process_port in your config file) and rerun your script. To automatically use the next open port (on a single node), you can set this to 0.
```

I set main_process_port to 0, but it fails with this error:

```
    tcp_store = TCPStore(hostname, port, world_size, False, timeout)
torch.distributed.DistNetworkError: The client socket has timed out after 600s while trying to connect to (127.0.0.1, 0).
```
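
A workaround for the single-node case, rather than relying on port 0 resolving inside the launcher: reserve a free port yourself and pass the concrete number to accelerate launch. The helper below is a sketch (the helper itself and the script name train.py are mine, not part of accelerate), and it has a small race window between releasing the port and accelerate binding it:

```python
# free_port_launch.py - hypothetical helper, not part of accelerate.
import socket
import subprocess
import sys

def find_free_port() -> int:
    """Ask the OS for a free ephemeral port by binding to port 0, then release it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

if __name__ == "__main__":
    port = find_free_port()
    # Pass the concrete port to the launcher; "train.py" is a placeholder script.
    subprocess.run(
        ["accelerate", "launch", f"--main_process_port={port}", "train.py", *sys.argv[1:]],
        check=True,
    )
```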
github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

abdelkareemkobo commented 1 week ago

@jayxsinha I have the same error:

```
ConnectionError: Tried to launch distributed communication on port 29500, but another process is utilizing it. Please specify a different port (such as using the --main_process_port flag or specifying a different main_process_port in your config file) and rerun your script. To automatically use the next open port (on a single node), you can set this to 0.
```

when trying to run:

 accelerate launch .\acc_torch\multi_gpu_accelerate.py --main_process_port 29600

I also tried --main_process_port=0 and --main_process_port=29000, but I get the same error.
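
One likely culprit in the command above: accelerate launch treats everything after the script path as arguments for the script itself, so --main_process_port 29600 never reaches the launcher. Moving the flag before the script path should make it take effect:

```
accelerate launch --main_process_port 29600 .\acc_torch\multi_gpu_accelerate.py
```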