huggingface / accelerate

🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
https://huggingface.co/docs/accelerate
Apache License 2.0

Incorrect Argument Default for DeepSpeed Multi-node Training #2869

Closed jomayeri closed 2 months ago

jomayeri commented 3 months ago

System Info

pip install accelerate.

Information

Tasks

Reproduction

Run accelerate for multi-node training.

Expected behavior

Accelerate sets the default DeepSpeed hostfile to None, which overrides DeepSpeed's own default of /job/hostfile. Overriding this default causes issues for users attempting multi-node training. Please change the default to match DeepSpeed's.
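A minimal sketch of the requested behavior, assuming a hypothetical helper (`resolve_hostfile` is not an actual Accelerate or DeepSpeed API): when the user supplies no hostfile, fall back to DeepSpeed's documented default path instead of forwarding None.

```python
# DeepSpeed's documented default hostfile location for multi-node launches.
DEEPSPEED_DEFAULT_HOSTFILE = "/job/hostfile"

def resolve_hostfile(hostfile=None):
    """Return the hostfile path to hand to DeepSpeed.

    Hypothetical helper: passing None through would override DeepSpeed's
    own default resolution, so fall back to /job/hostfile when the user
    did not supply an explicit path.
    """
    return hostfile if hostfile is not None else DEEPSPEED_DEFAULT_HOSTFILE

print(resolve_hostfile())                  # /job/hostfile
print(resolve_hostfile("/etc/my_hosts"))   # /etc/my_hosts
```

This mirrors the fix the issue asks for: the user-facing default stays optional, but the value actually passed downstream matches DeepSpeed's expectation.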

SunMarc commented 3 months ago

Hi @jomayeri, thanks for reporting! It makes sense to switch to DeepSpeed's default. Would you like to open the PR? Otherwise, I can do it! cc @muellerzr

github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.