facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License
30.48k stars 6.41k forks

Hydra can not recognize "--local_rank=0" argument #4164

Open flycser opened 2 years ago

flycser commented 2 years ago

🐛 Bug

When I ran a model in a distributed setup (2 nodes, each with 2 GPUs) via hydra_train.py, Hydra could not accept arguments starting with "--". torch.distributed.launch passes an argument of the form "--local_rank=0" to each worker, which raises an "unrecognized argument" error. When I ran the model directly from the command line it worked, because the argparse-based entry point does recognize arguments starting with "--".

To Reproduce

Steps to reproduce the behavior (always include the command you ran):

  1. Run cmd `python -m torch.distributed.launch --nproc_per_node 2 --nnodes 2 --node_rank 0 --master_addr 'xxxxxx' --master_port 12345 fairseq_cli/hydra_train.py xxxxxxxxxxxxxx`
  2. See error: "Unrecognized argument --local_rank=0" "Unrecognized argument --local_rank=1"
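The clash comes from how the two argument parsers disagree: Hydra expects every command-line token to be a `key=value` config override, while `torch.distributed.launch` (without `--use_env`) appends `--local_rank=N` to each worker's argv. A toy sketch of that override-style parsing (a hypothetical simplification, not Hydra's real grammar) shows the same failure mode:

```python
def parse_overrides(argv):
    """Toy parser mimicking Hydra's override style (hypothetical
    simplification): every argument must be key=value, so any
    "--" flag injected by the launcher is rejected."""
    overrides = {}
    for arg in argv:
        if arg.startswith("--"):
            raise SystemExit(f"Unrecognized argument {arg}")
        key, _, value = arg.partition("=")
        overrides[key] = value
    return overrides

# torch.distributed.launch (without --use_env) injects --local_rank=N:
try:
    parse_overrides(["task.data=/some/path", "--local_rank=0"])
except SystemExit as e:
    print(e)  # Unrecognized argument --local_rank=0
```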

Code sample

Expected behavior

Environment

Additional context

Dawn-970 commented 2 years ago

Hi, have you solved this problem yet? I ran into the same one.

Dawn-970 commented 2 years ago

Update the torch version and try this command: `torchrun --nproc_per_node 2 --nnodes 2 --node_rank 0 --master_addr 'xxxxxx' --master_port 12345 fairseq_cli/hydra_train.py xxxxxxxxxxxxxx`
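torchrun avoids the error because, unlike the legacy `torch.distributed.launch` default, it communicates the rank through the `LOCAL_RANK` environment variable rather than an extra `--local_rank=N` argument, so Hydra's argv never contains a `--` flag. A minimal way to read it inside the training script (hypothetical helper name):

```python
import os

def local_rank() -> int:
    """Read the rank exported by torchrun via the LOCAL_RANK
    environment variable; default to 0 for single-process runs."""
    return int(os.environ.get("LOCAL_RANK", "0"))
```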

Rongjiehuang commented 2 years ago

@Dawn-970 Hi, I've run into the same issue. Has it been solved?

xiyewang2 commented 2 years ago

It's so difficult!

flycser commented 2 years ago

> @Dawn-970 Hi, I've run into the same issue. Has it been solved?

Use `python -m torch.distributed.launch --use_env` and set `cfg.distributed_training.device_id` from `os.environ['LOCAL_RANK']`.
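The workaround above, sketched (the `cfg` object here is a stand-in namespace, not the real fairseq config, which in hydra_train.py comes from Hydra itself):

```python
import os
from types import SimpleNamespace

def apply_local_rank(cfg):
    """Copy LOCAL_RANK (exported by `torch.distributed.launch --use_env`
    or by torchrun) into the fairseq distributed-training config, so
    no --local_rank flag ever reaches Hydra's argument parsing."""
    cfg.distributed_training.device_id = int(os.environ.get("LOCAL_RANK", "0"))
    return cfg

# Stand-in config object for illustration only:
cfg = SimpleNamespace(distributed_training=SimpleNamespace(device_id=None))
apply_local_rank(cfg)
```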