wenet-e2e / wespeaker

Research and Production Oriented Speaker Verification, Recognition and Diarization Toolkit
Apache License 2.0

[bug] Multiple jobs running on one node #171

Closed ductuantruong closed 1 year ago

ductuantruong commented 1 year ago

Hi,

Thank you for developing this amazing toolkit. I am currently running my experiments with it. However, when I run multiple experiments on one computing node, I notice that as soon as one job finishes, the remaining running jobs fail with the following errors:

WARNING:torch.distributed.elastic.rendezvous.dynamic_rendezvous:The node 'node04.localdomain_3882442_0' has failed to shutdown the rendezvous '9987b766-350c-48bf-aa45-cfc2da182f33' due to an error of type RendezvousConnectionError.
Traceback (most recent call last):
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113, in _call_store
    return getattr(self._store, store_op)(*args, **kwargs)
RuntimeError: Broken pipe

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/tuantruong001/miniconda3/envs/wespeaker/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==1.12.1', 'console_scripts', 'torchrun')())
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
    return f(*args, **kwargs)
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/run.py", line 761, in main
    run(args)
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/run.py", line 752, in run
    elastic_launch(
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 236, in launch_agent
    result = agent.run()
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 709, in run
    result = self._invoke_run(role)
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 881, in _invoke_run
    num_nodes_waiting = rdzv_handler.num_nodes_waiting()
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1079, in num_nodes_waiting
    self._state_holder.sync()
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 408, in sync
    get_response = self._backend.get_state()
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 73, in get_state
    base64_state: bytes = self._call_store("get", self._key)
  File "/home/tuantruong001/miniconda3/envs/wespeaker/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 115, in _call_store
    raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.

Have you encountered this issue before? If so, could you guide me on how to fix it? Once again, thank you for sharing this toolkit and for your help.

cdliang11 commented 1 year ago

Hi, please try launching with an explicit master address and port, for example:

torchrun --master_addr=localhost --master_port=16888 --nnodes=1 --nproc_per_node=$num_gpus \

Ref: https://yzsxeajuhm.feishu.cn/docx/JNmddhTz0oDA8zxgDN1cJeqnnQb
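As I understand the suggestion, the point is to give each concurrent job on the node its own --master_port, so that the first job to finish does not tear down the rendezvous store the other jobs are still connected to. A minimal sketch for two concurrent jobs (the script path, config names, and port numbers are illustrative, not from the reply above):

# job A, using port 16888
torchrun --master_addr=localhost --master_port=16888 --nnodes=1 --nproc_per_node=$num_gpus \
    wespeaker/bin/train.py --config conf/job_a.yaml &

# job B, using a different port so it does not share job A's rendezvous store
torchrun --master_addr=localhost --master_port=16889 --nnodes=1 --nproc_per_node=$num_gpus \
    wespeaker/bin/train.py --config conf/job_b.yaml &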

ductuantruong commented 1 year ago

Thank you for your quick response. I will try it and am closing this issue.