wns823 / NMT_SSP

NMT with ssp
MIT License

Error raised when using multiple GPUs #1

Open songmzhang opened 3 years ago

songmzhang commented 3 years ago

Hi there, a small bug occurred when I tried to use two or more GPUs to train the model with your code. To rule out an environment issue, we set up the same environment you suggested, but the bug still exists.

Here's the error information:

```
2021-08-23 11:58:25 | INFO | fairseq.trainer | begin training epoch 1
Traceback (most recent call last):
  File "/home/zhangsongming/anaconda3/envs/py37/bin/fairseq-train", line 33, in <module>
    sys.exit(load_entry_point('fairseq', 'console_scripts', 'fairseq-train')())
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq_cli/train.py", line 441, in cli_main
    distributed_utils.call_main(cfg, main)
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq/distributed_utils.py", line 320, in call_main
    cfg.distributed_training.distributed_world_size,
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
    while not context.join():
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 119, in join
    raise Exception(msg)
Exception:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
    fn(i, *args)
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq/distributed_utils.py", line 302, in distributed_main
    main(cfg, **kwargs)
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq_cli/train.py", line 137, in main
    valid_losses, should_stop = train(cfg, trainer, task, epoch_itr)
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/contextlib.py", line 74, in inner
    return func(*args, **kwds)
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq_cli/train.py", line 233, in train
    for i, samples in enumerate(progress):
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq/logging/progress_bar.py", line 186, in __iter__
    for i, obj in enumerate(self.iterable, start=self.n):
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq/data/iterators.py", line 59, in __iter__
    for x in self.iterable:
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq/data/iterators.py", line 518, in _chunk_iterator
    for x in itr:
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq/data/iterators.py", line 59, in __iter__
    for x in self.iterable:
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq/data/iterators.py", line 640, in __next__
    raise item
  File "/data/zhangsongming/project-term_nmt/baselines/ssp/NMT_SSP-main/fairseq/data/iterators.py", line 571, in run
    for item in self._source:
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 279, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 719, in __init__
    w.start()
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/multiprocessing/process.py", line 112, in start
    self._popen = self._Popen(self)
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/home/zhangsongming/anaconda3/envs/py37/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle dict_values objects
```

Have you run into this problem before? Or have you tried training your model with multiple GPUs?
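(Editorial note, not part of the original report: the final TypeError is a general Python pickling limitation rather than anything fairseq-specific. Spawned DataLoader workers receive their arguments via pickle, and dict view objects such as `dict.values()` cannot be pickled. A minimal sketch reproducing the error, assuming nothing about the NMT_SSP code itself:)

```python
import pickle

d = {"a": 1, "b": 2}

# A dict_values view cannot be serialized by pickle, which is what
# multiprocessing uses to ship objects to spawned worker processes.
try:
    pickle.dumps(d.values())
except TypeError as e:
    print(e)  # e.g. "can't pickle dict_values objects" on Python 3.7

# Materializing the view into a list makes it picklable.
pickle.dumps(list(d.values()))
```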

HuihuiChyan commented 3 years ago

I encountered the same problem. I hope it can be addressed soon.

wns823 commented 3 years ago

Thank you for raising this. We did not use a multi-GPU setup; all of our experiments were run on a single GPU (RTX 3090 Ti). As a result, our code does not cover the distributed-training functionality in the fairseq library.

Due to time constraints, I cannot modify the code right now, but we will try to add this functionality later. In the meantime, you may be able to change some parts of the code yourself to enable multi-GPU training; one possible direction is sketched below.
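(Editorial sketch with hypothetical names; the actual class, attribute, and file in NMT_SSP may differ. The usual fix for "can't pickle dict_values objects" is to find the dataset or task object that stores a `dict.values()` view and convert it to a list, so the object can be pickled when torch.multiprocessing spawns worker processes.)

```python
# Hypothetical example only; locate the equivalent spot in the NMT_SSP code.
class ExampleSspDataset:
    def __init__(self, term_dict):
        # Not picklable: a dict_values view keeps a reference to the live dict,
        # and pickle refuses to serialize it, which breaks spawned workers.
        # self.terms = term_dict.values()

        # Picklable: materialize the view into a plain list instead.
        self.terms = list(term_dict.values())
```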