facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

fairseq fails with RuntimeError: _Map_base::at #176

Closed. antoajayraj closed this issue 6 years ago.

antoajayraj commented 6 years ago

Trying to run fairseq with the following command results in an error:

$ python train.py data-bin/iwslt14.tokenized.de-en --lr 0.25 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 --arch fconv_iwslt_de_en --save-dir checkpoints/fconv
| distributed init (rank 1): tcp://localhost:11807
| distributed init (rank 4): tcp://localhost:11807
Exception ignored in: <module 'threading' from '/opt/conda/envs/pytorch-py35/lib/python3.5/threading.py'>
[interleaved tracebacks from the spawned worker processes (multiprocessing spawn/pickle, train.py, multiprocessing_train.py, KeyboardInterrupt) omitted]
  File "/home/nimbix/fairseq/multiprocessing_train.py", line 82, in signal_handler
    raise Exception(msg)
Exception:

-- Tracebacks above this line can probably be ignored --

Traceback (most recent call last):
  File "/home/nimbix/fairseq/multiprocessing_train.py", line 45, in run
    args.distributed_rank = distributed_utils.distributed_init(args)
  File "/home/nimbix/fairseq/fairseq/distributed_utils.py", line 29, in distributed_init
    world_size=args.distributed_world_size, rank=args.distributed_rank)
  File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/distributed/__init__.py", line 46, in init_process_group
    group_name, rank)
RuntimeError: _Map_base::at

File "/opt/conda/envs/pytorch-py35/lib/python3.5/multiprocessing/queues.py", line 24, in run_name="__mp_main__") self = pickle.load(from_parent) self = pickle.load(from_parent) File "/opt/conda/envs/pytorch-py35/lib/python3.5/runpy.py", line 254, in run_path File "/opt/conda/envs/pytorch-py35/lib/python3.5/multiprocessing/queues.py", line 24, in File "/opt/conda/envs/pytorch-py35/lib/python3.5/multiprocessing/queues.py", line 24, in

myleott commented 6 years ago

What version of pytorch and fairseq are you using?

antoajayraj commented 6 years ago

pytorch 0.2 and fairseq (latest checkout)

myleott commented 6 years ago

We require pytorch >= 0.4.0; please update your pytorch installation and try again. Reopen the issue if it's still not working.
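For reference, a quick way to confirm the installed version from Python before retrying (a minimal sketch, nothing fairseq-specific):

# Print the installed PyTorch version and warn if it is older than the 0.4.0
# minimum mentioned above.
import torch

print(torch.__version__)
major, minor = (int(x) for x in torch.__version__.split('.')[:2])
if (major, minor) < (0, 4):
    print('PyTorch is too old for current fairseq; please upgrade to >= 0.4.0')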

antoajayraj commented 6 years ago

I updated to the latest pytorch and fairseq:

print(torch.__version__)
0.5.0a0+7ca8e2f

and the fairseq code does run. However, when I monitor GPU utilization (using nvidia-smi), I don't see any process running on the GPU. Any idea why this could happen?

$ CUDA_VISIBLE_DEVICES="0" python3 train.py data-bin/wmt14_en_de --lr 0.5 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --lr-scheduler fixed --force-anneal 50 --arch fconv_wmt_en_de --no-save --no-progress-bar --log-interval 10 --max-epoch 1 --max-update 10
Namespace(arch='fconv_wmt_en_de', clip_norm=0.1, criterion='label_smoothed_cross_entropy', curriculum=0, data='data-bin/wmt14_en_de', decoder_attention='True', decoder_embed_dim=768, decoder_embed_path=None, decoder_layers='[(512, 3)] 9 + [(1024, 3)] 4 + [(2048, 1)] 2', decoder_out_embed_dim=512, device_id=0, distributed_backend='nccl', distributed_init_method=None, distributed_port=-1, distributed_rank=0, distributed_world_size=1, dropout=0.2, encoder_embed_dim=768, encoder_embed_path=None, encoder_layers='[(512, 3)] 9 + [(1024, 3)] 4 + [(2048, 1)] 2', force_anneal=50, label_smoothing=0.1, log_format=None, log_interval=10, lr=[0.5], lr_scheduler='fixed', lr_shrink=0.1, max_epoch=1, max_sentences=None, max_sentences_valid=None, max_source_positions=1024, max_target_positions=1024, max_tokens=4000, max_update=10, min_lr=1e-05, momentum=0.99, no_epoch_checkpoints=False, no_progress_bar=True, no_save=True, optimizer='nag', restore_file='checkpoint_last.pt', sample_without_replacement=0, save_dir='checkpoints', save_interval=-1, seed=1, sentence_avg=False, share_input_output_embed=False, skip_invalid_size_inputs_valid_test=False, source_lang=None, target_lang=None, train_subset='train', valid_subset='valid', validate_interval=1, weight_decay=0.0)
| [en] dictionary: 40471 types
| [de] dictionary: 42715 types
| data-bin/wmt14_en_de train 3900144 examples
| data-bin/wmt14_en_de valid 39412 examples
| model fconv_wmt_en_de, criterion LabelSmoothedCrossEntropyCriterion
| num. model params: 213412278
| training on 1 GPUs
| max tokens per GPU = 4000 and max sentences per GPU = None
/usr/local/lib/python3.5/dist-packages/torch/autograd/function.py:41: UserWarning: mark_shared_storage is deprecated. Tensors with shared storages are automatically tracked. Note that calls to `set_()` are not tracked
  'mark_shared_storage is deprecated. '
/home/nimbix/fairseq/fairseq/trainer.py:193: UserWarning: torch.nn.utils.clip_grad_norm is now deprecated in favor of torch.nn.utils.clip_grad_norm_.
  grad_norm = utils.item(torch.nn.utils.clip_grad_norm(self.model.parameters(), self.args.clip_norm))
| epoch 001 | loss 15.266 | nll_loss 15.176 | ppl 37023.27 | wps 10149 | ups 2.4 | wpb 3936 | bsz 121 | num_updates 10 | lr 0.5 | gnorm 3.668 | clip 100% | oom 0 | sample_size 3936.1
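One quick check, independent of fairseq, is whether PyTorch in this environment can see a CUDA device at all under the same CUDA_VISIBLE_DEVICES setting (a minimal sketch, no fairseq code involved):

# Confirm that this PyTorch build has CUDA support and can see at least one device.
import torch

print('CUDA available:', torch.cuda.is_available())
print('device count:  ', torch.cuda.device_count())
if torch.cuda.is_available():
    dev = torch.cuda.current_device()
    print('current device:', dev, torch.cuda.get_device_name(dev))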

myleott commented 6 years ago

fairseq only supports GPU training, so it's definitely using GPUs. Note that fairseq only appears in nvidia-smi after training has begun (i.e., it is not resident during data loading). Please test with a small pytorch script and confirm that your setup is correct; if not, please open an issue with pytorch.
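As one example of the kind of small pytorch script suggested above (a minimal sketch with hypothetical sizes and loop count), something like this keeps the GPU busy for a few seconds so the process has time to show up in nvidia-smi:

# Run a few seconds of matrix multiplications on the GPU and check nvidia-smi
# in another terminal while it runs.
import torch

assert torch.cuda.is_available(), 'CUDA is not visible to PyTorch in this environment'

x = torch.randn(4096, 4096, device='cuda')
for _ in range(200):          # roughly a few seconds of work on most GPUs
    x = x @ x
    x = x / x.norm()          # renormalize so values stay bounded across iterations
torch.cuda.synchronize()      # wait for the queued GPU work to finish
print('done; the process should have been visible in nvidia-smi while this ran')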