Open grossmanm opened 1 year ago
This issue has already been resolved.
Hello, I encountered the same error as you did and I was wondering if you would be so kind as to share the method you used to solve it. I would greatly appreciate it. Thank you very much for your time and help.
This issue has already been resolved.
Can you share the solution? I am running into this when training on GPU (the error does not occur on CPU).
Hi, I encountered the same error as you did. Is anyone able to share the solution? Thanks!
Hey, after some trial and error, I think you are trying to run the single-node version (without torch.distributed) on a multi-GPU setup. If that is the case, the following solution will let you run the code on just one of those GPUs.
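For anyone hitting this later: one generic way to pin a single-node script to one GPU is the CUDA_VISIBLE_DEVICES environment variable. This is a standard CUDA/PyTorch mechanism, not necessarily the exact change the poster above made, so treat it as a sketch:

```shell
# Expose only one GPU (index 0 here; pick any index) to the process.
# PyTorch reads CUDA_VISIBLE_DEVICES at CUDA init time and will then
# see exactly one device.
export CUDA_VISIBLE_DEVICES=0

# Sanity check: child processes inherit the restriction.
python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'

# Then launch the single-node trainer exactly as in the README, e.g.:
# python train_dense_encoder.py train=biencoder_local train_datasets=[nq_train] ...
```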
In case you would like to run using all GPUs, the highlighted addition in 'train_dense_encoder.py' might solve the issue. Run it as follows:

python -m torch.distributed.launch --nproc_per_node=2 train_dense_encoder.py \
    train=biencoder_nq \
    train_datasets=[nq_train] \
    dev_datasets=[nq_dev] \
    output_dir=outputs/
Hi, I'm following the instructions in the README and ran

python train_dense_encoder.py \
    train_datasets=[nq_train] \
    dev_datasets=[nq_dev] \
    train=biencoder_local \
    output_dir={path to checkpoints dir}
after installing the nq_train and nq_dev datasets. However, whenever I run this I get an error from PyTorch:
torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() missing 6 required positional arguments: 'question_ids', 'question_segments', 'question_attn_mask', 'context_ids', 'ctx_segments', and 'ctx_attn_mask'
I'm not sure what could be causing this.
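For context on what the traceback means: the biencoder's forward expects six tensor arguments, and this TypeError fires when forward is invoked with none of them, which is what happens when the single-process calling convention meets the multi-GPU code path. A minimal stand-alone sketch reproduces the same error (BiEncoderSketch is a hypothetical stand-in, not the actual DPR model; nn.Module.__call__ ultimately dispatches to forward, so calling the real model with no tensors fails the same way):

```python
class BiEncoderSketch:
    """Stand-in with the same forward signature as DPR's BiEncoder."""

    def forward(self, question_ids, question_segments, question_attn_mask,
                context_ids, ctx_segments, ctx_attn_mask):
        # A real biencoder would encode questions and contexts here.
        return question_ids, context_ids

model = BiEncoderSketch()
try:
    # Mirrors the failing code path: forward called with no inputs.
    model.forward()
except TypeError as e:
    print(e)  # mentions "missing 6 required positional arguments"
```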