Open roddar92 opened 2 years ago

Dear colleagues, when I try to generate embeddings, I get an error:

Do you know how to fix it?

Hi @roddar92, this looks like a bug in the code. You're running on a single GPU, so distributed training is never initialized, which in turn triggers this error. Could you try adding a check `if torch.distributed.is_initialized():` before this line? https://github.com/facebookresearch/dpr-scale/blob/e8eb457edb0f0781f4bb5ebf5a157b7df23d952a/dpr_scale/task/dpr_eval_task.py#L49
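The suggested guard could look like the sketch below. Note this is only an illustration: `dist.barrier()` stands in for whatever collective call sits on the referenced line of `dpr_eval_task.py`, and the helper name `maybe_barrier` is made up here.

```python
import torch.distributed as dist

def maybe_barrier():
    # Guard distributed calls so single-GPU runs (where the process group
    # was never initialized) don't crash.
    # dist.is_available() also covers builds compiled without distributed support.
    if dist.is_available() and dist.is_initialized():
        dist.barrier()  # placeholder for the actual distributed call at L49
    # On a single GPU, the collective call is simply skipped.

maybe_barrier()  # safe whether or not torch.distributed is initialized
```

The same pattern applies to any other `torch.distributed` collective (`all_gather`, `broadcast`, etc.) in the evaluation path.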