Hi,
Thanks for the wonderful work.
I ran into an error that I think is caused by distributed training. I ran the code on multiple GPUs and got the error below:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, .....
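If I read the message right, the suggested change would look something like the sketch below. This is just my guess at what the flag is supposed to do; the model and process-group setup here are placeholders, not the actual code in train.py:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# rough sketch, assuming the script is launched with torchrun;
# the model below is a placeholder, not the repo's actual model
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = nn.Linear(10, 10).cuda(local_rank)   # placeholder model
model = DDP(
    model,
    device_ids=[local_rank],
    find_unused_parameters=True,             # the flag the error message asks for
)
```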
In train.py I can see the multiprocessing code, but I don't know where such a change would go, or whether I can instead force the code to run on only one GPU.
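For the single-GPU question, the only workaround I could think of is hiding all but one device before torch is imported, something like the snippet below. This is just an idea on my side; I don't know how train.py actually spawns its workers, so I'm not sure it is enough here:

```python
import os

# hide every GPU except the first one before torch initializes CUDA,
# so the script only ever sees a single device (my workaround guess)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # should now report 1
```

If the training is launched with torchrun, maybe passing `--nproc_per_node=1` would do the same thing, but I haven't checked how the processes are created.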
Thanks for any help you can provide.