Thanks for your work!
I ran into a problem while running the ShelfNetlw code.
My environment is as follows:
torch 1.0.0, Python 3.6
When I run the command
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py
the error is:
TypeError: __init__() got an unexpected keyword argument 'find_unused_parameters'
So I tried deleting find_unused_parameters=True at line 76 of train.py.
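For context, the call at that line presumably looks something like this (a reconstruction based on the error message, not quoted from the repo; net and args.local_rank are names I'm assuming from typical torch.distributed.launch scripts):

import torch.nn as nn

# Assumed shape of the DDP call at train.py line 76:
net = nn.parallel.DistributedDataParallel(
    net,
    device_ids=[args.local_rank],
    output_device=args.local_rank,
    find_unused_parameters=True,  # this kwarg does not exist in torch 1.0.0
)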
But after that I encountered another problem, as shown:
What confuses me is that DistributedDataParallel in PyTorch 1.0.0 does not have the
find_unused_parameters parameter, yet removing find_unused_parameters causes trouble.
Can someone tell me how to solve this? Preferably without changing the PyTorch version.
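One workaround I'm considering is to pass the kwarg only when the installed DistributedDataParallel actually accepts it. A minimal sketch, assuming the same net and args.local_rank names as above (I haven't verified that training behaves correctly without unused-parameter detection on 1.0.0):

import inspect
import torch.nn as nn

# Only pass find_unused_parameters when this torch version supports it
# (the kwarg was added after torch 1.0.0).
ddp_kwargs = dict(device_ids=[args.local_rank],
                  output_device=args.local_rank)
sig = inspect.signature(nn.parallel.DistributedDataParallel.__init__)
if 'find_unused_parameters' in sig.parameters:
    ddp_kwargs['find_unused_parameters'] = True
net = nn.parallel.DistributedDataParallel(net, **ddp_kwargs)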