Closed ahong007007 closed 3 years ago
When I use 1 GPU (V100) with "python projects/UniDet/train_net.py --config-file projects/UniDet/configs/Partitioned_COI_R50_2x.yaml --num-gpus 1" and batch size 16, there is no problem.
When I use two GPUs (2x V100), an error occurs:
python projects/UniDet/train_net.py --config-file projects/UniDet/configs/Partitioned_COI_R50_2x.yaml --num-gpus 2, batch size 32
How can I solve this problem? Thank you!
Any update on this? I am also facing the same problem.
Changing "values = torch.stack(values, dim=0)" to "values = torch.stack(values, dim=0).float()" has fixed the issue.
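A minimal sketch of why that one-line patch helps, assuming `values` is the list of scalar metric tensors gathered before the cross-GPU reduction (as in detectron2-style metric aggregation; the variable names here mirror the patched line, everything else is illustrative). If every entry happens to be an integer tensor, the stacked result is int64 rather than float32, and the subsequent all-reduce/averaging across GPUs can fail or truncate. Appending `.float()` normalizes the dtype before the collective op:

```python
import torch

# Illustrative metric values: integer counts, not float losses.
values = [torch.tensor(2), torch.tensor(3)]

# Original line: dtype follows the inputs (int64 here), which the
# multi-GPU reduction path may not accept.
stacked_raw = torch.stack(values, dim=0)

# Patched line: force float32 so the all-reduce and later averaging
# behave the same regardless of the inputs' dtypes.
stacked = torch.stack(values, dim=0).float()

print(stacked_raw.dtype)  # torch.int64
print(stacked.dtype)      # torch.float32
```

With a single GPU the reduction is skipped, which would explain why the 1-GPU run works while the 2-GPU run crashes.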