@juntang-zhuang Thanks for your answer. But how can I change from two GPUs to one? Do I just change "CUDA_VISIBLE_DEVICES=0,1" to "CUDA_VISIBLE_DEVICES=0"?
Yes, you will need to modify that. But there might be other problems, depending on which branch you are using:

1. For all branches, I'm not sure the synchronized batch norm can run normally on only 1 GPU; if not, you can replace every synchronized BN layer with a normal BN layer.
2. For the "citys" and "pascal" branches, the loss function is written specifically for the multi-GPU case, and I'm not sure it works on one GPU. If not, you need to call a normal loss function from official PyTorch instead.
3. For the "citys_lw" branch, things might be more complicated because of distributed training, which I'm not so familiar with. But since the model and loss are already written, you can write your own single-GPU training script.

A sketch of fixes (1) and (2) is shown below.
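For points (1) and (2), a minimal sketch of what the single-GPU changes could look like. The class matched on (`nn.SyncBatchNorm`) and the `ignore_index=255` value (the usual Cityscapes void label) are assumptions; if this repo defines its own synchronized BN class, match on that class instead:

```python
import torch.nn as nn

def convert_syncbn_to_bn(module):
    """Recursively replace SyncBatchNorm layers with plain BatchNorm2d.

    Generic sketch: if the repo uses a custom synchronized BN class,
    swap nn.SyncBatchNorm below for that class.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.SyncBatchNorm):
            bn = nn.BatchNorm2d(
                child.num_features,
                eps=child.eps,
                momentum=child.momentum,
                affine=child.affine,
                track_running_stats=child.track_running_stats,
            )
            # Carry over the learned statistics and affine parameters.
            bn.load_state_dict(child.state_dict())
            setattr(module, name, bn)
        else:
            convert_syncbn_to_bn(child)
    return module

# (2) Replace the multi-GPU loss with a standard PyTorch loss, e.g.
# cross-entropy that skips the void label in segmentation masks
# (255 is an assumption -- use whatever void label this repo uses).
criterion = nn.CrossEntropyLoss(ignore_index=255)
```

With those two changes in place, point (3) mostly reduces to writing a plain single-GPU training loop (forward pass, loss, backward, optimizer step) around the existing model and loss.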
OK, I understand. Thank you for the details!
In theory you can, but the batch size has to be reduced, in which case the training might not be optimal.
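If it helps, here is a hypothetical single-GPU launch sketch. The batch size and the linear learning-rate scaling are illustrative assumptions (a common heuristic), not values from this repo:

```python
import os

# Expose only one GPU; must be set before torch initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch  # noqa: E402

# Illustrative numbers: halve the batch that was previously split
# across two GPUs, and scale the learning rate proportionally
# (a common heuristic, not a rule from this repo).
two_gpu_batch_size = 16
batch_size = two_gpu_batch_size // 2
base_lr = 1e-2
lr = base_lr * batch_size / two_gpu_batch_size

print(f"batch_size={batch_size}, lr={lr}, gpus={torch.cuda.device_count()}")
```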