juntang-zhuang / ShelfNet

implementation for paper "ShelfNet for fast semantic segmentation"
MIT License

Can I run the code in one GPU? #14

Closed yeRuiHuan closed 4 years ago

juntang-zhuang commented 4 years ago

In theory, you can, but the batch size has to be reduced. In that case, training might not be optimal.

yeRuiHuan commented 4 years ago

@juntang-zhuang Thanks for your answer. But how do I change from two GPUs to one? Should I change "CUDA_VISIBLE_DEVICES=0,1" to "CUDA_VISIBLE_DEVICES=0"?

juntang-zhuang commented 4 years ago

Yes, you will need to modify that. But there may be problems, depending on which branch you are using:

1. For all branches, I'm not sure whether the synchronized batch norm runs correctly on only one GPU; if not, you can replace every synchronized BN layer with a normal BN layer.
2. For the "citys" and "pascal" branches, the loss function is written specifically for the multi-GPU case, and I'm not sure it works on one GPU. If not, you need to call a standard loss function from official PyTorch instead.
3. For the "citys_lw" branch, things may be complicated by distributed training, which I'm less familiar with. But since the model and loss are already written, you can write your own training script.
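For points (1) and (2), a minimal sketch of the BN swap is below. `replace_syncbn` is a hypothetical helper, not part of the ShelfNet codebase; it uses `torch.nn.SyncBatchNorm` as a stand-in for whichever synchronized BN class the branch actually imports, and copies the learned parameters and running statistics so a pretrained checkpoint still works.

```python
import torch
import torch.nn as nn


def replace_syncbn(module, sync_bn_type=nn.SyncBatchNorm):
    """Recursively replace synchronized BN layers with plain nn.BatchNorm2d,
    copying the affine parameters and running statistics over."""
    for name, child in module.named_children():
        if isinstance(child, sync_bn_type):
            bn = nn.BatchNorm2d(
                child.num_features,
                eps=child.eps,
                momentum=child.momentum,
                affine=child.affine,
                track_running_stats=True,
            )
            # SyncBatchNorm and BatchNorm2d share the same state_dict keys
            # (weight, bias, running_mean, running_var, num_batches_tracked).
            bn.load_state_dict(child.state_dict())
            setattr(module, name, bn)
        else:
            replace_syncbn(child, sync_bn_type)
    return module


# Hypothetical usage on a small model standing in for ShelfNet:
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.SyncBatchNorm(8))
replace_syncbn(model)
```

For point (2), the usual single-GPU replacement would be a standard criterion such as `nn.CrossEntropyLoss(ignore_index=...)`; the correct ignore index depends on the dataset's label convention, so check the branch's loss code before swapping.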

yeRuiHuan commented 4 years ago

OK, I understand. Thank you for the details!