Tramac / awesome-semantic-segmentation-pytorch

Semantic Segmentation on PyTorch (including FCN, PSPNet, DeepLabv3, DeepLabv3+, DANet, DenseASPP, BiSeNet, EncNet, DUNet, ICNet, ENet, OCNet, CCNet, PSANet, CGNet, ESPNet, LEDNet, DFANet)
Apache License 2.0

Question #34

Open HuaZheLei opened 5 years ago

HuaZheLei commented 5 years ago

Thanks for your nice work. I cannot figure out 'MixSoftmaxCrossEntropyLoss' in your code, which is the default loss function. Could you explain it to me?

Tramac commented 5 years ago

Hi, the default loss function is cross-entropy loss. MixSoftmaxCrossEntropyLoss means auxiliary loss (deep supervision) + cross-entropy loss.
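
A minimal sketch of that idea, assuming the model returns a tuple of (main prediction, auxiliary predictions) and that an aux_weight factor scales the auxiliary term; the class below is illustrative and may differ from the repository's exact implementation:

import torch.nn as nn

class MixSoftmaxCrossEntropyLoss(nn.CrossEntropyLoss):
    """Cross-entropy on the main prediction plus a weighted
    cross-entropy term for each auxiliary (deeply supervised) head."""

    def __init__(self, aux_weight=0.4, ignore_index=-1):
        super().__init__(ignore_index=ignore_index)
        self.aux_weight = aux_weight

    def forward(self, outputs, target):
        # outputs is a tuple: (main_pred, aux_pred_1, aux_pred_2, ...)
        loss = super().forward(outputs[0], target)
        for aux_pred in outputs[1:]:
            loss = loss + self.aux_weight * super().forward(aux_pred, target)
        return loss

The auxiliary heads receive their own supervision signal during training, which is what "deep supervision" refers to.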

HuaZheLei commented 5 years ago

Thanks for your reply. When I use multiple GPUs to evaluate my trained model, I get an error:

Traceback (most recent call last):
  File "eval.py", line 108, in <module>
    evaluator = Evaluator(args)
  File "eval.py", line 49, in __init__
    self.model = self.model.module
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 539, in __getattr__
    type(self).__name__, name))
AttributeError: 'FCN32s' object has no attribute 'module'

My command is:

export NGPUS=4
python -m torch.distributed.launch --nproc_per_node=$NGPUS eval.py --model fcn32s --backbone vgg16 --dataset coco_voc
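
The traceback shows that eval.py accesses self.model.module unconditionally, but the .module attribute only exists when the model has been wrapped in DataParallel or DistributedDataParallel. A minimal guard (a sketch of one possible fix, not necessarily the repository's actual one) would be:

# Only unwrap when the model was actually wrapped by
# DataParallel / DistributedDataParallel (sketch, assumed fix).
if hasattr(self.model, 'module'):
    self.model = self.model.module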

roclark commented 5 years ago

Hey @HuaZheLei, I'm running into this issue as well (for ENet specifically, but the same principle applies). Did you find a resolution?

roclark commented 5 years ago

Hello again @HuaZheLei, not sure if you are still looking at this, but I just created a pull request (#50) that fully supports multi-GPU evaluation. I tested with the following command:

export NGPUS=8
python -m torch.distributed.launch --nproc_per_node=$NGPUS eval.py --model enet --dataset citys

Feel free to give it a shot and let me know if it solves your issue!