wuhuikai / FastFCN

FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation.
http://wuhuikai.me/FastFCNProject

performance on deeplab_jpu #18

Closed alphaccw closed 5 years ago

alphaccw commented 5 years ago

Hi, thank you for the awesome code. I tested DeepLab + JPU without changing anything, on 4x GeForce 1080, CUDA 9.0, PyTorch 1.0.0.

train

CUDA_VISIBLE_DEVICES=4,5,6,7 python train.py --dataset pcontext --model deeplab --jpu --aux --backbone resnet50 --checkname deeplab_res50_pcontext_deeplabv3

test

CUDA_VISIBLE_DEVICES=4,5,6,7 python test.py --dataset pcontext --model deeplab --jpu --aux --backbone resnet50 --resume ./runs/pcontext/deeplab/deeplab_res50_pcontext_deeplabv3/model_best.pth.tar --checkname deeplab_res50_pcontext_deeplabv3 --split val --mode testval

In my case, model_best.pth.tar is identical to checkpoint.pth.tar. The performance I get is pixAcc 0.7868, mIoU 0.4904. Compared to Table 1, which reports 50.07 mIoU, that is about a 1-point drop.
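For reference, pixAcc and mIoU in this kind of evaluation are usually computed from a per-class confusion matrix accumulated over the validation set. The sketch below is a minimal, generic illustration (not the repo's exact evaluation code); it assumes pred and label are flat integer arrays of the same length and that num_classes is known.

import numpy as np

def confusion_matrix(pred, label, num_classes):
    # Accumulate a num_classes x num_classes histogram (rows = ground truth,
    # columns = prediction), ignoring pixels with labels outside [0, num_classes).
    mask = (label >= 0) & (label < num_classes)
    return np.bincount(
        num_classes * label[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def scores(hist):
    # pixAcc: correctly labeled pixels over all labeled pixels.
    pix_acc = np.diag(hist).sum() / hist.sum()
    # Per-class IoU: TP / (TP + FP + FN); mIoU is the mean over classes,
    # skipping classes absent from both prediction and ground truth.
    iou = np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))
    return pix_acc, np.nanmean(iou)

Measured this way, 0.4904 mIoU versus the reported 50.07 is a gap of roughly one mIoU point.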

Am I missing something, or is this normal?

Thank you

wuhuikai commented 5 years ago

The training and testing process you used is the same as ours.