@perrying Hi, have you achieved the results shown in the paper yet?
I haven't reproduced it yet.
@perrying Thank you for your reply! I was interested in this work and planned to re-implement it, but now it seems that my plan should be postponed...
@seekFire @perrying We will release the trained model. Also, we will release the better models in another repo. https://github.com/lxtGH/SFSegNets
@lxtGH Thank you for your response, we are looking forward to it!
@donnyyou Please provide the trained ckpt for reproducing the results.
You could email your training logs to "youansheng@pku.edu.cn", and I will check them for you. @perrying @seekFire
The logs you emailed me seem right for res101, but wrong for res18. The pretrained res18 model is missing for initializing the weights. You could download the deepbase res18 pretrained model to fix this.
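For reference, that initialization usually looks roughly like this (a minimal sketch assuming a plain PyTorch state_dict; torchcv does this internally when the pretrained path is configured):

```python
import torch
from torch import nn

def init_from_pretrained(model: nn.Module, ckpt_path: str) -> None:
    """Load ImageNet-pretrained backbone weights into a segmentation model."""
    state_dict = torch.load(ckpt_path, map_location="cpu")
    if isinstance(state_dict, dict) and "state_dict" in state_dict:
        state_dict = state_dict["state_dict"]  # some checkpoints wrap the weights
    # strict=False: the segmentation head keeps its random init, since the
    # ImageNet checkpoint only covers the backbone.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print(f"{len(missing)} missing keys, {len(unexpected)} unexpected keys")
```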
@perrying
https://github.com/CSAILVision/semantic-segmentation-pytorch/blob/master/mit_semseg/models/resnet.py The pretrained ImageNet models can be downloaded here! Note that this ResNet version differs from the torchvision version: the 7x7 convolution is replaced with three 3x3 convolutions.
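The difference is only in the stem. A minimal sketch in plain PyTorch (channel widths follow the linked implementation):

```python
import torch.nn as nn

# torchvision-style stem: a single 7x7 convolution with stride 2.
stem_7x7 = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

# "deep base" stem used by the linked checkpoints: three 3x3 convolutions
# with the same overall stride of 2.
stem_3x3 = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
)
```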
Not using the deeply supervised losses that the res18-based model uses might get better results when training the SF models with deeper ResNets, such as res50, res101, etc.
More questions please email me at youansheng@pku.edu.cn.
You should also change the loss type to the fpndsnohemce_loss series when training the res18-based SF model. @perrying
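If you are curious what that kind of loss does, here is a rough sketch of a deeply supervised OHEM cross-entropy loss in plain PyTorch (illustrative only, not the actual torchcv fpndsnohemce_loss code; the threshold, min_kept, and auxiliary weight are placeholders):

```python
import torch
import torch.nn.functional as F

def ohem_ce(logits, target, ignore_index=255, thresh=0.7, min_kept=100000):
    """Cross-entropy over the hardest pixels only (online hard example mining).

    logits: (N, C, H, W) raw scores; target: (N, H, W) class indices.
    Pixels whose predicted probability for the true class falls below
    `thresh` count as hard; at least `min_kept` pixels are always kept.
    """
    with torch.no_grad():
        prob = F.softmax(logits, dim=1)
        gt = target.clone()
        ignored = gt == ignore_index
        gt[ignored] = 0  # any valid index; these pixels are masked out below
        pt = prob.gather(1, gt.unsqueeze(1)).squeeze(1)
        pt[ignored] = 1.0  # never select ignored pixels as "hard"
        hard = pt < thresh
        if hard.sum() < min_kept:
            # fall back to the min_kept lowest-confidence pixels
            k = min(min_kept, pt.numel())
            kth = pt.flatten().kthvalue(k).values
            hard = pt <= kth
    pixel_loss = F.cross_entropy(
        logits, target, ignore_index=ignore_index, reduction="none")
    return pixel_loss[hard].mean()

def deeply_supervised_loss(main_logits, aux_logits, target, aux_weight=0.4):
    """Main loss plus a weighted auxiliary branch (deep supervision)."""
    return ohem_ce(main_logits, target) + aux_weight * ohem_ce(aux_logits, target)
```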
@donnyyou So you mean that when training the SF model with deeper ResNets, we should not use the fpndsnohemce_loss series as the loss, right?
@donnyyou Thank you for the clarification!
You are right, and dsnce_loss might get better results.
@perrying OK, thanks for your answer! BTW, if I select another backbone, such as HRNet, MobileNet, and so on, which loss would you recommend?
Deeply supervised losses for smaller models. @seekFire
@donnyyou OK! I see. Thank you very much!
I trained SFNet with the default settings of `torchcv/scripts/seg/cityscapes/run_sfnet_res18_cityscapes.sh`, except for `NGPUS` (I changed `NGPUS` from 8 to 4, and `--train_batch_size` from 2 to 4). But I got about 73% mIoU for single-scale inference. It may be because the pretrained model, `3x3resnet18-imagenet.pth`, is not provided. Although I also trained SFNet (ResNet-101) using the pretrained model `3x3resnet101-imagenet.pth` given by this repository, I got about 78% mIoU for multi-scale inference. How can I reproduce the paper results?