usr922 / wseg

[CVPR'22] Weakly Supervised Semantic Segmentation by Pixel-to-Prototype Contrast

A question about your segmentation training #6

Closed YininKorea closed 2 years ago

YininKorea commented 2 years ago

Excuse me, I have a question about your segmentation training part.

In your DeepLab-v2 case, there seems to be a redundant argument "MODEL_OUTPUT_STRIDE" when initializing resnet101.

With this argument present, the currently released code cannot successfully load ResNet-101 as the backbone for training.
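For reference, a minimal, hypothetical sketch of one way to guard against such a redundant keyword: filter the incoming kwargs against the factory's signature before constructing the backbone. The helper name `build_backbone` and the config key are illustrative, not this repo's actual API.

```python
import inspect

def build_backbone(factory, **kwargs):
    """Call a model factory, dropping keyword arguments it does not accept.

    This avoids 'unexpected keyword argument' errors when a config dict
    carries extra keys (e.g. MODEL_OUTPUT_STRIDE) that the backbone
    constructor does not know about.
    """
    accepted = inspect.signature(factory).parameters
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return factory(**filtered)
```

Deleting the argument at the call site, as suggested below, achieves the same thing more directly.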

usr922 commented 2 years ago

Hi, you can delete it since it is redundant. You may also need to modify the download link for resnet101 in resnet.py :)

YininKorea commented 2 years ago

May I know your device for the segmentation training?

I conducted the segmentation training with your released pseudo labels, yet only got 71.89 mIoU on the validation set (72.60 reported), on an NVIDIA TITAN.

Could you check your config file?

Is it the one you used for the actual training?

usr922 commented 2 years ago

I used one V100 GPU for training. I checked the config file, and it should be correct. Maybe you can directly use the original repo from https://github.com/YudeWang/semantic-segmentation-codebase to train DeepLab, or try it again :)

Italy2006 commented 1 year ago

> I use one V100 gpu for training. I checked the config file, which should be right. May you can direct use the original repo from https://github.com/YudeWang/semantic-segmentation-codebase to train deeplab or try it again:)

I'm sorry, but it seems that the link to the ResNet-101 pretrained model you provided doesn't match the network. Could you please provide the right one? Thanks!

```
RuntimeError: Error(s) in loading state_dict for ResNet:
    size mismatch for bn1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
    size mismatch for bn1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
    size mismatch for bn1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
    size mismatch for bn1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
    size mismatch for layer1.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
    size mismatch for layer1.0.downsample.0.weight: copying a param with shape torch.Size([256, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 1, 1]).
```
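The mismatched shapes (a 128-channel `bn1` in the model versus 64 in the checkpoint) suggest the model defines a different stem than the one the checkpoint was trained with, so the proper fix is a matching pretrained file. As a diagnostic, here is a hypothetical, framework-agnostic helper (not from this repo) that keeps only entries whose name and shape both match; in PyTorch the surviving dict could then be passed to `model.load_state_dict(..., strict=False)`:

```python
def filter_state_dict(checkpoint, model_state):
    """Return (matched, skipped): checkpoint entries whose key exists in the
    model with the same tensor shape, and the keys that were dropped.

    Dropped layers stay randomly initialised, so this is only a diagnostic
    workaround; the real fix is a checkpoint trained with the same
    architecture (here, the same stem).
    """
    matched = {
        k: v for k, v in checkpoint.items()
        if k in model_state and tuple(v.shape) == tuple(model_state[k].shape)
    }
    skipped = sorted(set(checkpoint) - set(matched))
    return matched, skipped
```

Printing `skipped` makes it immediately clear which part of the network (here the stem) disagrees with the downloaded weights.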

Italy2006 commented 1 year ago

> Hi, you can delete it in case it is redundant. Maybe you need to modify the download link of resnet101 in resnet.py :)

I'm sorry, but I want to know how the results differ with and without deleting the output stride argument. Have you tried it?