Tramac / awesome-semantic-segmentation-pytorch

Semantic Segmentation on PyTorch (include FCN, PSPNet, Deeplabv3, Deeplabv3+, DANet, DenseASPP, BiSeNet, EncNet, DUNet, ICNet, ENet, OCNet, CCNet, PSANet, CGNet, ESPNet, LEDNet, DFANet)
Apache License 2.0
2.82k stars 581 forks

After switching to a grayscale dataset, I keep getting errors. #111

Open YLONl opened 4 years ago

YLONl commented 4 years ago

After I switched to my own grayscale dataset, it keeps throwing errors: I fix one error and another appears. The main issues are dimension parameters and channel counts. I changed those, but the final error points into the environment's package files, and after trying various approaches I still could not solve it. The files involved are my own dataset .py under dataloader, segbase.py, and train.py. I would like to ask the author: after switching to [1, 512, 512] grayscale images, which places need to change?

Tramac commented 4 years ago

You need to change the first convolutional layer.
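For reference, a minimal sketch of that change (assuming a VGG-style first layer `Conv2d(3, 64, kernel_size=3, padding=1)`; only `in_channels` changes, everything else stays the same):

```python
import torch
import torch.nn as nn

# The stock backbone's first conv expects 3-channel RGB input.
conv_rgb = nn.Conv2d(3, 64, kernel_size=3, padding=1)

# For [1, 512, 512] grayscale input, only in_channels changes.
conv_gray = nn.Conv2d(1, 64, kernel_size=3, padding=1)

x = torch.randn(4, 1, 512, 512)   # a batch of grayscale images
out = conv_gray(x)
print(out.shape)                  # spatial size is preserved by padding=1
```

The rest of the network is unaffected, since every later layer only sees the 64 feature maps produced here.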

YLONl commented 4 years ago

Thank you very much for your quick reply ❤❤. I will try!

YLONl commented 4 years ago

@Tramac Hi, I changed the input channels of the first conv layer, and it shows:

RuntimeError: Error(s) in loading state_dict for VGG: size mismatch for features.0.weight: copying a param with shape torch.Size([64, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1, 3, 3]).
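An alternative to discarding the checkpoint is to collapse the pretrained RGB kernel into a single channel before loading it; a hypothetical sketch (the random tensor below stands in for the checkpoint's `features.0.weight`):

```python
import torch
import torch.nn as nn

# Stand-in for the pretrained checkpoint's first-conv weight [64, 3, 3, 3].
rgb_weight = torch.randn(64, 3, 3, 3)

# Collapse the RGB kernel over the input-channel dim -> [64, 1, 3, 3],
# so a grayscale image gets roughly the same response as a gray RGB one.
gray_weight = rgb_weight.sum(dim=1, keepdim=True)
print(gray_weight.shape)

# Load it into the new 1-channel first conv.
conv = nn.Conv2d(1, 64, kernel_size=3, padding=1)
with torch.no_grad():
    conv.weight.copy_(gray_weight)
```

This keeps the benefit of the pretrained backbone instead of retraining the first layer from scratch.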

So, I deleted the '.pth' of the pretrained_base and set pretrained_base = False. Then it shows:

RuntimeError: output with shape [1, 256, 256] doesn't match the broadcast shape [3, 256, 256]

OK, I googled and changed the code:

    input_transform = transforms.Compose([
        transforms.ToTensor(),
        #transforms.Lambda(lambda x: x.repeat(3,1,1)),
        #transforms.Normalize([.485, .456, .406], [.229, .224, .225]),
        transforms.Normalize([0.5], [0.5]),
    ])

But it errors again:

    Traceback (most recent call last):
      File "train.py", line 334, in <module>
        trainer.train()
      File "train.py", line 229, in train
        loss_dict = self.criterion(outputs, targets)
      File "/home/zhangxl003/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/zhangxl003/sheyulong/all-seg-pytorch-1220/core/utils/loss.py", line 34, in forward
        return dict(loss=super(MixSoftmaxCrossEntropyLoss, self).forward(*inputs))
      File "/home/zhangxl003/.local/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 942, in forward
        ignore_index=self.ignore_index, reduction=self.reduction)
      File "/home/zhangxl003/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 2056, in cross_entropy
        return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
      File "/home/zhangxl003/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1873, in nll_loss
        ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
    RuntimeError: 1only batches of spatial targets supported (non-empty 3D tensors) but got targets of size: : [4, 256, 256, 3]

So, what should I do? Please...
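Note that this traceback points at the targets, not the input image: nll_loss2d wants class-index masks of shape [N, H, W] (dtype long), but the label masks are apparently being loaded as RGB, giving [N, H, W, 3]. A hypothetical sketch of the shape the loss expects (the zero array stands in for one loaded mask, assuming the class id is stored identically in all three channels):

```python
import numpy as np
import torch
import torch.nn.functional as F

# Stand-in for a mask that was loaded as RGB: [H, W, 3] instead of [H, W].
rgb_mask = np.zeros((256, 256, 3), dtype=np.uint8)

# Collapse it to a single-channel class-index map, e.g. keep one channel.
class_map = torch.from_numpy(rgb_mask[..., 0]).long()
print(class_map.shape)

# The loss wants targets [N, H, W] (long) against logits [N, C, H, W].
targets = class_map.unsqueeze(0)
logits = torch.randn(1, 2, 256, 256)   # 2 classes for illustration
loss = F.cross_entropy(logits, targets)
print(loss.shape)                      # scalar
```

Equivalently, load the mask with PIL without converting it to RGB, so it stays single-channel from the start.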

YLONl commented 4 years ago

Maybe there is some other parameter in the first convolutional layer I should change? Or something else?

deadpoppy commented 4 years ago

Maybe there is some other parameter in the first convolutional layer I should change? Or something else?

I ran into the same error as you, and I don't think it is a grayscale-image problem. My error was exactly the same as yours; I did the following and it went back to normal:

casiahnu commented 4 years ago

Maybe there is some other parameter in the first convolutional layer I should change? Or something else?

I ran into the same error as you, and I don't think it is a grayscale-image problem. My error was exactly the same as yours; I did the following and it went back to normal:

  • Printed the target data and found the values were unreasonable
  • Rebuilt the label map so the labels run from 0 to C
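The two steps above can be sketched roughly like this (assumed, not code from this repo; the small array stands in for one loaded target mask):

```python
import numpy as np

# Stand-in for one loaded target mask with raw pixel values, not class ids.
target = np.array([[0, 0, 255], [128, 255, 0]])

# Step 1: print the target data; the values are not contiguous class ids.
print(np.unique(target))               # e.g. 0, 128, 255 instead of 0..C-1

# Step 2: remap whatever ids appear onto contiguous labels 0..C-1.
ids = np.unique(target)
lut = {v: i for i, v in enumerate(ids)}  # 0->0, 128->1, 255->2
remapped = np.vectorize(lut.get)(target)
print(np.unique(remapped))
```

After this remapping, the targets are valid class indices for the cross-entropy loss.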

How did you solve the problem "1only batches of spatial targets supported (non-empty 3D tensors) but got targets of size: : [4, 256, 256, 3]"? I got the same problem, thank you. My WeChat is 18810220665.