bcmi / SLBR-Visible-Watermark-Removal

[ACM MM 2021] Visible Watermark Removal via Self-calibrated Localization and Background Refinement

size mismatch #41

Open Xelawk opened 4 months ago

Xelawk commented 4 months ago

RuntimeError: Error(s) in loading state_dict for SLBR:
    size mismatch for shared_decoder.up_im_atts.0.conv.weight: copying a param with shape torch.Size([1, 1, 3]) from checkpoint, the shape in current model is torch.Size([1, 1, 256]).
    size mismatch for shared_decoder.up_mask_atts.0.conv.weight: copying a param with shape torch.Size([1, 1, 3]) from checkpoint, the shape in current model is torch.Size([1, 1, 256]).
    size mismatch for coarse_decoder.atts_bg.0.conv.weight: copying a param with shape torch.Size([1, 1, 3]) from checkpoint, the shape in current model is torch.Size([1, 1, 128]).
    size mismatch for coarse_decoder.atts_bg.1.conv.weight: copying a param with shape torch.Size([1, 1, 3]) from checkpoint, the shape in current model is torch.Size([1, 1, 64]).
    size mismatch for coarse_decoder.atts_bg.2.conv.weight: copying a param with shape torch.Size([1, 1, 3]) from checkpoint, the shape in current model is torch.Size([1, 1, 32]).
    size mismatch for coarse_decoder.atts_mask.0.conv.weight: copying a param with shape torch.Size([1, 1, 3]) from checkpoint, the shape in current model is torch.Size([1, 1, 128]).
    size mismatch for coarse_decoder.atts_mask.1.conv.weight: copying a param with shape torch.Size([1, 1, 3]) from checkpoint, the shape in current model is torch.Size([1, 1, 64]).
    size mismatch for coarse_decoder.atts_mask.2.conv.weight: copying a param with shape torch.Size([1, 1, 3]) from checkpoint, the shape in current model is torch.Size([1, 1, 32]).
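
For anyone hitting the same error, a minimal sketch for comparing the checkpoint shapes against the freshly built model. The checkpoint filename and the 'state_dict' nesting are assumptions; adjust them to your setup.

import torch

# Hypothetical path -- replace with your checkpoint file.
ckpt = torch.load("model_best.pth.tar", map_location="cpu")
state = ckpt.get("state_dict", ckpt)  # some checkpoints nest weights under 'state_dict'

for name, tensor in state.items():
    if name.endswith("conv.weight") and "atts" in name:
        print(name, tuple(tensor.shape))  # the released weights store (1, 1, 3) for these layers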

Xelawk commented 4 months ago

Found the problem: ECABlock takes a channel argument that is never actually used, and after refactoring I mistakenly passed that value to k_size.

import torch.nn as nn

class ECABlock(nn.Module):
    """Constructs an ECA module.

    Args:
        channel: Number of channels of the input feature map
        k_size: Adaptive selection of kernel size
    """
    def __init__(self, channel, k_size=3):
        super(ECABlock, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # Note: channel is ignored here; the Conv1d weight shape is (1, 1, k_size).
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=(k_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()
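
A quick sketch of how the mismatch arises (the channel count 256 is illustrative): passing the channel count into k_size changes the Conv1d kernel size, and with it the conv.weight shape that gets compared against the checkpoint.

# Illustrative only: 256 stands in for an attention block's channel count.
wrong = ECABlock(channel=256, k_size=256)
print(wrong.conv.weight.shape)   # torch.Size([1, 1, 256]) -> clashes with the checkpoint

right = ECABlock(channel=256)    # k_size keeps its default of 3
print(right.conv.weight.shape)   # torch.Size([1, 1, 3]) -> matches the checkpoint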