Closed — Lsz-20 closed this issue 2 years ago
Hi @Lsz-20, could you share the full error message or your config? It seems that only the shape of `downsample.norm` is wrong, which is really weird. Swin-Transformer-Semantic-Segmentation is indeed built on mmsegmentation.
Thanks for your code~ I'm trying to use Swin for segmentation and have added some components to the network, but it doesn't work with the Swin backbone in mmsegmentation, while the same change works fine in Swin-Transformer-Semantic-Segmentation, the original Swin code that is also built on mmsegmentation. Do I need to change the pretrained model `swin_tiny_patch4_window7_224.pth`, or something else? It shows:

```
RuntimeError: Error(s) in loading state_dict for SwinTransformer:
    size mismatch for stages.0.downsample.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([192]).
    size mismatch for stages.0.downsample.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([192]).
    size mismatch for stages.1.downsample.norm.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([384]).
    size mismatch for stages.1.downsample.norm.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([384]).
    size mismatch for stages.2.downsample.norm.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for stages.2.downsample.norm.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([768]).
```
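One way to narrow down errors like this before calling `load_state_dict` is to diff the parameter shapes of the checkpoint against the current model. Here is a minimal sketch; `find_shape_mismatches` is a hypothetical helper (not part of mmsegmentation), shown with plain dicts so it runs without PyTorch. With a real checkpoint you would build the shape dicts from `torch.load(...)` and `model.state_dict()` as noted in the comments:

```python
# Hypothetical diagnostic helper: report every parameter whose shape in
# the checkpoint differs from its shape in the current model.

def find_shape_mismatches(ckpt_shapes, model_shapes):
    """Return {name: (checkpoint_shape, model_shape)} for mismatched params."""
    return {
        name: (ckpt_shapes[name], model_shapes[name])
        for name in ckpt_shapes.keys() & model_shapes.keys()
        if ckpt_shapes[name] != model_shapes[name]
    }

# With PyTorch, the shape dicts would come from the state_dicts, e.g.:
#   ckpt = torch.load('swin_tiny_patch4_window7_224.pth', map_location='cpu')
#   state = ckpt.get('state_dict', ckpt.get('model', ckpt))
#   ckpt_shapes  = {k: tuple(v.shape) for k, v in state.items()}
#   model_shapes = {k: tuple(v.shape) for k, v in model.state_dict().items()}

# Reproducing the first mismatch from this issue with plain tuples:
ckpt_shapes = {'stages.0.downsample.norm.weight': (384,)}
model_shapes = {'stages.0.downsample.norm.weight': (192,)}
print(find_shape_mismatches(ckpt_shapes, model_shapes))
# → {'stages.0.downsample.norm.weight': ((384,), (192,))}
```

A pattern like the one in the error (checkpoint norm dims exactly 2x the model's at every stage) usually means the checkpoint follows a different layer convention than the model expects, which is why a checkpoint may need to be converted before use in mmsegmentation.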
Thanks~