Monu-Khicher-1 / multi-stage-learning

Deepfake detection model

Size mismatch when loading pretrained model #2

Open bimo-adiparwa opened 2 days ago

bimo-adiparwa commented 2 days ago

Hi, when I tried to run the test, I got this error:

Unexpected key(s) in state_dict: "backbone.patch_embed.backbone.layers.3.downsample.norm.weight", "backbone.patch_embed.backbone.layers.3.downsample.norm.bias", "backbone.patch_embed.backbone.layers.3.downsample.reduction.weight", "backbone.patch_embed.backbone.head.fc.weight", "backbone.patch_embed.backbone.head.fc.bias", "embedder.layers.3.downsample.norm.weight", "embedder.layers.3.downsample.norm.bias", "embedder.layers.3.downsample.reduction.weight", "embedder.head.fc.weight", "embedder.head.fc.bias". 
        size mismatch for backbone.patch_embed.backbone.layers.1.downsample.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([768]).
        size mismatch for backbone.patch_embed.backbone.layers.1.downsample.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([768]).
        size mismatch for backbone.patch_embed.backbone.layers.1.downsample.reduction.weight: copying a param with shape torch.Size([192, 384]) from checkpoint, the shape in current model is torch.Size([384, 768]).
        size mismatch for backbone.patch_embed.backbone.layers.2.downsample.norm.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1536]).
        size mismatch for backbone.patch_embed.backbone.layers.2.downsample.norm.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1536]).
        size mismatch for backbone.patch_embed.backbone.layers.2.downsample.reduction.weight: copying a param with shape torch.Size([384, 768]) from checkpoint, the shape in current model is torch.Size([768, 1536]).
        size mismatch for embedder.layers.1.downsample.norm.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([768]).
        size mismatch for embedder.layers.1.downsample.norm.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([768]).
        size mismatch for embedder.layers.1.downsample.reduction.weight: copying a param with shape torch.Size([192, 384]) from checkpoint, the shape in current model is torch.Size([384, 768]).
        size mismatch for embedder.layers.2.downsample.norm.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1536]).
        size mismatch for embedder.layers.2.downsample.norm.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1536]).
        size mismatch for embedder.layers.2.downsample.reduction.weight: copying a param with shape torch.Size([384, 768]) from checkpoint, the shape in current model is torch.Size([768, 1536]).

Is this a different pretrained model?
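
For reference, here is a minimal sketch of how the mismatching parameters can be listed by comparing the checkpoint's tensor shapes against the model the test script builds. `checkpoint.pth` and `build_model()` are placeholders for the actual checkpoint file and model constructor in this repo:

```python
# Minimal diagnostic sketch (not from this repo): compare the checkpoint's
# tensor shapes against the model currently being built.
import torch

ckpt = torch.load("checkpoint.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest weights under "state_dict"

model = build_model()             # placeholder for however the test script builds the model
model_state = model.state_dict()

for key, tensor in state_dict.items():
    if key not in model_state:
        print("unexpected key:", key)
    elif model_state[key].shape != tensor.shape:
        print(f"size mismatch for {key}: checkpoint {tuple(tensor.shape)} "
              f"vs model {tuple(model_state[key].shape)}")

for key in model_state:
    if key not in state_dict:
        print("missing key:", key)
```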

Monu-Khicher-1 commented 18 hours ago

@bimo-adiparwa Can you give some more details, like the image size and the command you ran? That will help me understand the problem.
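
For collecting those details, a small sketch like the following may help (it is only an illustration; `checkpoint.pth` is a placeholder for the checkpoint being loaded, and the key name is taken from the error above):

```python
import torch

print("torch:", torch.__version__)

# "checkpoint.pth" is a placeholder for the checkpoint file being loaded.
ckpt = torch.load("checkpoint.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)

# The widths in the error differ by a factor of two (384 vs 768, 768 vs 1536),
# so printing one of the affected parameters shows which backbone width the
# checkpoint was saved from.
key = "embedder.layers.1.downsample.norm.weight"
if key in state_dict:
    print(key, tuple(state_dict[key].shape))
```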