Hi:
I have a question about the pretrained ResNet-101 model. Did you obtain it with "pretrained_resnet101 = models.resnet101(pretrained=True)" and then save it in the directory '/network/pretrianed_models/'?
However, when I run the file 'main.py', it reports this error:
" RuntimeError: Error(s) in loading state_dict for ResNet:
size mismatch for conv1.weight: copying a param with shape torch.Size([64, 3, 7, 7]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3]).
size mismatch for layer1.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
size mismatch for layer1.0.downsample.0.weight: copying a param with shape torch.Size([256, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 1, 1])."
How can I fix it? Thank you very much.
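For what it's worth, the error says the checkpoint's shapes (e.g. a 7x7 conv1) no longer match the current model's layers (a 3x3 conv1), so the current model was presumably modified after the checkpoint was saved. Note that load_state_dict(strict=False) does not skip size mismatches, only missing/unexpected keys. One common workaround, sketched below under the assumption that you only want to transfer the layers that still match (the helper name load_matching_weights and the usage names are mine, not from this repo):

```python
import torch.nn as nn

def load_matching_weights(model, checkpoint_state):
    """Copy into `model` only the checkpoint params whose shapes still match;
    mismatched layers (e.g. the modified conv1) keep their current init."""
    model_state = model.state_dict()
    filtered = {k: v for k, v in checkpoint_state.items()
                if k in model_state and v.shape == model_state[k].shape}
    model_state.update(filtered)
    model.load_state_dict(model_state)
    return filtered  # the parameters that were actually transferred

# Hypothetical usage against the torchvision weights:
#   pretrained = torchvision.models.resnet101(pretrained=True).state_dict()
#   load_matching_weights(my_modified_resnet, pretrained)
```

Whether this is appropriate depends on why the shapes differ; if the repo intentionally uses a different ResNet variant, its own checkpoint (rather than the torchvision one) may be the right thing to load.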