I use the pretrained Vgg-19.pth and get this error (how can I solve it?):
Traceback (most recent call last):
File "/home/customer/Desktop/Joint-Bilateral-Photorealistic-Style-Transfer-master/train.py", line 93, in
train(args)
File "/home/customer/Desktop/Joint-Bilateral-Photorealistic-Style-Transfer-master/train.py", line 21, in train
model = network.BilateralNetwork().cuda()
File "/home/customer/Desktop/Joint-Bilateral-Photorealistic-Style-Transfer-master/network.py", line 296, in init
self.stylenet = StyleNetwork(size)
File "/home/customer/Desktop/Joint-Bilateral-Photorealistic-Style-Transfer-master/network.py", line 228, in init
self.extractor = blocks.Vgg19()
File "/home/customer/Desktop/Joint-Bilateral-Photorealistic-Style-Transfer-master/blocks.py", line 38, in init
vgg19.load_state_dict(torch.load(weight_path), False) # added False
File "/opt/software/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 846, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for VGG:
size mismatch for features.7.weight: copying a param with shape torch.Size([128, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for features.10.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for features.10.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for features.14.weight: copying a param with shape torch.Size([256, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for features.21.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for features.21.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for features.23.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for features.23.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for features.28.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for features.34.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
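Note that the second positional argument of `load_state_dict` is `strict`, and `strict=False` only ignores *missing or unexpected keys* — it still raises `RuntimeError` when a key exists in both but the tensor shapes differ, which is exactly what the traceback shows. The mismatches suggest the `Vgg-19.pth` checkpoint was saved from a different VGG layer layout than the one `blocks.Vgg19()` builds. One common workaround (a sketch, not the repo's code) is to filter the checkpoint down to parameters whose shapes match before loading; the small models below just mimic the mismatch:

```python
import torch
import torch.nn as nn

# Toy stand-ins whose second layers disagree in width, mimicking the
# checkpoint-vs-model mismatch from the traceback above.
checkpoint_model = nn.Sequential(nn.Conv2d(3, 64, 3), nn.Conv2d(64, 128, 3))
current_model = nn.Sequential(nn.Conv2d(3, 64, 3), nn.Conv2d(64, 256, 3))

state = checkpoint_model.state_dict()       # stands in for torch.load(weight_path)
model_state = current_model.state_dict()

# Keep only entries present in the current model with identical shapes.
filtered = {k: v for k, v in state.items()
            if k in model_state and v.shape == model_state[k].shape}

# strict=False is now safe: the remaining keys all match in shape.
current_model.load_state_dict(filtered, strict=False)
print(sorted(filtered))  # → ['0.bias', '0.weight']
```

This only loads the layers that happen to line up, so the cleaner fix is usually to use a checkpoint saved from the same architecture definition (e.g. the matching torchvision VGG-19 weights, if that is what `blocks.Vgg19()` wraps).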