jiupinjia / stylized-neural-painting

Official PyTorch implementation of the CVPR 2021 paper "Stylized Neural Painting".
https://jiupinjia.github.io/neuralpainter/
Creative Commons Zero v1.0 Universal

update render.py to use lightweight version #21

Status: Open · ak9250 opened this issue 3 years ago

ak9250 commented 3 years ago

Currently, the lightweight renderer checkpoints are not supported. Loading one fails with:

```
RuntimeError: Error(s) in loading state_dict for ZouFCNFusion:
	Missing key(s) in state_dict: "huangnet.fc4.weight", "huangnet.fc4.bias", "huangnet.conv3.weight", "huangnet.conv3.bias", "huangnet.conv4.weight", "huangnet.conv4.bias", "huangnet.conv5.weight", "huangnet.conv5.bias", "huangnet.conv6.weight", "huangnet.conv6.bias", "dcgan.main.10.weight", "dcgan.main.10.bias", "dcgan.main.10.running_mean", "dcgan.main.10.running_var", "dcgan.main.12.weight", "dcgan.main.13.weight", "dcgan.main.13.bias", "dcgan.main.13.running_mean", "dcgan.main.13.running_var", "dcgan.main.15.weight".
	size mismatch for huangnet.conv1.weight: copying a param with shape torch.Size([64, 8, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 16, 3, 3]).
	size mismatch for huangnet.conv1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
	size mismatch for huangnet.conv2.weight: copying a param with shape torch.Size([12, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
	size mismatch for huangnet.conv2.bias: copying a param with shape torch.Size([12]) from checkpoint, the shape in current model is torch.Size([32]).
	size mismatch for dcgan.main.3.weight: copying a param with shape torch.Size([512, 256, 4, 4]) from checkpoint, the shape in current model is torch.Size([512, 512, 4, 4]).
	size mismatch for dcgan.main.4.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for dcgan.main.4.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for dcgan.main.4.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for dcgan.main.4.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for dcgan.main.6.weight: copying a param with shape torch.Size([256, 128, 4, 4]) from checkpoint, the shape in current model is torch.Size([512, 256, 4, 4]).
	size mismatch for dcgan.main.7.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for dcgan.main.7.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for dcgan.main.7.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for dcgan.main.7.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for dcgan.main.9.weight: copying a param with shape torch.Size([128, 6, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 128, 4, 4]).
```
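For context, this class of error means the checkpoint was saved from a network whose layers differ in number and width from the model it is being loaded into. A minimal sketch (toy linear layers, not the repo's actual `ZouFCNFusion` networks) reproducing the same failure mode:

```python
import torch
import torch.nn as nn

# "Full" and "light" stand-ins with deliberately different layer widths
# (illustrative only -- the real renderers are conv nets, not Linear stacks).
full = nn.Sequential(nn.Linear(8, 64), nn.Linear(64, 4))
light = nn.Sequential(nn.Linear(8, 32), nn.Linear(32, 4))

ckpt = full.state_dict()  # stands in for a state_dict read from disk

try:
    light.load_state_dict(ckpt)  # shapes disagree -> RuntimeError
except RuntimeError as e:
    # The message enumerates every mismatched tensor, like the log above.
    print("size mismatch" in str(e))  # True
```

So the error is not a corrupted download: the script simply built the full-size architecture while the checkpoint holds the lightweight one.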

jiupinjia commented 3 years ago

Hi @ak9250, sorry for the late response. To use the lightweight version of the renderers, you need to specify `--net_G zou-fusion-net-light`. You can try the following command and please let me know whether it works. Thanks.

```
python demo_prog.py --img_path ./test_images/diamond.jpg --canvas_color 'black' --max_m_strokes 500 --max_divide 5 --renderer markerpen --renderer_checkpoint_dir checkpoints_G_markerpen_light --net_G zou-fusion-net-light
```