Open · Yiru1103 opened 3 years ago
Hello,

I want to use the synthesis network to generate realistic images for the Cityscapes dataset from semantic maps, but the pretrained weights of the synthesis module do not match the model. Here is the error, raised at line 227 (net.load_state_dict(weights)) of ./synbost-try/image_synthesis/util/util.py:

#############################################################
*** RuntimeError: Error(s) in loading state_dict for CondConvGenerator:
    size mismatch for fc.weight: copying a param with shape torch.Size([32768, 256]) from checkpoint, the shape in current model is torch.Size([1024, 36, 3, 3]).
    size mismatch for fc.bias: copying a param with shape torch.Size([32768]) from checkpoint, the shape in current model is torch.Size([1024]).
#############################################################
Do you know the reason? I didn't change any parameters except the path of the checkpoints.

Thank you for your help.

Best regards,
Yiru
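A quick way to see exactly which parameters disagree is to compare the checkpoint's state_dict with the state_dict of the freshly built generator. The sketch below is generic PyTorch, not this repo's code; the helper name, the checkpoint path in the usage comment, and the assumption that the weights may be wrapped under a "state_dict" key are all illustrative.

```python
import torch
import torch.nn as nn


def report_state_dict_mismatches(net: nn.Module, ckpt_path: str) -> None:
    """Print every parameter whose shape differs between checkpoint and model."""
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    # Some training scripts wrap the weights, e.g. under a "state_dict" key.
    weights = checkpoint.get("state_dict", checkpoint) if isinstance(checkpoint, dict) else checkpoint

    model_sd = net.state_dict()
    for name, param in weights.items():
        if name not in model_sd:
            print(f"unexpected key in checkpoint: {name}")
        elif tuple(model_sd[name].shape) != tuple(param.shape):
            print(f"shape mismatch for {name}: checkpoint {tuple(param.shape)} "
                  f"vs model {tuple(model_sd[name].shape)}")
    for name in model_sd:
        if name not in weights:
            print(f"missing key in checkpoint: {name}")


# Usage (placeholders for however the repo builds its generator and stores weights):
# net = build_generator(opt)
# report_state_dict_mismatches(net, "./pretrained/latest_net_G.pth")
```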
Set use_vae to True.
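The shapes in the error are consistent with this fix. SPADE-style generators build their first fc layer differently depending on use_vae: with the VAE path enabled, fc is a Linear layer projecting a sampled z vector to the initial feature map, which matches the checkpoint's [32768, 256] weight; with it disabled, fc is a 3x3 Conv2d over the downsampled semantic map, which gives the [1024, 36, 3, 3] shape of the current model. The sketch below only illustrates that pattern; the actual CondConvGenerator code in this repo may differ in detail, and nf, z_dim, semantic_nc, and the initial spatial size are illustrative values chosen to reproduce the shapes in the error.

```python
import torch.nn as nn

# Illustrative numbers matching the error message; the real values come from
# the repo's options (nf, z_dim, crop size, number of upsampling layers).
nf, z_dim, semantic_nc = 64, 256, 36
sw, sh = 8, 4  # assumed initial spatial size of the feature map


class FirstLayerSketch(nn.Module):
    """Rough SPADE-style pattern for the generator's first `fc` layer."""

    def __init__(self, use_vae: bool):
        super().__init__()
        if use_vae:
            # VAE path: project a sampled z vector to the initial feature map.
            # Weight shape: [16*nf*sw*sh, z_dim] = [32768, 256]  -> the checkpoint
            self.fc = nn.Linear(z_dim, 16 * nf * sw * sh)
        else:
            # Deterministic path: start from the downsampled semantic map.
            # Weight shape: [16*nf, semantic_nc, 3, 3] = [1024, 36, 3, 3]  -> current model
            self.fc = nn.Conv2d(semantic_nc, 16 * nf, 3, padding=1)


print(FirstLayerSketch(use_vae=True).fc.weight.shape)   # torch.Size([32768, 256])
print(FirstLayerSketch(use_vae=False).fc.weight.shape)  # torch.Size([1024, 36, 3, 3])
```

So the pretrained checkpoint was apparently trained with use_vae enabled. Enabling the same option when building the model (for example via a --use_vae flag or the options file, depending on how this repo exposes it) rebuilds fc with the Linear shape and should let load_state_dict succeed.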