akanimax / BMSG-GAN

[MSG-GAN] Any body can GAN! Highly stable and robust architecture. Requires little to no hyperparameter tuning. Pytorch Implementation
MIT License

demo.py #36

Open jiaying96 opened 4 years ago

jiaying96 commented 4 years ago

I trained the model with: python train.py --depth=7 --latent_size=512 --images_dir=../data/flowers --sample_dir=samples/exp_1 --model_dir=models/exp_1

and I test with: python demo.py --generator_file=models/exp_1/GAN_DIS_3.pth --depth=7 --latent_size=512

but I get the following error:

Missing key(s) in state_dict: "module.layers.0.conv_1.weight", "module.layers.0.conv_1.bias", "module.layers.0.conv_2.weight", "module.layers.0.conv_2.bias", "module.layers.1.conv_1.weight", "module.layers.1.conv_1.bias", "module.layers.1.conv_2.weight", "module.layers.1.conv_2.bias", "module.layers.2.conv_1.weight", "module.layers.2.conv_1.bias", "module.layers.2.conv_2.weight", "module.layers.2.conv_2.bias", "module.layers.3.conv_1.weight", "module.layers.3.conv_1.bias", "module.layers.3.conv_2.weight", "module.layers.3.conv_2.bias", "module.layers.4.conv_1.weight", "module.layers.4.conv_1.bias", "module.layers.4.conv_2.weight", "module.layers.4.conv_2.bias", "module.layers.5.conv_1.weight", "module.layers.5.conv_1.bias", "module.layers.5.conv_2.weight", "module.layers.5.conv_2.bias", "module.layers.6.conv_1.weight", "module.layers.6.conv_1.bias", "module.layers.6.conv_2.weight", "module.layers.6.conv_2.bias", "module.rgb_converters.0.weight", "module.rgb_converters.0.bias", "module.rgb_converters.1.weight", "module.rgb_converters.1.bias", "module.rgb_converters.2.weight", "module.rgb_converters.2.bias", "module.rgb_converters.3.weight", "module.rgb_converters.3.bias", "module.rgb_converters.4.weight", "module.rgb_converters.4.bias", "module.rgb_converters.5.weight", "module.rgb_converters.5.bias", "module.rgb_converters.6.weight", "module.rgb_converters.6.bias". 
Unexpected key(s) in state_dict: "rgb_to_features.0.weight", "rgb_to_features.0.bias", "rgb_to_features.1.weight", "rgb_to_features.1.bias", "rgb_to_features.2.weight", "rgb_to_features.2.bias", "rgb_to_features.3.weight", "rgb_to_features.3.bias", "rgb_to_features.4.weight", "rgb_to_features.4.bias", "rgb_to_features.5.weight", "rgb_to_features.5.bias", "final_converter.weight", "final_converter.bias", "layers.0.conv_1.weight", "layers.0.conv_1.bias", "layers.0.conv_2.weight", "layers.0.conv_2.bias", "layers.1.conv_1.weight", "layers.1.conv_1.bias", "layers.1.conv_2.weight", "layers.1.conv_2.bias", "layers.2.conv_1.weight", "layers.2.conv_1.bias", "layers.2.conv_2.weight", "layers.2.conv_2.bias", "layers.3.conv_1.weight", "layers.3.conv_1.bias", "layers.3.conv_2.weight", "layers.3.conv_2.bias", "layers.4.conv_1.weight", "layers.4.conv_1.bias", "layers.4.conv_2.weight", "layers.4.conv_2.bias", "layers.5.conv_1.weight", "layers.5.conv_1.bias", "layers.5.conv_2.weight", "layers.5.conv_2.bias", "final_block.conv_1.weight", "final_block.conv_1.bias", "final_block.conv_2.weight", "final_block.conv_2.bias", "final_block.conv_3.weight", "final_block.conv_3.bias".

How can I solve this?