Nice work! I am trying to test the model out using one of my own images. The test example runs the test over a whole dataset with metrics calculated. How do I run a test on a single real-world image, without calculating metrics, just producing an output image?
Hello, restoration_demo.py should work. Please see here for more details.
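For reference, the demo script essentially wraps a few mmedit API calls. A minimal single-image sketch using that API (all paths below are placeholders, not from the thread):
import mmcv
import torch
from mmedit.apis import init_model, restoration_inference
from mmedit.core import tensor2img

# Hypothetical paths -- point these at your own config/checkpoint/image.
config = 'configs/restorers/glean/glean_ffhq_16x.py'
checkpoint = 'checkpoints/glean_ffhq_16x.pth'

model = init_model(config, checkpoint, device=torch.device('cuda', 0))
output = restoration_inference(model, 'inputs/face_lr.png')  # forward pass on one image
mmcv.imwrite(tensor2img(output), 'outputs/face_sr.png')      # tensor -> uint8 image on disk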
Thank you Kelvin! My next question is how to avoid downloading the pretrained models. I commented out the "pretrained" arguments for the generator and discriminator in glean_ffhq_16x.py, but I could not comment out "pretrained" for the perceptual_loss; otherwise it gives the error below:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/mmcv/utils/registry.py", line 51, in build_from_cfg
    return obj_cls(**args)
  File "/app/mmedit/models/losses/perceptual_loss.py", line 139, in __init__
    pretrained=pretrained)
  File "/app/mmedit/models/losses/perceptual_loss.py", line 37, in __init__
    assert vgg_type in pretrained
AssertionError
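(A side note on why this assertion fires, inferred from perceptual_loss.py rather than stated in the thread: when the "pretrained" key is omitted, PerceptualLoss falls back to a default that names vgg19, which no longer matches the config's vgg_type='vgg16'. A tiny reproduction, assuming the default is 'torchvision://vgg19':)
vgg_type = 'vgg16'                  # set explicitly in glean_ffhq_16x.py
pretrained = 'torchvision://vgg19'  # assumed default once the key is commented out
assert vgg_type in pretrained       # 'vgg16' not in 'torchvision://vgg19' -> AssertionError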
If you only want to test, I think you can remove the whole perceptual_loss. In this way, no VGG network will be loaded:
perceptual_loss=dict(
type='PerceptualLoss',
layer_weights={'21': 1.0},
vgg_type='vgg16',
perceptual_weight=1e-2,
style_weight=0,
norm_img=False,
criterion='mse',
pretrained='torchvision://vgg16')
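If you want to verify that dropping this block really skips the download, a quick check (the config path is an assumption; adjust it to your checkout):
import mmcv
from mmedit.models import build_model

cfg = mmcv.Config.fromfile('configs/restorers/glean/glean_ffhq_16x.py')
cfg.model.pop('perceptual_loss', None)  # drop the loss -> no VGG weights fetched
model = build_model(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)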
Thanks for the speedy reply. I did that, but now I've bumped into another issue. I guess I can ignore the checkpoint-loading warning, but the error appears to be about image size. Does the model require specific input sizes? My input image is 512x512.
Use load_from_local loader
The model and loaded state dict do not match exactly
unexpected key in source state_dict: perceptual_loss.vgg.mean, perceptual_loss.vgg.std, perceptual_loss.vgg.vgg_layers.0.weight, perceptual_loss.vgg.vgg_layers.0.bias, perceptual_loss.vgg.vgg_layers.2.weight, perceptual_loss.vgg.vgg_layers.2.bias, perceptual_loss.vgg.vgg_layers.5.weight, perceptual_loss.vgg.vgg_layers.5.bias, perceptual_loss.vgg.vgg_layers.7.weight, perceptual_loss.vgg.vgg_layers.7.bias, perceptual_loss.vgg.vgg_layers.10.weight, perceptual_loss.vgg.vgg_layers.10.bias, perceptual_loss.vgg.vgg_layers.12.weight, perceptual_loss.vgg.vgg_layers.12.bias, perceptual_loss.vgg.vgg_layers.14.weight, perceptual_loss.vgg.vgg_layers.14.bias, perceptual_loss.vgg.vgg_layers.17.weight, perceptual_loss.vgg.vgg_layers.17.bias, perceptual_loss.vgg.vgg_layers.19.weight, perceptual_loss.vgg.vgg_layers.19.bias, perceptual_loss.vgg.vgg_layers.21.weight, perceptual_loss.vgg.vgg_layers.21.bias
2021-05-29 16:33:12,370 - mmgen - INFO - Switch to evaluation style mode: single
Traceback (most recent call last):
File "demo/restoration_demo.py", line 45, in
Got it working. It seems the model takes images of size 64 x 64. Thanks again!
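(For anyone else arriving with a 512x512 input: per the comment above, the 16x FFHQ model expects 64x64 low-resolution inputs, so downscale first. A minimal sketch, with hypothetical file names:)
import mmcv

img = mmcv.imread('my_face_512.png')  # hypothetical 512x512 input
lr = mmcv.imresize(img, (64, 64))     # mmcv.imresize takes (width, height)
mmcv.imwrite(lr, 'my_face_64.png')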
Good to hear that :D