ckkelvinchan / GLEAN

How to do inference with one image of my own #1

Closed: yxt132 closed this issue 3 years ago

yxt132 commented 3 years ago

Nice work! I am trying to test the model on one of my own images. The provided test example runs evaluation over a whole dataset and computes metrics. How do I run the model on a single real-world image, skipping the metrics and just saving the output image?

ckkelvinchan commented 3 years ago

Hello, restoration_demo.py should work. Please see here for more details.
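
For reference, a minimal sketch of what demo/restoration_demo.py does internally, assuming the mmedit 0.x Python API (the config, checkpoint, and image paths below are placeholders, not taken from this thread):

    import mmcv
    from mmedit.apis import init_model, restoration_inference
    from mmedit.core import tensor2img

    # Build the model from the config and load the released weights.
    model = init_model('configs/restorers/glean/glean_ffhq_16x.py',
                       'checkpoints/glean_ffhq_16x.pth', device='cuda:0')
    # Run the restorer on a single image and save the result; no metrics involved.
    output = restoration_inference(model, 'my_face_64x64.png')
    mmcv.imwrite(tensor2img(output), 'my_face_restored.png')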

yxt132 commented 3 years ago

Thank you Kelvin! My next question is how to avoid downloading the pretrained models. I commented out the "pretrained" arguments for the generator and discriminator in glean_ffhq_16x.py, but I could not comment out "pretrained" for the perceptual_loss; otherwise it gives the error below:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/mmcv/utils/registry.py", line 51, in build_from_cfg
        return obj_cls(**args)
      File "/app/mmedit/models/losses/perceptual_loss.py", line 139, in __init__
        pretrained=pretrained)
      File "/app/mmedit/models/losses/perceptual_loss.py", line 37, in __init__
        assert vgg_type in pretrained
    AssertionError

ckkelvinchan commented 3 years ago

If you only want to test, I think you can remove the whole perceptual_loss block. That way, no VGG network will be loaded:

    perceptual_loss=dict(
        type='PerceptualLoss',
        layer_weights={'21': 1.0},
        vgg_type='vgg16',
        perceptual_weight=1e-2,
        style_weight=0,
        norm_img=False,
        criterion='mse',
        pretrained='torchvision://vgg16')
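
If you prefer not to edit the file on disk, a minimal sketch of the same idea, assuming an mmcv-style config object (the pop call simply drops the key shown above; treat this as illustrative, not an official API recipe):

    import mmcv

    # Load the config and drop the perceptual loss so no VGG weights are fetched.
    cfg = mmcv.Config.fromfile('configs/restorers/glean/glean_ffhq_16x.py')
    cfg.model.pop('perceptual_loss', None)
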
yxt132 commented 3 years ago

Thanks for the speedy reply. I did that, and now I've run into another issue. I guess I can ignore the checkpoint-loading warning, but the error appears to be about image size. Does the model require a specific input size? My input image is 512x512.

    Use load_from_local loader
    The model and loaded state dict do not match exactly

    unexpected key in source state_dict: perceptual_loss.vgg.mean, perceptual_loss.vgg.std, perceptual_loss.vgg.vgg_layers.0.weight, perceptual_loss.vgg.vgg_layers.0.bias, perceptual_loss.vgg.vgg_layers.2.weight, perceptual_loss.vgg.vgg_layers.2.bias, perceptual_loss.vgg.vgg_layers.5.weight, perceptual_loss.vgg.vgg_layers.5.bias, perceptual_loss.vgg.vgg_layers.7.weight, perceptual_loss.vgg.vgg_layers.7.bias, perceptual_loss.vgg.vgg_layers.10.weight, perceptual_loss.vgg.vgg_layers.10.bias, perceptual_loss.vgg.vgg_layers.12.weight, perceptual_loss.vgg.vgg_layers.12.bias, perceptual_loss.vgg.vgg_layers.14.weight, perceptual_loss.vgg.vgg_layers.14.bias, perceptual_loss.vgg.vgg_layers.17.weight, perceptual_loss.vgg.vgg_layers.17.bias, perceptual_loss.vgg.vgg_layers.19.weight, perceptual_loss.vgg.vgg_layers.19.bias, perceptual_loss.vgg.vgg_layers.21.weight, perceptual_loss.vgg.vgg_layers.21.bias

    2021-05-29 16:33:12,370 - mmgen - INFO - Switch to evaluation style mode: single
    Traceback (most recent call last):
      File "demo/restoration_demo.py", line 45, in <module>
        main()
      File "demo/restoration_demo.py", line 36, in main
        output = restoration_inference(model, args.img_path)
      File "/app/mmedit/apis/restoration_inference.py", line 39, in restoration_inference
        result = model(test_mode=True, **data)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
        result = self.forward(*input, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/fp16_utils.py", line 95, in new_func
        return old_func(*args, **kwargs)
      File "/app/mmedit/models/restorers/srgan.py", line 94, in forward
        return self.forward_test(lq, gt, **kwargs)
      File "/app/mmedit/models/restorers/glean.py", line 53, in forward_test
        output = self.generator(lq)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
        result = self.forward(*input, **kwargs)
      File "/app/mmedit/models/backbones/sr_backbones/glean_styleganv2.py", line 209, in forward
        f'Spatial resolution must equal in_size ({self.in_size}).'
    AssertionError: Spatial resolution must equal in_size (64). Got (512, 512).

yxt132 commented 3 years ago

Got it working. It seems the model takes 64x64 input images. Thanks again!
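
For anyone who hits the same assertion: the glean_ffhq_16x model upsamples a 64x64 input by 16x, so larger images need to be downscaled first. A minimal sketch using OpenCV (file names are placeholders):

    import cv2

    # Downscale an arbitrary image to the 64x64 input size the model asserts on.
    img = cv2.imread('my_face_512.png')
    lr = cv2.resize(img, (64, 64), interpolation=cv2.INTER_AREA)  # (width, height)
    cv2.imwrite('my_face_64x64.png', lr)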

ckkelvinchan commented 3 years ago

Good to hear that :D