AliaksandrSiarohin / monkey-net

Animating Arbitrary Objects via Deep Motion Transfer

sizes larger than 64x64 do not work for the nemo model #18

Open PapaMadeleine2022 opened 4 years ago

PapaMadeleine2022 commented 4 years ago

Hello, I downloaded the pre-trained nemo model from https://yadi.sk/d/EX7N9fuIuE4FNg, but I have two problems:

  1. With an image size of 64x64, using test/213_deliberate_smile_1.png from the nemo dataset as the driving video and the first five frames of test/505_spontaneous_smile_4.png as the source, the nemo model works very well. But when I resize the same driving and source images to 128x128, 256x256, or 512x512, the result.gif is bad.

  2. When I use test/213_deliberate_smile_1.png from the nemo dataset as the driving video and a 64x64 test.gif made from an ordinary frontal face image as the source, the result.gif is also bad.

Can anyone give some advice on how to fix these problems? @AliaksandrSiarohin Thank you very much~

AliaksandrSiarohin commented 4 years ago

First of all, the test/****.png files are actually videos; the frames are stacked together for simpler I/O (see the sketch at the end of this comment).

  1. What do you mean by resize? Did you resize your own images to 64x64, or resize test/***.png to 128x128? If you want to use the model at a higher resolution, it has to be trained on a higher-resolution dataset. For example, 256x256 models trained on nemo can be found here, and 256x256 models trained on VoxCeleb here.

  2. What do you mean by a common front face image? Please post your images and your results.
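
For illustration, here is roughly how one of these stacked video files can be split back into individual frames. This is only a minimal sketch mirroring the reshaping logic in frames_dataset.py; the file name and the 64x64 frame size are examples.

    import numpy as np
    from skimage import io

    # Example only: read a stacked "video" png and split it into frames.
    # Assumes the frames are stacked along the width, as frames_dataset.py expects.
    image = io.imread('test/213_deliberate_smile_1.png')   # shape (64, 64 * num_frames, 3)
    frame_shape = (64, 64, 3)

    video = np.moveaxis(image, 1, 0)               # (64 * num_frames, 64, 3)
    video = video.reshape((-1,) + frame_shape)     # (num_frames, 64, 64, 3), frames transposed
    video = np.moveaxis(video, 1, 2)               # undo the per-frame transpose
    print(video.shape)                             # (num_frames, 64, 64, 3)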

PapaMadeleine2022 commented 4 years ago

@AliaksandrSiarohin Thanks for your reply.

  1. I use resize. For example, for the driving video test/213_deliberate_smile_1.png, I modified some code in frames_dataset.py to:
        # split the horizontally stacked frames into individual frames
        video_array = np.moveaxis(image, 1, 0)
        video_array = video_array.reshape((-1,) + image_shape)
        video_array = np.moveaxis(video_array, 1, 2)
        # upsample every 64x64 frame to 256x256 (resize here is skimage.transform.resize)
        video_array = np.array([resize(frame, (256, 256)) for frame in video_array])

Of course, I apply the same 256x256 resize to the source, i.e. the first five frames of test/505_spontaneous_smile_4.png. The resulting images are blurred with both the 128x128 and the 256x256 resize.

Thanks for your 256x256 pre-trained model, but how should I modify the configuration in config/nemo.yaml? I get this error:

Traceback (most recent call last):
 ...
RuntimeError: Error(s) in loading state_dict for MotionTransferGenerator:
    Unexpected key(s) in state_dict: "appearance_encoder.down_blocks.5.conv.weight", "appearance_encoder.down_blocks.5.conv.bias", "appearance_encoder.down_blocks.5.norm.weight", "appearance_encoder.down_blocks.5.norm.bias", "appearance_encoder.down_blocks.5.norm.running_mean", "appearance_encoder.down_blocks.5.norm.running_var", "appearance_encoder.down_blocks.5.norm.num_batches_tracked", "appearance_encoder.down_blocks.6.conv.weight", "appearance_encoder.down_blocks.6.conv.bias", "appearance_encoder.down_blocks.6.norm.weight", "appearance_encoder.down_blocks.6.norm.bias", "appearance_encoder.down_blocks.6.norm.running_mean", "appearance_encoder.down_blocks.6.norm.running_var", "appearance_encoder.down_blocks.6.norm.num_batches_tracked", "video_decoder.up_blocks.5.conv.weight", "video_decoder.up_blocks.5.conv.bias", "video_decoder.up_blocks.5.norm.weight", "video_decoder.up_blocks.5.norm.bias", "video_decoder.up_blocks.5.norm.running_mean", "video_decoder.up_blocks.5.norm.running_var", "video_decoder.up_blocks.5.norm.num_batches_tracked", "video_decoder.up_blocks.6.conv.weight", "video_decoder.up_blocks.6.conv.bias", "video_decoder.up_blocks.6.norm.weight", "video_decoder.up_blocks.6.norm.bias", "video_decoder.up_blocks.6.norm.running_mean", "video_decoder.up_blocks.6.norm.running_var", "video_decoder.up_blocks.6.norm.num_batches_tracked".
    size mismatch for appearance_encoder.down_blocks.4.conv.weight: copying a param with shape torch.Size([1024, 512, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 1, 3, 3]).
    size mismatch for appearance_encoder.down_blocks.4.conv.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for appearance_encoder.down_blocks.4.norm.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
...
  2. I have uploaded 4 test images here; each xxx_result.gif corresponds to the xxx.gif used as the source image for the nemo-ckp.pth.tar model, with test/213_deliberate_smile_1.png as the driving video.
AliaksandrSiarohin commented 4 years ago
  1. Yes, the keypoints are learned at a resolution of 64x64, and I doubt they generalize to higher resolutions. You need another model, or a model trained at a different resolution. The model params should be the same as in vox.yaml (see also the checkpoint-inspection sketch after this list):

    model_params:
      common_params:
        num_kp: 10
        kp_variance: 'matrix'
        num_channels: 3
      kp_detector_params:
        temperature: 0.1
        block_expansion: 32
        max_features: 1024
        scale_factor: 0.25
        num_blocks: 5
        clip_variance: 0.001
      generator_params:
        interpolation_mode: 'trilinear'
        block_expansion: 32
        max_features: 1024
        num_blocks: 7
        num_refinement_blocks: 4
        dense_motion_params:
          block_expansion: 32
          max_features: 1024
          num_blocks: 5
          use_mask: True
          use_correction: True
          scale_factor: 0.25
          mask_embedding_params:
            use_heatmap: True
            use_deformed_source_image: True
            heatmap_type: 'difference'
            norm_const: 100
          num_group_blocks: 2
        kp_embedding_params:
          scale_factor: 0.25
          use_heatmap: True
          norm_const: 100
          heatmap_type: 'difference'
      discriminator_params:
        kp_embedding_params:
          norm_const: 100
        block_expansion: 32
        max_features: 256
        num_blocks: 4
  2. Most likely nemo is too small a dataset to generalize to arbitrary faces. Try the model trained on VoxCeleb.
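
As for the size-mismatch error above: it just means the checkpoint was trained with more (and wider) blocks than the model built from config/nemo.yaml. Before editing the yaml, you can check what architecture a checkpoint expects by listing its keys. This is a minimal sketch; the checkpoint path is a placeholder, and storing the generator weights under a 'generator' entry is an assumption about the checkpoint layout.

    import torch

    # Placeholder path; replace with the downloaded 256x256 checkpoint.
    checkpoint = torch.load('path/to/256x256-checkpoint.pth.tar', map_location='cpu')
    # Assumption: generator weights live under a 'generator' key; fall back to a flat state_dict.
    state_dict = checkpoint.get('generator', checkpoint)

    # How many appearance-encoder down blocks the checkpoint contains
    # (this is why the error above complains about unexpected down_blocks.5 and .6).
    blocks = {key.split('.')[2] for key in state_dict if key.startswith('appearance_encoder.down_blocks')}
    print('down blocks in checkpoint:', sorted(blocks, key=int))

    # The widest convolution hints at max_features (1024 in the error above).
    print('largest conv width:', max(w.shape[0] for k, w in state_dict.items() if k.endswith('conv.weight')))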