primepake / wav2lip_288x288


do inference #143

Closed: see2run closed this issue 1 month ago

see2run commented 2 months ago

Hey, I have done the following:

Trained SyncNet using train_syncnet_sam.py. Resulting checkpoint: wav2lip_288x288/checkpoints/syncnet/actor/best_syncnet_actor.pth.

Trained Wav2Lip using hq_wav2lip_sam_train.py. Resulting checkpoint: wav2lip_288x288/checkpoints/wav/sam/gen_best_wav128_1e4.pth.

However, when I try to run inference with the gen_best_wav128_1e4.pth model (I have already changed img_size = 384), I get the error below. What could be wrong? Can anyone help me?
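For context, the failing call chain is load_model in inference.py. As far as I can tell from the traceback and from the upstream Wav2Lip inference code, it does roughly the following before load_state_dict raises (this is a reconstruction, not code copied from this repo; the import of Wav2Lip from models is an assumption):

    # Rough sketch of what load_model appears to do, reconstructed from the
    # traceback and the upstream Wav2Lip inference script.
    import torch
    from models import Wav2Lip  # assumption: exported like in upstream Wav2Lip

    def load_model(path):
        model = Wav2Lip()
        checkpoint = torch.load(path, map_location="cpu")
        s = checkpoint["state_dict"]
        # strip the DataParallel "module." prefix from the saved keys
        new_s = {k.replace("module.", ""): v for k, v in s.items()}
        model.load_state_dict(new_s)  # <-- the RuntimeError below is raised here
        return model.eval()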

error:

0%| | 0/3 [02:07<?, ?it/s]
Traceback (most recent call last):
  File "inference.py", line 280, in <module>
    main()
  File "inference.py", line 252, in main
    model = load_model(args.checkpoint_path)
  File "inference.py", line 176, in load_model
    model.load_state_dict(new_s)
  File "/home/anaconda3/envs/w2l/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Wav2Lip:
Missing key(s) in state_dict: "face_encoder_blocks.8.0.conv_block.0.weight", "face_encoder_blocks.8.0.conv_block.0.bias", "face_encoder_blocks.8.0.conv_block.1.weight", "face_encoder_blocks.8.0.conv_block.1.bias", "face_encoder_blocks.8.0.conv_block.1.running_mean", "face_encoder_blocks.8.0.conv_block.1.running_var", "face_encoder_blocks.8.1.conv_block.0.weight", "face_encoder_blocks.8.1.conv_block.0.bias", "face_encoder_blocks.8.1.conv_block.1.weight", "face_encoder_blocks.8.1.conv_block.1.bias", "face_encoder_blocks.8.1.conv_block.1.running_mean", "face_encoder_blocks.8.1.conv_block.1.running_var", "face_decoder_blocks.8.0.conv_block.0.weight", "face_decoder_blocks.8.0.conv_block.0.bias", "face_decoder_blocks.8.0.conv_block.1.weight", "face_decoder_blocks.8.0.conv_block.1.bias", "face_decoder_blocks.8.0.conv_block.1.running_mean", "face_decoder_blocks.8.0.conv_block.1.running_var", "face_decoder_blocks.8.1.conv_block.0.weight", "face_decoder_blocks.8.1.conv_block.0.bias", "face_decoder_blocks.8.1.conv_block.1.weight", "face_decoder_blocks.8.1.conv_block.1.bias", "face_decoder_blocks.8.1.conv_block.1.running_mean", "face_decoder_blocks.8.1.conv_block.1.running_var", "face_decoder_blocks.8.2.conv_block.0.weight", "face_decoder_blocks.8.2.conv_block.0.bias", "face_decoder_blocks.8.2.conv_block.1.weight", "face_decoder_blocks.8.2.conv_block.1.bias", "face_decoder_blocks.8.2.conv_block.1.running_mean", "face_decoder_blocks.8.2.conv_block.1.running_var".
Unexpected key(s) in state_dict: "sam.sa.conv1.weight", "audio_refine.0.conv_block.0.weight", "audio_refine.0.conv_block.0.bias", "audio_refine.0.conv_block.1.weight", "audio_refine.0.conv_block.1.bias", "audio_refine.0.conv_block.1.running_mean", "audio_refine.0.conv_block.1.running_var", "audio_refine.0.conv_block.1.num_batches_tracked", "audio_refine.1.conv_block.0.weight", "audio_refine.1.conv_block.0.bias", "audio_refine.1.conv_block.1.weight", "audio_refine.1.conv_block.1.bias", "audio_refine.1.conv_block.1.running_mean", "audio_refine.1.conv_block.1.running_var", "audio_refine.1.conv_block.1.num_batches_tracked", "face_encoder_blocks.0.1.conv_block.0.weight", "face_encoder_blocks.0.1.conv_block.0.bias", "face_encoder_blocks.0.1.conv_block.1.weight", "face_encoder_blocks.0.1.conv_block.1.bias", "face_encoder_blocks.0.1.conv_block.1.running_mean", "face_encoder_blocks.0.1.conv_block.1.running_var", "face_encoder_blocks.0.1.conv_block.1.num_batches_tracked", "face_encoder_blocks.0.2.conv_block.0.weight", "face_encoder_blocks.0.2.conv_block.0.bias", "face_encoder_blocks.0.2.conv_block.1.weight", "face_encoder_blocks.0.2.conv_block.1.bias", "face_encoder_blocks.0.2.conv_block.1.running_mean", "face_encoder_blocks.0.2.conv_block.1.running_var", "face_encoder_blocks.0.2.conv_block.1.num_batches_tracked", "face_encoder_blocks.0.3.conv_block.0.weight", "face_encoder_blocks.0.3.conv_block.0.bias", "face_encoder_blocks.0.3.conv_block.1.weight", "face_encoder_blocks.0.3.conv_block.1.bias", "face_encoder_blocks.0.3.conv_block.1.running_mean", "face_encoder_blocks.0.3.conv_block.1.running_var", "face_encoder_blocks.0.3.conv_block.1.num_batches_tracked", "face_encoder_blocks.1.2.conv_block.0.weight", "face_encoder_blocks.1.2.conv_block.0.bias", "face_encoder_blocks.1.2.conv_block.1.weight", "face_encoder_blocks.1.2.conv_block.1.bias", "face_encoder_blocks.1.2.conv_block.1.running_mean", "face_encoder_blocks.1.2.conv_block.1.running_var", "face_encoder_blocks.1.2.conv_block.1.num_batches_tracked", "face_encoder_blocks.1.3.conv_block.0.weight", "face_encoder_blocks.1.3.conv_block.0.bias", "face_encoder_blocks.1.3.conv_block.1.weight", "face_encoder_blocks.1.3.conv_block.1.bias", "face_encoder_blocks.1.3.conv_block.1.running_mean", "face_encoder_blocks.1.3.conv_block.1.running_var", "face_encoder_blocks.1.3.conv_block.1.num_batches_tracked", "face_encoder_blocks.2.3.conv_block.0.weight", "face_encoder_blocks.2.3.conv_block.0.bias", "face_encoder_blocks.2.3.conv_block.1.weight", "face_encoder_blocks.2.3.conv_block.1.bias", "face_encoder_blocks.2.3.conv_block.1.running_mean", "face_encoder_blocks.2.3.conv_block.1.running_var", "face_encoder_blocks.2.3.conv_block.1.num_batches_tracked", "face_encoder_blocks.4.3.conv_block.0.weight", "face_encoder_blocks.4.3.conv_block.0.bias", "face_encoder_blocks.4.3.conv_block.1.weight", "face_encoder_blocks.4.3.conv_block.1.bias", "face_encoder_blocks.4.3.conv_block.1.running_mean", "face_encoder_blocks.4.3.conv_block.1.running_var", "face_encoder_blocks.4.3.conv_block.1.num_batches_tracked", "face_encoder_blocks.5.3.conv_block.0.weight", "face_encoder_blocks.5.3.conv_block.0.bias", "face_encoder_blocks.5.3.conv_block.1.weight", "face_encoder_blocks.5.3.conv_block.1.bias", "face_encoder_blocks.5.3.conv_block.1.running_mean", "face_encoder_blocks.5.3.conv_block.1.running_var", "face_encoder_blocks.5.3.conv_block.1.num_batches_tracked", "face_encoder_blocks.6.2.conv_block.0.weight", "face_encoder_blocks.6.2.conv_block.0.bias", 
"face_encoder_blocks.6.2.conv_block.1.weight", "face_encoder_blocks.6.2.conv_block.1.bias", "face_encoder_blocks.6.2.conv_block.1.running_mean", "face_encoder_blocks.6.2.conv_block.1.running_var", "face_encoder_blocks.6.2.conv_block.1.num_batches_tracked", "face_encoder_blocks.6.3.conv_block.0.weight", "face_encoder_blocks.6.3.conv_block.0.bias", "face_encoder_blocks.6.3.conv_block.1.weight", "face_encoder_blocks.6.3.conv_block.1.bias", "face_encoder_blocks.6.3.conv_block.1.running_mean", "face_encoder_blocks.6.3.conv_block.1.running_var", "face_encoder_blocks.6.3.conv_block.1.num_batches_tracked", "face_encoder_blocks.7.2.conv_block.0.weight", "face_encoder_blocks.7.2.conv_block.0.bias", "face_encoder_blocks.7.2.conv_block.1.weight", "face_encoder_blocks.7.2.conv_block.1.bias", "face_encoder_blocks.7.2.conv_block.1.running_mean", "face_encoder_blocks.7.2.conv_block.1.running_var", "face_encoder_blocks.7.2.conv_block.1.num_batches_tracked", "audio_encoder.13.conv_block.0.weight", "audio_encoder.13.conv_block.0.bias", "audio_encoder.13.conv_block.1.weight", "audio_encoder.13.conv_block.1.bias", "audio_encoder.13.conv_block.1.running_mean", "audio_encoder.13.conv_block.1.running_var", "audio_encoder.13.conv_block.1.num_batches_tracked", "audio_encoder.14.conv_block.0.weight", "audio_encoder.14.conv_block.0.bias", "audio_encoder.14.conv_block.1.weight", "audio_encoder.14.conv_block.1.bias", "audio_encoder.14.conv_block.1.running_mean", "audio_encoder.14.conv_block.1.running_var", "audio_encoder.14.conv_block.1.num_batches_tracked", "audio_encoder.15.conv_block.0.weight", "audio_encoder.15.conv_block.0.bias", "audio_encoder.15.conv_block.1.weight", "audio_encoder.15.conv_block.1.bias", "audio_encoder.15.conv_block.1.running_mean", "audio_encoder.15.conv_block.1.running_var", "audio_encoder.15.conv_block.1.num_batches_tracked", "audio_encoder.16.conv_block.0.weight", "audio_encoder.16.conv_block.0.bias", "audio_encoder.16.conv_block.1.weight", "audio_encoder.16.conv_block.1.bias", "audio_encoder.16.conv_block.1.running_mean", "audio_encoder.16.conv_block.1.running_var", "audio_encoder.16.conv_block.1.num_batches_tracked". size mismatch for face_encoder_blocks.1.0.conv_block.0.weight: copying a param with shape torch.Size([32, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 16, 5, 5]). size mismatch for face_encoder_blocks.2.0.conv_block.0.weight: copying a param with shape torch.Size([64, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]). size mismatch for face_encoder_blocks.2.0.conv_block.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.0.conv_block.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.0.conv_block.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.0.conv_block.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.0.conv_block.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). 
size mismatch for face_encoder_blocks.2.1.conv_block.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]). size mismatch for face_encoder_blocks.2.1.conv_block.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.1.conv_block.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.1.conv_block.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.1.conv_block.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.1.conv_block.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.2.conv_block.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]). size mismatch for face_encoder_blocks.2.2.conv_block.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.2.conv_block.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.2.conv_block.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.2.conv_block.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.2.2.conv_block.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]). size mismatch for face_encoder_blocks.3.0.conv_block.0.weight: copying a param with shape torch.Size([128, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 32, 3, 3]). size mismatch for face_encoder_blocks.3.0.conv_block.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.0.conv_block.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.0.conv_block.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.0.conv_block.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.0.conv_block.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.1.conv_block.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]). size mismatch for face_encoder_blocks.3.1.conv_block.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). 
size mismatch for face_encoder_blocks.3.1.conv_block.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.1.conv_block.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.1.conv_block.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.1.conv_block.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.2.conv_block.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]). size mismatch for face_encoder_blocks.3.2.conv_block.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.2.conv_block.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.2.conv_block.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.2.conv_block.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.2.conv_block.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.3.conv_block.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]). size mismatch for face_encoder_blocks.3.3.conv_block.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.3.conv_block.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.3.conv_block.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.3.conv_block.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.3.3.conv_block.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]). size mismatch for face_encoder_blocks.4.0.conv_block.0.weight: copying a param with shape torch.Size([256, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]). size mismatch for face_encoder_blocks.4.0.conv_block.0.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.0.conv_block.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.0.conv_block.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). 
size mismatch for face_encoder_blocks.4.0.conv_block.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.0.conv_block.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.1.conv_block.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]). size mismatch for face_encoder_blocks.4.1.conv_block.0.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.1.conv_block.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.1.conv_block.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.1.conv_block.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.1.conv_block.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.2.conv_block.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]). size mismatch for face_encoder_blocks.4.2.conv_block.0.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.2.conv_block.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.2.conv_block.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.2.conv_block.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.4.2.conv_block.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for face_encoder_blocks.5.0.conv_block.0.weight: copying a param with shape torch.Size([512, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]). size mismatch for face_encoder_blocks.5.0.conv_block.0.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.0.conv_block.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.0.conv_block.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.0.conv_block.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). 
size mismatch for face_encoder_blocks.5.0.conv_block.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.1.conv_block.0.weight: copying a param with shape torch.Size([512, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]). size mismatch for face_encoder_blocks.5.1.conv_block.0.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.1.conv_block.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.1.conv_block.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.1.conv_block.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.1.conv_block.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.2.conv_block.0.weight: copying a param with shape torch.Size([512, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]). size mismatch for face_encoder_blocks.5.2.conv_block.0.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.2.conv_block.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.2.conv_block.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.2.conv_block.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.5.2.conv_block.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for face_encoder_blocks.6.0.conv_block.0.weight: copying a param with shape torch.Size([1024, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 256, 3, 3]). size mismatch for face_encoder_blocks.6.0.conv_block.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.6.0.conv_block.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.6.0.conv_block.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.6.0.conv_block.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.6.0.conv_block.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). 
size mismatch for face_encoder_blocks.6.1.conv_block.0.weight: copying a param with shape torch.Size([1024, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for face_encoder_blocks.6.1.conv_block.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.6.1.conv_block.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.6.1.conv_block.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.6.1.conv_block.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.6.1.conv_block.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.7.0.conv_block.0.weight: copying a param with shape torch.Size([1024, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for face_encoder_blocks.7.0.conv_block.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.7.0.conv_block.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.7.0.conv_block.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.7.0.conv_block.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.7.0.conv_block.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.7.1.conv_block.0.weight: copying a param with shape torch.Size([1024, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 512, 1, 1]). size mismatch for face_encoder_blocks.7.1.conv_block.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.7.1.conv_block.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.7.1.conv_block.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.7.1.conv_block.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_encoder_blocks.7.1.conv_block.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for audio_encoder.11.conv_block.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 256, 3, 3]). 
size mismatch for audio_encoder.11.conv_block.0.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for audio_encoder.11.conv_block.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for audio_encoder.11.conv_block.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for audio_encoder.11.conv_block.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for audio_encoder.11.conv_block.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for audio_encoder.12.conv_block.0.weight: copying a param with shape torch.Size([512, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 1, 1]). size mismatch for face_decoder_blocks.0.0.conv_block.0.weight: copying a param with shape torch.Size([1024, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 512, 1, 1]). size mismatch for face_decoder_blocks.0.0.conv_block.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.0.0.conv_block.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.0.0.conv_block.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.0.0.conv_block.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.0.0.conv_block.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.1.0.conv_block.0.weight: copying a param with shape torch.Size([2048, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 512, 3, 3]). size mismatch for face_decoder_blocks.1.0.conv_block.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.1.0.conv_block.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.1.0.conv_block.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.1.0.conv_block.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.1.0.conv_block.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.1.1.conv_block.0.weight: copying a param with shape torch.Size([1024, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). 
size mismatch for face_decoder_blocks.1.1.conv_block.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.1.1.conv_block.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.1.1.conv_block.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.1.1.conv_block.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.1.1.conv_block.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.0.conv_block.0.weight: copying a param with shape torch.Size([2048, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 512, 3, 3]). size mismatch for face_decoder_blocks.2.0.conv_block.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.0.conv_block.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.0.conv_block.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.0.conv_block.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.0.conv_block.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.1.conv_block.0.weight: copying a param with shape torch.Size([1024, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for face_decoder_blocks.2.1.conv_block.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.1.conv_block.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.1.conv_block.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.1.conv_block.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.1.conv_block.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.2.conv_block.0.weight: copying a param with shape torch.Size([1024, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for face_decoder_blocks.2.2.conv_block.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). 
size mismatch for face_decoder_blocks.2.2.conv_block.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.2.conv_block.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.2.conv_block.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.2.2.conv_block.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.0.conv_block.0.weight: copying a param with shape torch.Size([1536, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 512, 3, 3]). size mismatch for face_decoder_blocks.3.0.conv_block.0.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.0.conv_block.1.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.0.conv_block.1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.0.conv_block.1.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.0.conv_block.1.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.1.conv_block.0.weight: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for face_decoder_blocks.3.1.conv_block.0.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.1.conv_block.1.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.1.conv_block.1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.1.conv_block.1.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.1.conv_block.1.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.2.conv_block.0.weight: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for face_decoder_blocks.3.2.conv_block.0.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.2.conv_block.1.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). 
size mismatch for face_decoder_blocks.3.2.conv_block.1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.2.conv_block.1.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.3.2.conv_block.1.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for face_decoder_blocks.4.0.conv_block.0.weight: copying a param with shape torch.Size([1024, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([768, 384, 3, 3]). size mismatch for face_decoder_blocks.4.0.conv_block.0.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.0.conv_block.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.0.conv_block.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.0.conv_block.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.0.conv_block.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.1.conv_block.0.weight: copying a param with shape torch.Size([512, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([384, 384, 3, 3]). size mismatch for face_decoder_blocks.4.1.conv_block.0.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.1.conv_block.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.1.conv_block.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.1.conv_block.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.1.conv_block.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.2.conv_block.0.weight: copying a param with shape torch.Size([512, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([384, 384, 3, 3]). size mismatch for face_decoder_blocks.4.2.conv_block.0.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.2.conv_block.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.2.conv_block.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). 
size mismatch for face_decoder_blocks.4.2.conv_block.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.4.2.conv_block.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([384]). size mismatch for face_decoder_blocks.5.0.conv_block.0.weight: copying a param with shape torch.Size([640, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 256, 3, 3]).
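In case it helps with diagnosing this, here is a minimal sketch (my own, not part of the repo) that diffs the checkpoint's keys against the model that inference.py builds. It assumes the checkpoint stores weights under "state_dict" as above and that Wav2Lip is importable from models; the path is the one from my setup:

    import torch
    from models import Wav2Lip  # assumption: same import as used by inference.py

    ckpt = torch.load("checkpoints/wav/sam/gen_best_wav128_1e4.pth", map_location="cpu")
    state = ckpt.get("state_dict", ckpt)
    state = {k.replace("module.", "", 1): v for k, v in state.items()}

    model_state = Wav2Lip().state_dict()
    model_keys, ckpt_keys = set(model_state), set(state)

    print("missing from checkpoint:", sorted(model_keys - ckpt_keys))
    print("unexpected in checkpoint:", sorted(ckpt_keys - model_keys))
    for k in sorted(model_keys & ckpt_keys):
        if model_state[k].shape != state[k].shape:
            print("shape mismatch:", k, tuple(state[k].shape), "vs", tuple(model_state[k].shape))

And a quick way to see which top-level modules the checkpoint actually contains, without constructing any model (anything not defined by the model class used at inference will come back as an "unexpected key"):

    import torch

    ckpt = torch.load("checkpoints/wav/sam/gen_best_wav128_1e4.pth", map_location="cpu")
    state = ckpt.get("state_dict", ckpt)
    # top-level module names stored in the checkpoint,
    # e.g. face_encoder_blocks, audio_encoder, sam, audio_refine, ...
    keys = [k.replace("module.", "", 1) for k in state]
    print(sorted({k.split(".", 1)[0] for k in keys}))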