After training the speech synthesiser for 100k steps, I can't get it to synthesise because of the dimension error below. Could you help me with it, please? Thanks in advance!
Using TensorFlow backend.
Command line args:
{'--conditional': '/data/Daniel/Speech_synthesis/wavenet_vocoder_orig/cmu_arctic/cmu_arctic-mel-00001.npy',
'--file-name-suffix': '',
'--help': False,
'--hparams': '',
'--initial-value': None,
'--length': '32000',
'--output-html': False,
'--preset': None,
'--speaker-id': None,
'&lt;checkpoint&gt;': 'checkpoints/checkpoint_step000100000.pth',
'&lt;dst_dir&gt;': 'generated/test_awb'}
Load checkpoint from checkpoints/checkpoint_step000100000.pth
Traceback (most recent call last):
File "synthesis.py", line 179, in
model.load_state_dict(checkpoint["state_dict"])
File "/data/anaconda/envs/Danielp3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 721, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for WaveNet:
While copying the parameter named "first_conv.weight_v", whose dimensions in the model are torch.Size([512, 1, 1]) and whose dimensions in the checkpoint are torch.Size([512, 256, 1]).
While copying the parameter named "last_conv_layers.3.bias", whose dimensions in the model are torch.Size([30]) and whose dimensions in the checkpoint are torch.Size([256]).
While copying the parameter named "last_conv_layers.3.weight_g", whose dimensions in the model are torch.Size([30, 1, 1]) and whose dimensions in the checkpoint are torch.Size([256, 1, 1]).
While copying the parameter named "last_conv_layers.3.weight_v", whose dimensions in the model are torch.Size([30, 256, 1]) and whose dimensions in the checkpoint are torch.Size([256, 256, 1]).