UserWarning: postnet.convolutions.13.bias: may contain invalid size of weight. skipping...
warn("{}: may contain invalid size of weight. skipping...".format(k))
Error(s) in loading state_dict for MultiSpeakerTTSModel:
size mismatch for seq2seq.encoder.embed_tokens.weight: copying a param with shape torch.Size([149, 128]) from checkpoint, the shape in current model is torch.Size([149, 256]).
I am using the latest pre-trained model (trained on LJSpeech) and testing speaker adaptation with preprocessed files from VCTK.
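For context, the size mismatch means the checkpoint was saved from a model whose text embedding is 128-dimensional, while the model being built expects 256, so the usual fix is to load the checkpoint with the same hyperparameters/preset it was trained with. As a workaround sketch (using a hypothetical toy module in place of MultiSpeakerTTSModel, not the project's actual loader), mismatched tensors can be filtered out before calling `load_state_dict`:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for MultiSpeakerTTSModel: the checkpoint's
# embedding is 128-dim, the current model is configured for 256.
class Toy(nn.Module):
    def __init__(self, embed_dim):
        super().__init__()
        self.embed_tokens = nn.Embedding(149, embed_dim)

# Simulate a checkpoint trained with embed_dim=128.
ckpt_state = Toy(128).state_dict()

model = Toy(256)
own_state = model.state_dict()

# Keep only parameters whose shapes match the current model;
# report the rest as skipped instead of crashing on size mismatch.
filtered = {k: v for k, v in ckpt_state.items()
            if k in own_state and v.shape == own_state[k].shape}
skipped = [k for k in ckpt_state if k not in filtered]

model.load_state_dict(filtered, strict=False)
print("skipped:", skipped)
```

Note that any skipped parameter (here the embedding itself) stays randomly initialized, so for speaker adaptation it is better to match the checkpoint's original embedding size than to drop the weights.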