LifaSun opened this issue 3 years ago
Please use this commit: c008dd766d4c72709864df6d41854b44ccf96eea
git reset c008dd766d4c72709864df6d41854b44ccf96eea
Thanks!
I ran into the same problem and don't understand what to change. How do I fix it?
Hey, I got the same issue. Can you tell me how to fix it? Thanks a lot.
loading model from ./checkpoint/checkpoint_500000.pth
Traceback (most recent call last):
  File "synthesize.py", line 123, in <module>
I ran into this problem too. How do I solve it?
Sorry for the late reply. The proper fix would be to open a separate branch for aishell3 (multi-speaker with speaker embedding), but since I don't have enough time, a simple workaround for this error is to reset to the biaobei version.
Solution:
git clone https://github.com/ranchlai/mandarin-tts.git
cd mandarin-tts
git reset c008dd766d4c72709864df6d41854b44ccf96eea --hard
python synthesize.py --input="您的电话余额不足,请及时充值"
It should work.
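If you want to double-check that the checkpoint you downloaded really is the old biaobei one, a rough diagnostic (not part of the steps above; the path is the one used in this thread, and the "model" unwrapping is only a guess at how the file might be structured) is to open the file and print the stored shapes. Shapes like [1612, 256] for encoder.src_word_emb.weight, as in the traceback below, indicate a checkpoint that only loads with the matching old code:

```python
import torch

# Rough diagnostic sketch (assumption: the checkpoint is a plain state_dict,
# possibly nested under a "model" key). It just prints the shapes stored in
# the file so you can compare them against what the current code expects.
ckpt_path = "./checkpoint/checkpoint_500000.pth"  # adjust to your checkpoint
sd = torch.load(ckpt_path, map_location="cpu")

if isinstance(sd, dict) and isinstance(sd.get("model"), dict):
    sd = sd["model"]  # unwrap nested weights (an assumption, may not apply)

for name, value in sd.items():
    if hasattr(value, "shape"):
        print(name, tuple(value.shape))
```

If these shapes match what the checked-out code builds, synthesize.py should load the checkpoint without the size-mismatch error.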
I ran into this problem too. Could you tell me how to change the code directly? I don't know what's causing it.
Old version: use the biaobei branch: git checkout biaobei.
New version: try the mtts branch; going forward, only that branch will be maintained.
I also ran into this problem, and with git reset c008dd7 I still can't find the repository. Is there any other solution?
@ranchlai Thanks for sharing! Following the README step by step, I get this error:
loading model from ./checkpoint/checkpoint_500000.pth
Traceback (most recent call last):
  File "synthesize.py", line 123, in <module>
    model = build_model().to(device)
  File "synthesize.py", line 50, in build_model
    model.load_state_dict(sd)
  File "/usr/local/lib/....../module.py", line 1224, in load_state_dict
RuntimeError: Error(s) in loading state_dict for FastSpeech2:
    Missing key(s) in state_dict: "decoder.speaker_fc.weight"
    size mismatch for encoder.position_enc: copying a param with shape torch.Size([1, 1001, 256]) from checkpoint, the shape in current model is torch.Size([1, 2001, 256]).
    size mismatch for encoder.src_word_emb.weight: copying a param with shape torch.Size([1612, 256]) from checkpoint, the shape in current model is torch.Size([1915, 256]).
    size mismatch for encoder.cn_word_emb.weight: copying a param with shape torch.Size([4135, 256]) from checkpoint, the shape in current model is torch.Size([4502, 256]).
    size mismatch for decoder.position_enc: copying a param with shape torch.Size([1, 1001, 256]) from checkpoint, the shape in current model is torch.Size([1, 2001, 256]).
Thanks!