aijianiula0601 closed this issue 2 years ago.
Hi @aijianiula0601, the current implementation doesn't yet support languages and datasets other than LJS, so you may have to apply the proper boundary function for them yourself. I may have misunderstood your question, since I'm answering via Google Translate; I can answer better in English.
Closed due to inactivity.
Hi, I have the same problem. Is there a chance you could help me fix this? Our dataset is English too, and I've also used MFA for alignment. I did, however, change some of the config fields to match our data: sampling rate, filter length, hop length, and window length. I've also used the same lexicon (so I guess the problem may come from words that are present in our dataset but not in the lexicon?).
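If the suspicion above is right, a quick way to confirm it is to list the dataset words that are missing from the lexicon before running MFA. This is only a sketch: the lexicon format assumed here is the usual MFA style (word followed by phonemes, one entry per line), and the function names are my own, not from this repo.

```python
def load_lexicon_words(lexicon_path):
    """Collect the word column from an MFA-style lexicon file
    (one entry per line: WORD PHONE1 PHONE2 ...)."""
    words = set()
    with open(lexicon_path, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split()
            if parts:
                words.add(parts[0].lower())
    return words


def find_oov(transcripts, lexicon_words):
    """Return dataset words (lowercased, basic punctuation stripped)
    that do not appear in the lexicon."""
    oov = set()
    for text in transcripts:
        for word in text.lower().split():
            word = word.strip(".,!?;:\"'")
            if word and word not in lexicon_words:
                oov.add(word)
    return sorted(oov)
```

Any word returned by `find_oov` would get no alignment from MFA, which can later surface as a phoneme/word bookkeeping mismatch during preprocessing.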
May I ask whether the current code can actually be trained? In the text processing, the code uses individual characters as units rather than space-separated phonemes, and this doesn't match `phones_per_word`: I found that `len(phones) != sum(phones_per_word)`. The cause I traced is that `phones` is built character by character.
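The mismatch described above can be reproduced in a few lines. This is an illustrative sketch (the phoneme string and variable names are my own, not taken from the repo): splitting the aligner's output into characters inflates the count, while splitting on spaces matches `phones_per_word`.

```python
def char_units(phone_string):
    """Treat every non-space character as a unit
    (the buggy behavior described above)."""
    return [c for c in phone_string.replace(" ", "")]


def phoneme_units(phone_string):
    """Treat space-separated phonemes as units
    (what phones_per_word expects)."""
    return phone_string.split()


# Example: the word "hello" has 4 ARPAbet phonemes.
phone_string = "HH AH0 L OW1"
phones_per_word = [4]

# Space-separated phonemes agree with phones_per_word...
assert len(phoneme_units(phone_string)) == sum(phones_per_word)
# ...but character units do not (9 characters vs. 4 phonemes).
assert len(char_units(phone_string)) != sum(phones_per_word)
```

So the fix would be to tokenize the phoneme string on whitespace before comparing against `phones_per_word`.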