hathubkhn opened this issue 2 years ago
Hello authors, first of all, thank you for giving us such an impressive repository. I want to re-train your model on Korean, for example KSS (Korean single speaker). However, when I synthesize, the results are not good for Korean. Can you give me some guidelines for that? Thank you very much.
Hi, @hathubkhn. What kind of dataset are you using? How is the quality, and what is the style of the speech (reading style or expressive style)? These factors affect TTS training a lot.
To my understanding, if you want to train a high-quality TTS system, you need high-quality data. I am also running some experiments that use a speech enhancement system as a front-end (i.e., as a preprocessing step). You can check this repo: https://github.com/facebookresearch/denoiser. The enhanced quality is quite good.
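For reference, here is a minimal sketch of using that repo's pretrained model as a preprocessing step, based on the usage shown in the denoiser README (the file names are placeholders):

```python
import torch
import torchaudio
from denoiser import pretrained
from denoiser.dsp import convert_audio

# Pretrained DNS64 model from the denoiser repo.
model = pretrained.dns64().eval()

wav, sr = torchaudio.load("noisy.wav")  # placeholder input file
# Match the sample rate and channel count the model expects (16 kHz, mono).
wav = convert_audio(wav, sr, model.sample_rate, model.chin)

with torch.no_grad():
    enhanced = model(wav[None])[0]

# Note: the output is 16 kHz, so you would resample back to your TTS
# sample rate (e.g., 22050 Hz) before feature extraction.
torchaudio.save("enhanced.wav", enhanced.cpu(), model.sample_rate)
```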
As for the style: if your data is expressive (e.g., emotional), you might need another encoder to model these styles. A well-known work is Global Style Tokens (https://arxiv.org/abs/1803.09017). However, we do not support style modeling in this repo.
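For a rough idea of what such an encoder involves, here is a minimal sketch of a style-token layer in the spirit of that paper; the dimensions and module names are illustrative, not something this repo provides:

```python
import torch
import torch.nn as nn

class StyleTokenLayer(nn.Module):
    """Attend over a bank of learned style tokens with a reference embedding."""

    def __init__(self, ref_dim=128, num_tokens=10, token_dim=256, num_heads=4):
        super().__init__()
        # Bank of learnable style tokens (the GST paper uses 10 tokens).
        self.tokens = nn.Parameter(torch.randn(num_tokens, token_dim))
        self.attn = nn.MultiheadAttention(
            embed_dim=token_dim, num_heads=num_heads, batch_first=True)
        self.query_proj = nn.Linear(ref_dim, token_dim)

    def forward(self, ref_embedding):
        # ref_embedding: [batch, ref_dim], e.g., the output of a reference
        # encoder that summarizes a mel spectrogram of the reference audio.
        query = self.query_proj(ref_embedding).unsqueeze(1)    # [B, 1, D]
        keys = torch.tanh(self.tokens).unsqueeze(0).expand(
            ref_embedding.size(0), -1, -1)                     # [B, T, D]
        style_embedding, _ = self.attn(query, keys, keys)      # [B, 1, D]
        return style_embedding.squeeze(1)                      # [B, D]
```

The resulting style embedding is typically broadcast over time and added to (or concatenated with) the text encoder outputs.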
Thank you for your reply. Actually, I am using the KSS dataset (https://www.kaggle.com/datasets/bryanpark/korean-single-speaker-speech-dataset). The sampling rate is 22050 Hz. However, when I finished training and checked some generated files, I found the quality was not good. I don't know whether using the universal HiFi-GAN vocoder affects it or not. As for the dataset, it is a single speaker with a normal (non-expressive) voice.
I also have another question: if I want to incorporate emotional styles, do I need to use another encoder as you said, and also add an extra emotion loss when training on that kind of data?
eval.zip This is the result that I extracted from the model. Could you help me evaluate what the problem is? :(
Hi, @hathubkhn. I think using HiFi-GAN can actually improve the quality. Just make sure you extract the mel spectrogram properly (consistent with your HiFi-GAN) in the preprocessing stage.
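To be concrete, the mel parameters in your preprocessing must match the vocoder's config.json. For the universal HiFi-GAN checkpoint I believe those are 22050 Hz, n_fft=1024, hop_length=256, win_length=1024, 80 mel bins, fmin=0, fmax=8000, with log compression, but please verify against your own config. A sketch of a matching extraction (an approximation of HiFi-GAN's `meldataset.mel_spectrogram`):

```python
import torch
import torchaudio

# Values assumed from the universal HiFi-GAN config.json; verify against
# the checkpoint you actually use.
SR, N_FFT, HOP, WIN = 22050, 1024, 256, 1024
N_MELS, FMIN, FMAX = 80, 0, 8000

mel_fn = torchaudio.transforms.MelSpectrogram(
    sample_rate=SR, n_fft=N_FFT, hop_length=HOP, win_length=WIN,
    n_mels=N_MELS, f_min=FMIN, f_max=FMAX,
    power=1.0, norm="slaney", mel_scale="slaney",
)

def log_mel(wav: torch.Tensor) -> torch.Tensor:
    """Magnitude mel with HiFi-GAN-style log compression."""
    return torch.log(torch.clamp(mel_fn(wav), min=1e-5))

wav, sr = torchaudio.load("sample.wav")  # placeholder file
assert sr == SR, "resample first if the audio is not 22050 Hz"
mel = log_mel(wav)  # [channels, n_mels, frames]
```

If your preprocessing used, say, a different hop length or normalization, the vocoder will produce noticeably degraded audio even when the acoustic model is fine.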
I listened to your sample. It sounds like some phonemes are not pronounced properly, while others sound okay (like the ones at the beginning). Just guessing, but maybe you can check your text preprocessing to see whether it transforms the text into the correct phonemes.
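Since your data is Korean, one way to sanity-check this (just a suggestion; g2pk is not something this repo ships) is to run the text through a Korean G2P and decompose the result into jamo, then compare that with what your text cleaner actually feeds the model:

```python
# pip install g2pk jamo
from g2pk import G2p          # Korean grapheme-to-phoneme (g2pK project)
from jamo import h2j, j2hcj   # Hangul syllable -> jamo decomposition

g2p = G2p()

text = "안녕하세요"                  # sample sentence: "Hello"
pronounced = g2p(text)              # surface pronunciation after sound rules
phonemes = j2hcj(h2j(pronounced))   # split syllables into individual jamo
print(pronounced, list(phonemes))
```

If the sequence printed here does not match the symbols your preprocessing produces, that mismatch would explain the mispronounced phonemes.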
You can also check whether the model can synthesize the training data well.
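Another useful check is copy synthesis: feed a ground-truth mel from your preprocessed training set directly to HiFi-GAN. If that already sounds bad, the problem is the mel/vocoder mismatch rather than the acoustic model. A sketch, assuming the official hifi-gan repo layout; all paths are placeholders:

```python
import json

import numpy as np
import soundfile as sf
import torch

from env import AttrDict      # from the hifi-gan repo
from models import Generator  # from the hifi-gan repo

# Placeholder paths; point these at your universal checkpoint and its config.
with open("hifigan/config.json") as f:
    h = AttrDict(json.load(f))

generator = Generator(h)
ckpt = torch.load("hifigan/generator_universal.pth.tar", map_location="cpu")
generator.load_state_dict(ckpt["generator"])
generator.eval()
generator.remove_weight_norm()

# A ground-truth mel from the preprocessed training set (placeholder path).
mel = torch.from_numpy(np.load("mel/kss-mel-00001.npy")).float()
mel = mel.T.unsqueeze(0)  # -> [1, n_mels, frames]; adjust if yours differs

with torch.no_grad():
    wav = generator(mel).squeeze().numpy()

sf.write("copy_synth.wav", wav, h.sampling_rate)
```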
Thank you. What if the model does transform the text into the correct phonemes? Could there be any other reasons for this problem?