Closed saibharani closed 5 years ago
Hi @saibharani ,
The loss will probably not decrease anymore, but the synthesis will get better and it will not skip words in the future. To get an idea, I've trained the encoder for 300 epochs for the Romanian model and I just reached 190 epochs for the English dataset (1 month+ of training). Just keep training the encoder on your dataset. (you have the --resume option)
How many hours of training data do you have?
Sorry for the late reply, I was traveling yesterday. My training data is about 5.5 hours and the pronunciation is good, but the only problem is that it skips some words and the audio is distorted where the words are skipped. Do you suggest using the updated repo, or can I continue with the previous one? And if I update the repo, do I have to train it again?
For 5.5 hours of training data you still require about 200 epochs for the encoder (if it's single speaker). The new code adds global style tokens, and the older models will no longer be supported. So, I suggest you update the code and restart training (re-importing the dataset might be necessary). I also suggest you switch to 16 kHz.
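Switching to 16 kHz means the audio has to be resampled before re-import. As a minimal illustration of what that conversion does (this is not the repo's import code; a real pipeline would use a proper resampler such as sox or librosa, since naive interpolation without a low-pass filter can alias), a linear-interpolation downsampler might look like:

```python
def resample(samples, src_rate, dst_rate):
    """Resample a mono waveform from src_rate to dst_rate Hz by
    linear interpolation. Illustrative sketch only -- production
    resampling should apply an anti-aliasing filter first."""
    if src_rate == dst_rate:
        return list(samples)
    ratio = src_rate / dst_rate          # input samples per output sample
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(out_len):
        pos = i * ratio                  # fractional position in the input
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1.0 - frac) + samples[hi] * frac)
    return out

# One second of 22.05 kHz audio becomes 16000 samples at 16 kHz:
one_second = [0.0] * 22050
print(len(resample(one_second, 22050, 16000)))
```

The point is that the encoder's frame timing depends on the sample rate, so all training audio must be converted consistently before re-importing the dataset.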
Let the encoder train for two to three weeks and check the results then. I've also added support for three vocoders: wavenet, clarinet and waveglow.
Let me know if there is anything else. Best, Tibi
Ok, thank you. I will retrain it with the new code. Do you plan on releasing any waveglow model, and which of the three vocoders has the best inference time?
Yes, I will release a waveglow model. If you check the Colaboratory notebook, it already downloads a partially trained model from a Google Drive URL. I still have to train it for 2-3 weeks, but I will add a permanent link after that.
The best results seem to come from wavenet and waveglow. Clarinet is a little bit muffled.
I have trained an encoder on custom data in the Telugu language for about 4 days, but during inference some words are not synthesized and the audio skips those words. Do you suggest any hyperparameter adjustments or something else to make the synthesizer work correctly? I am using the LJSpeech vocoder, trained on the last version of the repo before the new g2p pull. The loss is around 1.8 to 2.3; it has remained in that range for the past 20 hours. Thank you