keonlee9420 / Comprehensive-Transformer-TTS

A Non-Autoregressive Transformer based Text-to-Speech, supporting a family of SOTA transformers with supervised and unsupervised duration modeling. This project grows with the research community, aiming to achieve the ultimate TTS.
MIT License

Prosody Loss #15

Open inconnu11 opened 1 year ago

inconnu11 commented 1 year ago

Hi, I am adding your MDN prosody modeling code to my Tacotron, but I ran into a couple of questions about it. First, the prosody loss is added to the total loss only after prosody_loss_enable_steps, yet in the training steps before that the prosody representation is already added to the text encoding. Does this mean that before prosody_loss_enable_steps the prosody representation is optimized without the prosody loss? Second, during training, the gradient for the prosody predictor should act like a "stop gradient", but I couldn't find the relevant code. Thanks!


keonlee9420 commented 1 year ago

Hi @inconnu11 , thanks for your attention.

My intention was to prevent the prosody encoder from learning meaningless representations during the first few training steps. But you can effectively remove prosody_loss_enable_steps (by setting it to 1, for example) if you don't care about that. Otherwise, there should be no gain from backprop through the prosody encoder, even though its output is still added to the text hidden states.
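
For anyone reading along, here is a minimal sketch of the two mechanisms being discussed, with hypothetical names (this is not the exact code in this repo): the prosody loss is masked out before `prosody_loss_enable_steps`, and the prosody-predictor loss is computed against a detached target so it acts as a stop gradient.

```python
import torch
import torch.nn.functional as F

def combine_losses(mel_loss, prosody_loss, predictor_loss, step,
                   prosody_loss_enable_steps):
    # The prosody embedding is always added to the text hidden states,
    # but the prosody (MDN) loss only enters the total after warm-up.
    if step < prosody_loss_enable_steps:
        prosody_loss = prosody_loss * 0.0
    return mel_loss + prosody_loss + predictor_loss

def predictor_loss_fn(predicted_prosody, reference_prosody):
    # Stop gradient: regress the predictor toward the reference prosody
    # embedding without back-propagating into the prosody encoder that
    # produced it.
    return F.mse_loss(predicted_prosody, reference_prosody.detach())
```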

inconnu11 commented 1 year ago

Hi, I got it, thanks for the reply. But when I run the code with the default settings on the LJSpeech corpus, only toggling the prosody modeling type to 'du2021', the prosody loss at prosody_loss_enable_steps (100k by default) is NaN.

[training-log screenshot]

keonlee9420 commented 1 year ago

Hmm, that's weird. If you have room for it, could you please do some sanity checks on your side? For example, one approach would be to remove parts of the code, simplifying it until the NaN loss disappears. It would definitely be helpful for others interested in this issue.
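
As a cheaper first step than stripping the model down, PyTorch's anomaly detection can point at the first backward operation that produces a non-finite gradient (a short sketch; `model` and `batch` are placeholders):

```python
import torch

# Raises an error at the first backward op whose gradient is NaN/Inf,
# with a traceback to the forward op that created it. It slows training
# down considerably, so enable it only for a short debugging run.
with torch.autograd.detect_anomaly():
    loss = model(batch)   # hypothetical forward pass returning the loss
    loss.backward()
```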

inconnu11 commented 1 year ago

I'd like to, but it takes too long to train: about 7 days on a single T4 GPU. Are there any parts of the code that could speed up training?

cpdu commented 1 year ago

Hi,

I'm the author of this paper. My code for calculating the MDN loss is here with a small numerical stability trick: MDN_loglike

Does that help?
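
The linked code isn't reproduced here, but the usual stability trick is to stay in the log domain throughout and reduce over mixture components with log-sum-exp, rather than computing per-component likelihoods and taking the log afterwards. A sketch for a diagonal-Gaussian MDN (tensor shapes and names are assumptions, not the paper's exact code):

```python
import math
import torch

def mdn_loss(w_log, mu, log_sigma, target):
    """Negative log-likelihood of a diagonal-Gaussian MDN.

    w_log:     (B, T, K)    log mixture weights (e.g. log_softmax output)
    mu:        (B, T, K, D) component means
    log_sigma: (B, T, K, D) log standard deviations
    target:    (B, T, D)    ground-truth prosody vectors
    """
    x = target.unsqueeze(2)  # (B, T, 1, D), broadcast over the K components
    # Per-dimension Gaussian log-density, computed fully in the log domain.
    log_prob = -0.5 * math.log(2 * math.pi) - log_sigma \
               - 0.5 * ((x - mu) * torch.exp(-log_sigma)) ** 2
    log_prob = log_prob.sum(dim=-1)                        # (B, T, K)
    # log-sum-exp over components avoids underflow from tiny likelihoods.
    log_like = torch.logsumexp(w_log + log_prob, dim=-1)   # (B, T)
    return -log_like.mean()
```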

inconnu11 commented 1 year ago

Hi, I changed the MDN loss calculation from my original version to the one you linked, but it doesn't seem to work.

original MDN loss: [training-log screenshot]

newer MDN loss: [training-log screenshot]

cpdu commented 1 year ago

The MDN loss (i.e. the negative log-likelihood) can be negative. However, in your log it is almost 0 before becoming NaN, so I suggest checking whether you calculate the likelihood correctly.
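
One concrete way to run that check is to compare the hand-rolled loss against `torch.distributions` on random inputs (a sketch; `mdn_loss` refers to the log-sum-exp version sketched above):

```python
import torch
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

# Random mixture parameters for a toy check.
B, T, K, D = 2, 5, 4, 8
w_log = torch.log_softmax(torch.randn(B, T, K), dim=-1)
mu = torch.randn(B, T, K, D)
log_sigma = torch.randn(B, T, K, D) * 0.1
target = torch.randn(B, T, D)

# Reference negative log-likelihood from torch.distributions.
gmm = MixtureSameFamily(
    Categorical(logits=w_log),
    Independent(Normal(mu, log_sigma.exp()), 1),
)
ref_nll = -gmm.log_prob(target).mean()

# Should match the hand-rolled MDN loss to numerical precision.
print(torch.allclose(ref_nll, mdn_loss(w_log, mu, log_sigma, target), atol=1e-5))
```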