inconnu11 opened this issue 1 year ago
Hi @inconnu11 , thanks for your attention.
My intention was to prevent the prosody encoder from learning meaningless representations during the first few training steps. But you can effectively remove prosody_loss_enable_steps (by setting it to 1, for example) if you don't care. Otherwise, there should be no gain from backpropagating through the prosody encoder even though its output is still added to the text hidden states.
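In code, the gating described above amounts to something like the following sketch (the function and argument names here are hypothetical, not the repo's exact code):

```python
# Sketch of a step-gated prosody loss. Names are illustrative assumptions.
def total_loss(mel_loss, prosody_loss, step, prosody_loss_enable_steps=100_000):
    # Before the threshold the prosody term is excluded from the total,
    # so the prosody encoder receives no gradient from it even though its
    # output is still added to the text hidden states in the forward pass.
    if step < prosody_loss_enable_steps:
        return mel_loss
    return mel_loss + prosody_loss
```

Setting `prosody_loss_enable_steps` to 1 makes the gate a no-op, which is what "removing" it means above.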
Hi, I got it, thanks for the reply. But when I run the code with the default settings on the LJSpeech corpus, only toggling the prosody modeling type to 'du2021', the prosody loss at prosody_loss_enable_steps (100k by default) is NaN.
Hmm, that's weird. If you have room for it, could you please do some sanity checks on your side? For example, one option would be to remove parts of the code, simplifying it until the NaN loss disappears. It would definitely be helpful for others interested in this issue.
I'd like to do so, but it takes too long to train: about 7 days on a single T4 GPU. Are there any parts of the code that could speed up the training process?
Hi,
I'm the author of this paper. My code for calculating the MDN loss is here with a small numerical stability trick:
Does that help?
Hi, I changed the MDN loss calculation from fig1 to fig2, but it doesn't seem to work.
original MDN loss:
newer MDN loss:
The MDN loss (i.e., the negative log-likelihood) can be a negative value. However, in your log it is almost 0 before becoming NaN. Maybe you should check whether you are calculating the likelihood correctly.
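For reference, the usual way to keep an MDN negative log-likelihood numerically stable is to work in log space throughout: parameterize the standard deviation as log σ and reduce over mixture components with logsumexp. This is a common trick, not necessarily the paper's exact code; the shapes and names below are my assumptions:

```python
import math
import torch

def mdn_nll(log_pi, mu, log_sigma, target):
    """Negative log-likelihood of a diagonal Gaussian mixture, computed
    entirely in log space (log-sigma parameterization + logsumexp).

    log_pi:    (B, K)    log mixture weights (e.g. from log_softmax)
    mu:        (B, K, D) component means
    log_sigma: (B, K, D) log standard deviations
    target:    (B, D)    target prosody vectors
    """
    target = target.unsqueeze(1)  # (B, 1, D), broadcasts over the K components
    # Per-component Gaussian log density, summed over the D dimensions.
    log_prob = -0.5 * (((target - mu) / log_sigma.exp()) ** 2
                       + 2.0 * log_sigma
                       + math.log(2.0 * math.pi)).sum(-1)  # (B, K)
    # logsumexp avoids exp() underflow when every component density is tiny,
    # which is exactly where a naive implementation hits log(0) = -inf and
    # then NaN gradients.
    log_likelihood = torch.logsumexp(log_pi + log_prob, dim=-1)  # (B,)
    return -log_likelihood.mean()
```

A quick sanity check in this setup: with one component centered on the target and unit variance, the loss should equal D/2 · log(2π) and stay finite even for targets far from the mean.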
Hi, I am adding your MDN prosody modeling code segment to my Tacotron, but I ran into several questions about it. First, the prosody loss is added to the total loss only after prosody_loss_enable_steps, yet in the training steps before prosody_loss_enable_steps the prosody representation is already added to the text encoding. Does that mean that, before prosody_loss_enable_steps, the prosody representation is optimized without the prosody loss? Second, during training, the backward gradient for the prosody predictor should act like a "stop gradient", but there seems to be little relevant code for this. Thanks!
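On the second point, the standard way to implement such a stop gradient in PyTorch is `.detach()`. Here is a self-contained sketch of the pattern being asked about (the names and shapes are illustrative, not the repo's actual code; whether the repo applies this is exactly the open question):

```python
import torch
import torch.nn as nn

# The prosody predictor regresses the reference prosody representation,
# and .detach() keeps its loss from updating anything upstream.
predictor = nn.Linear(256, 4)  # stands in for the prosody predictor
text_hidden = torch.randn(2, 10, 256, requires_grad=True)  # text encoder output
prosody_ref = torch.randn(2, 10, 4, requires_grad=True)    # prosody encoder output

# Detach both the predictor input and its target, so the predictor loss
# only produces gradients for the predictor's own parameters.
pred = predictor(text_hidden.detach())
predictor_loss = nn.functional.mse_loss(pred, prosody_ref.detach())
predictor_loss.backward()

assert text_hidden.grad is None           # nothing flowed back to the text encoder
assert prosody_ref.grad is None           # or to the prosody encoder
assert predictor.weight.grad is not None  # only the predictor itself was updated
```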