NVIDIA / waveglow

A Flow-based Generative Network for Speech Synthesis
BSD 3-Clause "New" or "Revised" License

The training log shows that the loss value is negative #251

Open AugggRush opened 3 years ago

AugggRush commented 3 years ago

I trained this model myself on the same dataset, LJSpeech, and I found that the loss printed on the terminal is negative; as iterations grow, it becomes more negative, moving further from zero. Is this correct? I thought that since we maximize the log-likelihood, the loss should generally approach zero. If anyone has ideas, please let me know. Thanks very much.

yasntrk commented 2 years ago

It is okay to get a negative loss. The objective is a negative log-likelihood over a continuous density, and a density can exceed 1, so its log can be positive and the loss negative.
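
A quick way to see this, as a minimal sketch (not tied to this repo's code): evaluate the negative log-likelihood of a narrow Gaussian near its mean, where the density is well above 1.

```python
import torch

# A continuous probability density can exceed 1, so a negative
# log-likelihood loss can legitimately be negative. For example, a
# narrow Gaussian evaluated at its mean:
dist = torch.distributions.Normal(loc=0.0, scale=0.1)
nll = -dist.log_prob(torch.tensor(0.0))
print(nll.item())  # ≈ -1.38: a perfectly valid negative NLL
```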

KB00100100 commented 1 year ago

I have the same question. The loss value becomes more negative as iterations grow; will the final trained model work?
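
For reference, here is a simplified sketch of a WaveGlow-style flow loss (hypothetical function and argument names, following the standard flow formulation rather than quoting this repo's code). It shows why the value keeps falling below zero: the loss is the Gaussian prior term minus the log-determinant terms of the flow's Jacobian, and those log-det terms grow as the fit improves, so a more negative loss means a higher likelihood, not a broken model.

```python
import torch

def waveglow_style_nll(z, log_s_list, log_det_W_list, sigma=1.0):
    """Simplified flow NLL: Gaussian prior term minus log-det-Jacobian terms.

    z: latent tensor produced by the flow; log_s_list / log_det_W_list:
    per-flow log-scales and 1x1-convolution log-determinants (hypothetical
    inputs standing in for the model's actual outputs).
    """
    log_s_total = sum(torch.sum(log_s) for log_s in log_s_list)
    log_det_W_total = sum(log_det_W_list)
    loss = torch.sum(z * z) / (2 * sigma ** 2) - log_s_total - log_det_W_total
    # As training improves the fit, the log-det terms grow, so the loss
    # can become arbitrarily negative while the model keeps improving.
    return loss / z.numel()
```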