jaywalnut310 / glow-tts

A Generative Flow for Text-to-Speech via Monotonic Alignment Search

One question about the decoder compared with FastSpeech and Tacotron. #42

LeoniusChen opened this issue 3 years ago

LeoniusChen commented 3 years ago

It's really amazing that Glow-TTS does such a good job. I'm confused about one part of the decoder design: there is no post-net at the end of the decoder. I understand that the invertible flows require that there cannot be a post-net, but how does Glow-TTS get such good results without one, while Tacotron and FastSpeech both rely on a post-net?

jaywalnut310 commented 3 years ago

I have no empirical evidence, but I think the difference comes down to whether a model captures dependencies between output mel-spectrogram frames. The probabilistic modeling of each model is quite different, so their output distributions are factorized at different levels. For brevity, I'll drop some conditioning terms: for example, I'll write p(mel-frames) instead of p(mel-frames | text).

The post-net can be used to refine the output mel-frames after the sampling procedure is over. It makes up for the lack of in-channel or in-frame dependencies in the sampled mel-frames.
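For concreteness, here is a minimal sketch of a Tacotron-2-style post-net: a stack of 1D convolutions that residually refines the whole sampled mel-spectrogram at once. The sizes follow the Tacotron 2 paper (80 mel channels, 512 hidden channels, kernel size 5), but this module is illustrative, not code from either repo.

```python
import torch.nn as nn

# Tacotron-2-style post-net sketch: five 1D convolutions that residually
# refine the sampled mel-spectrogram. Sizes are illustrative.
class PostNet(nn.Module):
    def __init__(self, n_mels=80, hidden=512, kernel_size=5, n_layers=5):
        super().__init__()
        layers, in_ch = [], n_mels
        for _ in range(n_layers - 1):
            layers += [
                nn.Conv1d(in_ch, hidden, kernel_size, padding=kernel_size // 2),
                nn.BatchNorm1d(hidden),
                nn.Tanh(),
            ]
            in_ch = hidden
        layers.append(nn.Conv1d(in_ch, n_mels, kernel_size, padding=kernel_size // 2))
        self.net = nn.Sequential(*layers)

    def forward(self, mel):            # mel: [batch, n_mels, frames]
        return mel + self.net(mel)     # residual refinement over all frames at once
```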

Now, look at the differences between the models:

- Tacotron 2: no future-frame and no in-channel info -> p(mel) = product of p(mel[i,j] | mel[:i])
- FastSpeech: no in-frame and no in-channel info -> p(mel) = product of p(mel[i,j])
- Glow-TTS: some degree of all-frames and all-channels info -> p(mel) = (product of p(z[i,j])) * |Jacobian determinant|, where z is the latent representation
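In a flow model, that last factorization is made exact by the change-of-variables formula. A toy sketch, where `flow` is a hypothetical stand-in for the stack of invertible decoder layers, returning both the latent and the log-determinant:

```python
import math
import torch

# Change-of-variables sketch: an invertible flow maps mel-frames to a latent z
# whose elements are modeled as independent standard normals; the Jacobian
# term accounts for how the mapping itself mixes frames and channels.
# `flow` is a hypothetical invertible module returning (z, log|det dz/dmel|).
def flow_log_likelihood(flow, mel):
    z, logdet = flow(mel)          # z has the same shape as mel: [batch, channels, frames]
    # log p(mel) = sum_{i,j} log N(z[i,j]; 0, 1) + log|det dz/dmel|
    log_pz = (-0.5 * (z ** 2 + math.log(2 * math.pi))).sum(dim=(1, 2))
    return log_pz + logdet
```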

*In Glow-TTS, a 1x1 invertible convolution captures the in-channel dependencies, and an affine coupling layer captures the in-frame dependencies.

Therefore, although Glow-TTS samples all mel-frames in parallel, it can use some degree of information from all previous and following channels, as well as all previous and following frames, to generate the current channel of the current mel-frame, without needing a post-net.
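To illustrate the two mechanisms, here is a simplified sketch (not the actual glow-tts modules, which use a WaveNet-like network inside the coupling layer): the 1x1 invertible convolution mixes channels within each frame, while the convolution over time inside the affine coupling layer gives each output a receptive field over neighboring frames.

```python
import torch
import torch.nn as nn

class Invertible1x1Conv(nn.Module):
    """Mixes channels at every frame: captures in-channel dependencies."""
    def __init__(self, channels):
        super().__init__()
        w, _ = torch.linalg.qr(torch.randn(channels, channels))  # random orthogonal init
        self.weight = nn.Parameter(w)                            # [out_ch, in_ch]

    def forward(self, x):              # x: [batch, channels, frames]
        z = torch.einsum('oc,bct->bot', self.weight, x)
        # log|det| of a 1x1 conv is frames * log|det(W)|, identical per sample
        logdet = x.size(2) * torch.slogdet(self.weight)[1].expand(x.size(0))
        return z, logdet

class AffineCoupling(nn.Module):
    """Transforms half the channels conditioned on the other half; the conv
    over time gives each output a receptive field over neighboring frames,
    i.e. in-frame dependencies. Assumes an even channel count."""
    def __init__(self, channels, hidden=256, kernel_size=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels // 2, hidden, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
            nn.Conv1d(hidden, channels, 1),      # outputs log-scale and shift
        )

    def forward(self, x):
        xa, xb = x.chunk(2, dim=1)               # condition on xa, transform xb
        log_s, t = self.net(xa).chunk(2, dim=1)
        zb = xb * torch.exp(log_s) + t
        logdet = log_s.sum(dim=(1, 2))           # exact log|det| of the affine map
        return torch.cat([xa, zb], dim=1), logdet
```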

LeoniusChen commented 3 years ago

Thanks for your reply. As you explained, the results benefit from the coupling layer and the invertible convolution design. This is nice work on normalizing flows! Congratulations, and I will follow your paper and research!