jaywalnut310 / glow-tts

A Generative Flow for Text-to-Speech via Monotonic Alignment Search
MIT License

Can I apply the MAS method to other models? #67

Open dohuuphu opened 2 years ago

dohuuphu commented 2 years ago

I'm applying MAS to FastSpeech2 to replace the MFA tool: I take the encoder output of FastSpeech2, treat it like the encoder output in Glow-TTS, and feed it into MAS. After several iterations, the durations (the MAS output) were incorrect.

Ex: MAS runs over 36 phonemes, but only 2 phonemes get a non-zero duration:

[[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 3.1355, 0.0000, 0.0000, 4.9053, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]

Can anyone give me some advice?
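
For context, the MAS step itself only consumes a per-(phoneme, frame) score matrix. Below is a minimal, unbatched NumPy sketch of the dynamic programming it performs (a simplification for illustration, not the repo's Cython monotonic_align implementation):

```python
import numpy as np

def monotonic_alignment_search(log_prob):
    """Hard monotonic alignment between text tokens and mel frames.

    log_prob: [T_text, T_mel] array; log_prob[i, j] scores assigning frame j
    to token i. Assumes T_mel >= T_text. Returns a 0/1 path matrix of the
    same shape: each frame belongs to exactly one token, token indices never
    decrease over time, and every token gets at least one frame.
    """
    T_text, T_mel = log_prob.shape
    neg_inf = -1e9

    # Q[i, j]: best cumulative score of a monotonic path from (0, 0) to (i, j).
    Q = np.full((T_text, T_mel), neg_inf)
    Q[0, 0] = log_prob[0, 0]
    for j in range(1, T_mel):
        for i in range(min(j + 1, T_text)):
            stay = Q[i, j - 1]                                # frame j keeps token i
            advance = Q[i - 1, j - 1] if i > 0 else neg_inf   # frame j moves to the next token
            Q[i, j] = log_prob[i, j] + max(stay, advance)

    # Backtrack from (T_text - 1, T_mel - 1) to recover the best path.
    path = np.zeros_like(log_prob, dtype=np.int64)
    i = T_text - 1
    for j in range(T_mel - 1, -1, -1):
        path[i, j] = 1
        if i > 0 and (i == j or Q[i - 1, j - 1] > Q[i, j - 1]):
            i -= 1
    return path
```

The per-phoneme durations are then path.sum(-1). For the resulting path to be meaningful, log_prob has to be an actual (log-)likelihood of each frame under each phoneme, which is where my problem may be coming from.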

unrea1-sama commented 2 years ago

I think this may be because the TextEncoder in Glow-TTS predicts the mean and variance of z, i.e., it estimates the probability of z. That is why MAS can find the best alignment between frames and text by maximizing the likelihood of z. The encoder in FastSpeech2, however, doesn't produce a probability, so MAS can't be applied to FastSpeech2 directly. If you want a network that learns the alignment between text and frames without attention, AlignTTS provides such a method.
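
To make that concrete: in Glow-TTS the matrix fed to MAS is the log-likelihood of each flow-transformed frame z under each phoneme's predicted diagonal Gaussian. A simplified sketch of that computation (masking and other details of the repo's forward pass omitted; shapes follow its conventions):

```python
import math
import torch

def gaussian_log_likelihood(z, x_m, x_logs):
    """Log-likelihood of each latent frame under each token's Gaussian.

    z:      [B, C, T_mel]  flow-transformed mel frames
    x_m:    [B, C, T_text] per-token means predicted by the text encoder
    x_logs: [B, C, T_text] per-token log standard deviations

    Returns logp of shape [B, T_text, T_mel], where logp[b, i, j] is
    sum_c log N(z[b, c, j]; x_m[b, c, i], exp(x_logs[b, c, i])).
    """
    x_s_sq_r = torch.exp(-2 * x_logs)  # 1 / sigma^2
    logp1 = torch.sum(-0.5 * math.log(2 * math.pi) - x_logs, dim=1).unsqueeze(-1)  # [B, T_text, 1]
    logp2 = torch.matmul(x_s_sq_r.transpose(1, 2), -0.5 * (z ** 2))                # [B, T_text, T_mel]
    logp3 = torch.matmul((x_m * x_s_sq_r).transpose(1, 2), z)                      # [B, T_text, T_mel]
    logp4 = torch.sum(-0.5 * (x_m ** 2) * x_s_sq_r, dim=1).unsqueeze(-1)           # [B, T_text, 1]
    return logp1 + logp2 + logp3 + logp4
```

A FastSpeech2 encoder state is just a feature vector rather than a (mean, log-std) pair, so there is no likelihood to maximize; scoring raw encoder outputs against mel features and running MAS on that tends to give degenerate alignments like the one you posted.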