Open dohuuphu opened 2 years ago
I'm applying MAS to FastSpeech2 to replace the MFA tool. I'm feeding the encoder output of FastSpeech2 into the MAS method, in place of the encoder output used in Glow-TTS. After several iterations, the durations (the MAS output) were incorrect.
For example: MAS computes durations for 36 phonemes, but only 2 phonemes get nonzero values: [[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 3.1355, 0.0000, 0.0000, 4.9053, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]
Can anyone give me some advice?
I think this may be because the TextEncoder in Glow-TTS predicts the mean and variance of z, i.e., it models the probability of z. MAS can then find the best alignment between frames and text by maximizing the likelihood of z under those per-phoneme distributions. However, the encoder in FastSpeech2 doesn't produce a probability distribution, so MAS may not be directly applicable to FastSpeech2. If you want a network that learns the alignment between text and frames without attention, AlignTTS provides such a method.
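To make the point above concrete, here is a minimal NumPy sketch of the two pieces MAS needs: a log-likelihood matrix of mel frames under each phoneme's predicted Gaussian (which is exactly what FastSpeech2's plain encoder output does not provide), and the Viterbi-style dynamic program over that matrix. This is an illustrative reimplementation, not the official Glow-TTS code; shapes and function names are my own.

```python
import numpy as np

def gaussian_log_likelihood(z, mu, log_sigma):
    """Log-likelihood of each mel frame under each phoneme's Gaussian.

    z:         (T_mel, D)  latent frames
    mu:        (T_text, D) per-phoneme predicted means
    log_sigma: (T_text, D) per-phoneme predicted log std-devs
    Returns:   (T_text, T_mel) log p(z_j | phoneme_i), summed over D
    """
    var = np.exp(2.0 * log_sigma)                     # (T_text, D)
    diff = z[None, :, :] - mu[:, None, :]             # (T_text, T_mel, D)
    ll = -0.5 * (np.log(2.0 * np.pi)
                 + 2.0 * log_sigma[:, None, :]
                 + diff ** 2 / var[:, None, :])
    return ll.sum(-1)

def monotonic_alignment_search(log_p):
    """Find the monotonic alignment maximizing total log-likelihood.

    log_p:   (T_text, T_mel) log-likelihood matrix
    Returns: (T_text,) integer duration per phoneme (row sums of the path)
    """
    T_text, T_mel = log_p.shape
    Q = np.full((T_text, T_mel), -np.inf)
    Q[0, 0] = log_p[0, 0]
    for j in range(1, T_mel):
        # a monotonic path can have reached at most phoneme j by frame j
        for i in range(min(j + 1, T_text)):
            stay = Q[i, j - 1]                        # same phoneme
            move = Q[i - 1, j - 1] if i > 0 else -np.inf  # next phoneme
            Q[i, j] = log_p[i, j] + max(stay, move)
    # backtrack from the last phoneme / last frame
    align = np.zeros((T_text, T_mel), dtype=np.int64)
    i = T_text - 1
    for j in range(T_mel - 1, -1, -1):
        align[i, j] = 1
        if i > 0 and (i == j or Q[i - 1, j - 1] > Q[i, j - 1]):
            i -= 1
    return align.sum(-1)
```

The key observation for your symptom: if the scores fed to the DP are not proper log-likelihoods (e.g. raw encoder hidden states), the max in the recursion degenerates and the path collapses onto a few phonemes, producing mostly-zero durations like the output you posted.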