syang1993 / gst-tacotron

A TensorFlow implementation of "Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis"

multi-head attention #12

Closed Young-Sun closed 6 years ago

Young-Sun commented 6 years ago

Hi again. The code is currently set to use mlp_attention. Did the uploaded audio demo samples use mlp_attention? Have you ever experimented with multi-head attention? Do you have any audio samples?

Thanks.

syang1993 commented 6 years ago

Hi, mlp_attention is part of multi-head attention: dot_attention and mlp_attention are two different methods for computing the attention weights within the multi-head attention module. The demo samples were trained using the default settings.
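For reference, here is a minimal NumPy sketch of the two score functions being discussed. This is not the repository's actual code; the function names, shapes, and scaling are illustrative assumptions. dot_attention scores queries against keys with a (scaled) dot product, while mlp_attention scores each query/key pair with a small additive network. In the multi-head case, queries and keys are first split into heads and either score function is applied per head.

```python
# Illustrative sketch only -- not the gst-tacotron implementation.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dot_attention(q, k):
    # Dot-product scores, scaled by sqrt(d) (a common variant):
    # weights[b, i, j] = softmax_j(q_i . k_j / sqrt(d))
    # q: (B, Tq, d), k: (B, Tk, d) -> (B, Tq, Tk)
    d = q.shape[-1]
    return softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d))

def mlp_attention(q, k, w_q, w_k, v):
    # Additive (MLP) scores:
    # weights[b, i, j] = softmax_j(v . tanh(W_q q_i + W_k k_j))
    # q: (B, Tq, d), k: (B, Tk, d), w_q/w_k: (d, d_h), v: (d_h,)
    hq = q @ w_q                                          # (B, Tq, d_h)
    hk = k @ w_k                                          # (B, Tk, d_h)
    h = np.tanh(hq[:, :, None, :] + hk[:, None, :, :])    # (B, Tq, Tk, d_h)
    return softmax(h @ v)                                 # (B, Tq, Tk)

# Quick shape check with random inputs (hypothetical sizes).
B, Tq, Tk, d, d_h = 2, 5, 7, 16, 32
rng = np.random.default_rng(0)
q = rng.normal(size=(B, Tq, d))
k = rng.normal(size=(B, Tk, d))
w_q = rng.normal(size=(d, d_h))
w_k = rng.normal(size=(d, d_h))
v = rng.normal(size=(d_h,))
print(dot_attention(q, k).shape)                # (2, 5, 7)
print(mlp_attention(q, k, w_q, w_k, v).shape)   # (2, 5, 7)
```

Either set of weights is then applied to the value vectors the same way; the two options only differ in how the scores are computed.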

Young-Sun commented 6 years ago

Sorry for the misunderstanding. Yes, both dot_attention and mlp_attention are options within multi-head attention. I see, so the audio samples were synthesized using mlp_attention. Thanks for your fast and kind reply. :-)