barronalex / Tacotron

Implementation of Google's Tacotron in TensorFlow

Attention RNN #30

Closed pravn closed 5 years ago

pravn commented 5 years ago

I am wondering whether the attention RNN described in the paper is included in this implementation. If so, could someone point me to the lines of code where it is used? I ask because the paper appears to keep track of the attention states and use them as input when predicting the next timestep's attention. This makes sense, since Chorowski et al. do something similar (the same mechanism is also used in the Tacotron 2 paper) to keep the attention moving forward. But I could very well be wrong in my interpretation.
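To make the question concrete, here is a minimal numpy sketch of the location-sensitive scoring I mean (in the style of Chorowski et al.), where the previous alignments are convolved and fed back into the score. All function names, weight names, and shapes here are illustrative assumptions, not taken from this repo:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def location_sensitive_attention(query, keys, prev_align,
                                 W_q, W_k, W_f, loc_filter, v):
    """One step of location-sensitive attention (illustrative sketch).

    prev_align is the previous timestep's alignment vector; convolving
    it and adding the result to the score is what nudges the attention
    to keep moving forward over the encoder states.
    """
    # Location features: 1-D convolution over the previous alignments
    loc = np.convolve(prev_align, loc_filter, mode="same")[:, None]  # (T, 1)
    # Additive (Bahdanau-style) score plus the extra location term
    scores = np.tanh(query @ W_q + keys @ W_k + loc @ W_f) @ v       # (T,)
    align = softmax(scores)
    context = align @ keys                                           # (d_k,)
    return context, align
```

Here `query` is the decoder state at the current step, `keys` are the encoder outputs of shape `(T, d_k)`, and `prev_align` has shape `(T,)`; the question is whether anything equivalent to the `prev_align` feedback exists in the code.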

pravn commented 5 years ago

Never mind - we just have the regular Bahdanau attention mechanism here, with no extra attention-RNN state. I will close.