Closed: liuyijiang1994 closed this issue 6 years ago.

liuyijiang1994: Hi, I find that in your code there is a parameter `v` in the `Attention` model. I wonder what this is for, thanks!
@liuyijiang1994 I think `v` is essentially a learned scoring vector: it maps each attention energy to a scalar score, which helps the decoder weigh the encoder states and attend to the most relevant context when producing the next token.
You might want to give this paper a read, it explains attention for seq2seq autoencoders quite well :)
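For reference, here is a minimal sketch of the role `v` typically plays in a Bahdanau-style (additive) attention module, where the score for encoder position i is e_i = v^T tanh(W [s; h_i]). The layer shapes and the name `hidden_size` are assumptions for illustration, not necessarily the repo's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    """Additive (Bahdanau-style) attention with a learned scoring vector v."""
    def __init__(self, hidden_size):
        super().__init__()
        self.attn = nn.Linear(hidden_size * 2, hidden_size)
        # v projects each tanh-activated energy vector down to a scalar score
        self.v = nn.Parameter(torch.rand(hidden_size))

    def forward(self, decoder_hidden, encoder_outputs):
        # decoder_hidden: (batch, hidden); encoder_outputs: (batch, seq_len, hidden)
        seq_len = encoder_outputs.size(1)
        # repeat the decoder state once per encoder position
        h = decoder_hidden.unsqueeze(1).repeat(1, seq_len, 1)
        # energy: (batch, seq_len, hidden)
        energy = torch.tanh(self.attn(torch.cat([h, encoder_outputs], dim=2)))
        # dot each energy vector with v -> one scalar score per position
        scores = energy.matmul(self.v)  # (batch, seq_len)
        # softmax over source positions gives the attention weights
        return F.softmax(scores, dim=1)
```

Without `v`, the energies would stay hidden-sized vectors; it is the dot product with `v` that collapses them into the per-position scores the softmax turns into attention weights.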
Many thanks!