Closed christopher5106 closed 7 years ago
Hi,
It sounds like the previous context vector is fed back when input_feed == 1
https://github.com/harvardnlp/seq2seq-attn/blob/master/s2sa/models.lua#L27-L28
but the full context vector is not used when input_feed == 0
https://github.com/harvardnlp/seq2seq-attn/blob/master/s2sa/models.lua#L82-L85
Is that intentional?
Thanks
OK, the full context vector is used later for the attention mechanism.
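For anyone else confused by this: a toy NumPy sketch of the distinction, not the repo's actual Lua/Torch code (function and variable names here are hypothetical). Attention always consumes the full source context; input_feed only controls whether the previous attentional vector is concatenated to the current decoder input.

```python
import numpy as np

emb_dim, rnn_dim, src_len = 4, 6, 5
rng = np.random.default_rng(0)

# Full source-side context: one vector per source position.
enc_context = rng.standard_normal((src_len, rnn_dim))

def attention(dec_state, context):
    # Dot-product attention over ALL source positions -- the full
    # context is always used here, regardless of input_feed.
    scores = context @ dec_state                 # (src_len,)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ context                           # (rnn_dim,)

def decoder_input(x_t, prev_attn, input_feed):
    # input_feed == 1: concatenate the previous attentional vector
    # to the current target embedding.
    # input_feed == 0: the embedding alone is the decoder input.
    return np.concatenate([x_t, prev_attn]) if input_feed else x_t

x_t = rng.standard_normal(emb_dim)               # current target embedding
prev_attn = attention(rng.standard_normal(rnn_dim), enc_context)

print(decoder_input(x_t, prev_attn, input_feed=False).shape)  # (4,)
print(decoder_input(x_t, prev_attn, input_feed=True).shape)   # (10,)
```

So with input_feed == 0 the decoder input is narrower, but the attention step downstream still attends over the whole context either way.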