harvardnlp / seq2seq-attn

Sequence-to-sequence model with LSTM encoder/decoders and attention
http://nlp.seas.harvard.edu/code
MIT License
1.26k stars 278 forks

input feed #84

Closed christopher5106 closed 7 years ago

christopher5106 commented 7 years ago

Hi,

It sounds like the previous context vector is fed as part of the decoder input when input_feed == 1

https://github.com/harvardnlp/seq2seq-attn/blob/master/s2sa/models.lua#L27-L28

but it looks like the full context vector is not used when input_feed == 0

https://github.com/harvardnlp/seq2seq-attn/blob/master/s2sa/models.lua#L82-L85

Is that desirable?

Thanks

christopher5106 commented 7 years ago

OK, the full context vector is used later by the attention mechanism.
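
For reference, a minimal sketch of the difference, written in PyTorch rather than the repo's Torch7/Lua code, with hypothetical names and sizes. It shows that input feeding only changes what goes into the decoder RNN at each step; the encoder context is consumed by the attention step either way.

```python
# Illustrative sketch (not the repo's models.lua); all dimensions are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

emb_size, rnn_size, src_len, batch = 50, 100, 7, 2

embed    = nn.Embedding(1000, emb_size)
cell_if  = nn.LSTMCell(emb_size + rnn_size, rnn_size)  # input_feed == 1: embedding + prev. attentional vector
cell     = nn.LSTMCell(emb_size, rnn_size)              # input_feed == 0: embedding only
attn_out = nn.Linear(2 * rnn_size, rnn_size)

context   = torch.randn(batch, src_len, rnn_size)       # all encoder states ("full context")
h, c      = torch.zeros(batch, rnn_size), torch.zeros(batch, rnn_size)
prev_attn = torch.zeros(batch, rnn_size)                 # attentional vector from the previous step
tok       = torch.zeros(batch, dtype=torch.long)

x = embed(tok)

# input_feed == 1: the previous attentional vector is concatenated to the decoder input
h1, c1 = cell_if(torch.cat([x, prev_attn], dim=1), (h, c))

# input_feed == 0: the decoder input is just the word embedding
h0, c0 = cell(x, (h, c))

def attend(h_dec):
    # The full encoder context is used here, after the RNN step, in both settings.
    scores = torch.bmm(context, h_dec.unsqueeze(2)).squeeze(2)   # (batch, src_len)
    align  = F.softmax(scores, dim=1)
    ctx    = torch.bmm(align.unsqueeze(1), context).squeeze(1)   # weighted source context
    return torch.tanh(attn_out(torch.cat([ctx, h_dec], dim=1)))  # attentional vector

attn_1 = attend(h1)   # with input_feed == 1 this would be fed into the next decoder step
attn_0 = attend(h0)
```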