farizrahman4u / seq2seq

Sequence to Sequence Learning with Keras
GNU General Public License v2.0

Possible bug in AttentionDecoderCell #219

Open · fahman opened this issue 7 years ago

fahman commented 7 years ago

https://github.com/farizrahman4u/seq2seq/blob/1f1c3304991eb91b533e91ac5f96ee3290fa9c7d/seq2seq/cells.py#L85

Instead of this:

C = Lambda(lambda x: K.repeat(x, input_length), output_shape=(input_length, input_dim))(c_tm1)

shouldn't it be this (input_dim -> hidden_dim)?

C = Lambda(lambda x: K.repeat(x, input_length), output_shape=(input_length, hidden_dim))(c_tm1)

c_tm1 is the previous cell state, so its last dimension is hidden_dim, not input_dim.
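For context, here is a minimal sketch (not taken from the repo) that checks what K.repeat actually produces. It assumes tf.keras as the backend, and the names batch, input_length, input_dim, and hidden_dim are stand-ins for the corresponding variables in seq2seq/cells.py:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

# Deliberately choose hidden_dim != input_dim so any mismatch is visible.
batch, input_length, input_dim, hidden_dim = 4, 7, 16, 32

# c_tm1 stands in for the previous cell state, whose feature axis is hidden_dim.
c_tm1 = K.constant(np.zeros((batch, hidden_dim)))  # shape (batch, hidden_dim)

# K.repeat turns (batch, dim) into (batch, n, dim); the feature axis is untouched.
C = K.repeat(c_tm1, input_length)
print(K.int_shape(C))  # (4, 7, 32) == (batch, input_length, hidden_dim)
```

Since the repeated tensor is (input_length, hidden_dim) per sample, declaring output_shape=(input_length, input_dim) only goes unnoticed when hidden_dim happens to equal input_dim.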

Lamyaaa commented 6 years ago

Couldn't agree more.