The paper Semi-supervised Sequence Learning (https://arxiv.org/abs/1511.01432) states that when training the SA-LSTM, the same LSTM was used for both encoding and decoding. However, this implementation uses two LSTMs: encoder_cell and decoder_cell. Could you please clarify?
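For reference, here is a minimal sketch of the shared-weight variant the paper describes: a single LSTM cell instance is stepped through the input to encode it, and the same instance (same weights) is then stepped to decode. This is an illustrative NumPy toy, not the repo's code; the class and function names (`LSTMCell`, `autoencode`) are my own, and using two separate `LSTMCell` instances here would correspond to the encoder_cell/decoder_cell variant in this implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """A single LSTM cell. One instance is reused for both encoding
    and decoding, so encoder and decoder share weights, as the
    SA-LSTM paper describes. (Illustrative toy, not the repo's code.)"""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # input, forget, cell, and output gate weights stacked together
        self.W = 0.1 * rng.standard_normal((4 * hidden_size, input_size + hidden_size))
        self.b = np.zeros(4 * hidden_size)
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_size
        i, f, g, o = z[:H], z[H:2 * H], z[2 * H:3 * H], z[3 * H:]
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h_new = sigmoid(o) * np.tanh(c_new)
        return h_new, c_new

# The ONE shared cell; the two-LSTM variant would create a second instance.
cell = LSTMCell(input_size=8, hidden_size=16)

def autoencode(seq):
    h = c = np.zeros(cell.hidden_size)
    for x in seq:                    # encode with the shared cell
        h, c = cell.step(x, h, c)
    outputs = []
    x = np.zeros_like(seq[0])        # stand-in for a <go> token
    for _ in seq:                    # decode with the SAME cell
        h, c = cell.step(x, h, c)
        outputs.append(h)
    return outputs
```

The practical difference is only where the parameters live: sharing one cell halves the recurrent parameter count and ties the encoder and decoder dynamics, while two cells let them specialize independently.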
Thanks, Shishir