Open gidim opened 7 years ago
That's an interesting idea — how do we make this model remember facts from earlier in the dialogue? I guess a neural Turing machine might be a good candidate.
There are many ways to maintain memory of the input sequence, but the easiest is to keep the LSTM/GRU state between calls to model.step() rather than resetting it each time.
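A minimal sketch of that idea in PyTorch (not this repo's actual API — `StatefulEncoder`, `step()`, and `reset()` are hypothetical names): the recurrent hidden state is stored on the module and passed back into the GRU on the next call, so earlier turns stay in memory until you explicitly reset at a dialogue boundary.

```python
import torch
import torch.nn as nn

class StatefulEncoder(nn.Module):
    """GRU encoder that carries its hidden state across step() calls."""

    def __init__(self, vocab_size=100, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.hidden = None  # persists between calls instead of being reset

    def step(self, tokens):
        # tokens: (batch, seq_len) LongTensor of token ids
        out, self.hidden = self.gru(self.embed(tokens), self.hidden)
        return out

    def reset(self):
        # Call at a dialogue boundary to forget previous turns.
        self.hidden = None

enc = StatefulEncoder()
turn1 = torch.randint(0, 100, (1, 5))
turn2 = torch.randint(0, 100, (1, 5))

_ = enc.step(turn1)
out_with_context = enc.step(turn2)   # sees turn1 via the carried state

enc.reset()
out_fresh = enc.step(turn2)          # same input, but no prior context
```

Because the second call starts from the hidden state produced by the first turn, `out_with_context` differs from `out_fresh` even though the input tokens are identical — which is exactly the memory effect being described.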
Hi, any plans on adding state to the encoder/decoder? The idea is that realistically you want to predict P(answer_n | question_n, answer_n-1, question_n-1, ...) rather than one pair at a time, as the original translation model does.