Thank you for your contribution to dialogue generation.
While reading your paper 'A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues', I was puzzled by the formula used to compute the output of the decoder RNN.
The equation uses w(n,m), but I think it should be w(n,m-1): the hidden decoder state for the m-th word should be computed from the previous word, not from the word itself.
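To make the point concrete, here is the decoder recurrence as I understand it. The symbols h^dec and f below are my own shorthand for the decoder hidden state and its update function, not necessarily the exact notation in your paper:

```latex
% Decoder recurrence as I read it: the hidden state at position m
% is conditioned on the PREVIOUS word w_{n,m-1}, not on w_{n,m}.
% (h^{dec} and f are assumed shorthand, not the paper's notation.)
h^{\mathrm{dec}}_{n,m} = f\!\left(h^{\mathrm{dec}}_{n,m-1},\; w_{n,m-1}\right)
```

If the hidden state at step m were conditioned on w(n,m), the model would see the very word it is about to predict, which is what confuses me.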
I look forward to your reply.