How is the dialogue history encoded here? In the paper they say "The previous two dialogue turns are transformed to a vector representation by feeding the concatenation of them into an LSTM encoder model".
I'm not sure how to interpret this and I'm interested in how it's realized here.
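My current reading of that sentence would be something like the sketch below: concatenate the token ids of the two previous turns into one sequence, embed it, run it through an LSTM, and take the final hidden state as the history vector. All names, sizes, and the toy turns are made up for illustration — is this roughly what the code does?

```python
# Hypothetical sketch of my interpretation (not the actual implementation):
# the previous two turns are concatenated into one token sequence, embedded,
# and encoded by an LSTM; the final hidden state is the history vector.
# Vocabulary size, dimensions, and token ids below are invented.
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden_dim = 100, 16, 32

embedding = nn.Embedding(vocab_size, emb_dim)
encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

# Two previous dialogue turns as toy token-id sequences (batch size 1).
turn_1 = torch.tensor([[5, 12, 7]])   # e.g. "how are you"
turn_2 = torch.tensor([[9, 3]])       # e.g. "fine thanks"

# Concatenate along the time dimension, then encode.
history = torch.cat([turn_1, turn_2], dim=1)   # shape (1, 5)
embedded = embedding(history)                  # shape (1, 5, emb_dim)
_, (h_n, _) = encoder(embedded)
history_vec = h_n[-1]                          # shape (1, hidden_dim)
print(history_vec.shape)                       # torch.Size([1, 32])
```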
Thanks