Hi, thanks for this very detailed and reader-friendly tutorial and code. I'm very interested in encoders and decoders, and now I'm finally getting closer to understanding them. I'm just a little curious about the dimensions:
lstm_out, self.hidden = self.lstm(x_input.unsqueeze(0), encoder_hidden_states)
In my understanding, the input to an LSTM should be (seq_len, batch_size, input_size), but here the input becomes x_input.unsqueeze(0) together with encoder_hidden_states. Could you explain this in more detail? For example, why do we use unsqueeze to add a dimension, and how is the hidden state combined with the new x_input?
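
To make my question concrete, here is a minimal sketch of what I think is happening (all the sizes are made-up example values, not the ones from your tutorial):

import torch
import torch.nn as nn

# Assumed example sizes, just for illustration
input_size, hidden_size, batch_size = 4, 16, 8
lstm = nn.LSTM(input_size, hidden_size)  # batch_first=False by default

# One decoder time step: (batch_size, input_size)
x_input = torch.randn(batch_size, input_size)

# Encoder's final (hidden, cell) pair, each (num_layers, batch_size, hidden_size)
encoder_hidden_states = (torch.zeros(1, batch_size, hidden_size),
                         torch.zeros(1, batch_size, hidden_size))

# unsqueeze(0) adds a seq_len dimension of 1, giving (1, batch_size, input_size);
# the encoder's (h, c) tuple is passed as the LSTM's initial state.
lstm_out, hidden = lstm(x_input.unsqueeze(0), encoder_hidden_states)
print(lstm_out.shape)  # torch.Size([1, 8, 16])

So is my reading correct that the hidden state is not concatenated with x_input, but is simply used as the starting (h, c) state of the recurrence?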
One more question about the encoder: you initialized the hidden and cell states, but I didn't see you use them anywhere in the encoder?
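
For context, here is a tiny sketch of why I'm asking (again with made-up sizes). As far as I know, nn.LSTM defaults h_0 and c_0 to zeros when no initial state is passed, so an init_hidden method that is never called wouldn't change the result:

import torch
import torch.nn as nn

input_size, hidden_size, batch_size = 4, 16, 8
encoder = nn.LSTM(input_size, hidden_size)

seq = torch.randn(10, batch_size, input_size)      # (seq_len, batch, input)
out_default, _ = encoder(seq)                      # no initial state passed
zeros = (torch.zeros(1, batch_size, hidden_size),
         torch.zeros(1, batch_size, hidden_size))
out_explicit, _ = encoder(seq, zeros)              # explicit zero state

print(torch.allclose(out_default, out_explicit))   # True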
Thanks!