lkulowski / LSTM_encoder_decoder

Build a LSTM encoder-decoder using PyTorch to make sequence-to-sequence prediction for time series data
MIT License
381 stars 86 forks

Confuse about the Dimension #7

Closed zzzertion closed 1 year ago

zzzertion commented 1 year ago

Hi, thanks for this very detailed and reader-friendly tutorial and code. I am very interested in encoder-decoder models, and now I am finally getting closer to understanding them. I am just a little curious about the dimensions:

lstm_out, self.hidden = self.lstm(x_input.unsqueeze(0), encoder_hidden_states)

In my understanding, the input to an LSTM should be (seq_len, batch_size, input_dimension), but here the input becomes "x_input.unsqueeze(0), encoder_hidden_states". Can you explain this in more detail? For example, why do we use unsqueeze to add a dimension, and how do we combine the hidden state with the new x input?
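To make the shapes concrete, here is a minimal sketch (not the repo's exact code) of a single decoder step. `nn.LSTM` with the default `batch_first=False` expects input of shape `(seq_len, batch_size, input_size)`; during decoding, one time step at a time is fed in, so `unsqueeze(0)` adds a `seq_len` dimension of 1. The hidden state is not concatenated with the input; it is passed as the LSTM's second argument, seeding the recurrence with the encoder's final `(h, c)` tuple.

```python
import torch
import torch.nn as nn

input_size, hidden_size, batch_size = 3, 16, 4
lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size)

# x_input holds one time step per batch element: (batch_size, input_size)
x_input = torch.randn(batch_size, input_size)

# unsqueeze(0) adds a seq_len dimension of 1 -> (1, batch_size, input_size)
step_input = x_input.unsqueeze(0)

# encoder_hidden_states stands for the (h, c) tuple the encoder returned;
# zeros are used here just as a stand-in with the right shapes:
# (num_layers, batch_size, hidden_size) each.
encoder_hidden_states = (
    torch.zeros(1, batch_size, hidden_size),
    torch.zeros(1, batch_size, hidden_size),
)

lstm_out, (h_n, c_n) = lstm(step_input, encoder_hidden_states)
print(lstm_out.shape)  # torch.Size([1, 4, 16])
```

So the two arguments play different roles: the first is the new input for this step, and the second is the recurrent state carried over from the encoder (and then from step to step during decoding).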

One more question about the encoder: you initialize the hidden and cell states, but I didn't see you use them in the encoder?
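This may come down to a PyTorch default: if no `(h, c)` tuple is passed to an `nn.LSTM`, it initializes both to zeros internally, so a helper that builds zero tensors is redundant unless it is called explicitly. A quick check (a sketch, not the repo's code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lstm = nn.LSTM(input_size=3, hidden_size=8)
x = torch.randn(5, 2, 3)  # (seq_len, batch_size, input_size)

# Call once with no initial state (PyTorch defaults to zeros) ...
out_default, _ = lstm(x)

# ... and once with explicit zero hidden/cell states.
zeros = (torch.zeros(1, 2, 8), torch.zeros(1, 2, 8))
out_explicit, _ = lstm(x, zeros)

print(torch.allclose(out_default, out_explicit))  # True
```

In other words, omitting the initial state and passing explicit zeros give identical outputs, which would explain why the initialized tensors never appear in the encoder's forward pass.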

Thanks!

zzzertion commented 1 year ago

I got it