The official PyTorch documentation for torch.nn.LSTM says the
"input of shape (seq_len, batch, input_size)", but in your recurrent_neural_network example I observed that the input shape is [100, 28, 28], where
sequence_length = 28
input_size = 28
batch_size = 100.
Is this correct? Or should we transpose the tensor? I am confused.
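For context, here is a minimal sketch of the two shape conventions nn.LSTM supports. It assumes the example constructs the LSTM with batch_first=True (an assumption on my part); in that case a [100, 28, 28] input, i.e. (batch, seq_len, input_size), is valid as-is and no transpose is needed. The (seq_len, batch, input_size) layout from the docs only applies when batch_first is left at its default of False.

```python
import torch
import torch.nn as nn

# Default convention from the docs: input of shape (seq_len, batch, input_size)
lstm_default = nn.LSTM(input_size=28, hidden_size=128, num_layers=2)
x_default = torch.randn(28, 100, 28)   # (seq_len=28, batch=100, input_size=28)
out, (h, c) = lstm_default(x_default)
print(out.shape)                       # torch.Size([28, 100, 128])

# With batch_first=True the batch dimension comes first: (batch, seq_len, input_size)
# (hypothetical setting here; whether the tutorial uses it is what this question asks)
lstm_bf = nn.LSTM(input_size=28, hidden_size=128, num_layers=2, batch_first=True)
x_bf = torch.randn(100, 28, 28)        # (batch=100, seq_len=28, input_size=28)
out_bf, (h_bf, c_bf) = lstm_bf(x_bf)
print(out_bf.shape)                    # torch.Size([100, 28, 128])
```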