When running the examples from the LSTM autoencoder section (e.g. https://github.com/online-ml/river-torch/blob/master/docs/examples/anomaly/example_lstm_autoencoder.ipynb) we are getting the error:

```
input = torch.cat((h, x_flipped), dim=self.time_axis)
RuntimeError: Tensors must have same number of dimensions: got 3 and 2
```
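For what it's worth, the dimension check can be reproduced in isolation; the shapes below are made up for illustration and are not taken from the notebook:

```python
import torch

# Illustrative shapes only: a stands in for h after .view(1, 1, -1),
# b stands in for x_flipped.
a = torch.randn(1, 1, 4)  # 3-D tensor
b = torch.randn(3, 4)     # 2-D tensor
try:
    torch.cat((a, b), dim=0)
except RuntimeError as e:
    print(e)  # Tensors must have same number of dimensions: got 3 and 2
```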
This is probably due to the line

```python
h = h[-1].view(1, 1, -1)
```

which transforms the 2-dimensional tensor `h` into a three-dimensional tensor that cannot be concatenated with `x_flipped`. A potential fix would be to change the line to

```python
h = h[-1].view(1, 1, -1)[0]
```

Maybe this issue is also present in other autoencoder examples, and we don't know for sure whether this is the proper fix, hence the issue.
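As a sanity check, here is a small shape experiment supporting the fix. All shapes are assumptions (we are guessing that `h` is 2-D with shape `(num_layers, hidden_size)`, that `x_flipped` is `(seq_len, hidden_size)`, and that `time_axis` is 0), so treat this as a sketch rather than a verified patch:

```python
import torch

hidden_size, seq_len = 4, 3
h = torch.randn(2, hidden_size)                # assumed 2-D hidden state
x_flipped = torch.randn(seq_len, hidden_size)  # assumed 2-D flipped input

h3 = h[-1].view(1, 1, -1)     # shape (1, 1, 4): 3-D, so torch.cat fails
h2 = h[-1].view(1, 1, -1)[0]  # shape (1, 4): back to 2-D

out = torch.cat((h2, x_flipped), dim=0)  # concatenation now succeeds
print(h3.shape, h2.shape, out.shape)     # (1, 1, 4) (1, 4) (4, 4)
```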