If I change num_layers from 1 to 2, the example doesn't work, failing because of a dimensionality mismatch:
derek@zoe:~/projects/pytorch_examples/timeseries$ time python3 LSTM.py
Traceback (most recent call last):
  File "LSTM.py", line 92, in <module>
    loss = criterion(outputs, trainY)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 379, in forward
    return F.mse_loss(input, target, size_average=self.size_average, reduce=self.reduce)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1282, in mse_loss
    input, target, size_average, reduce)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1248, in _pointwise_loss
    return lambd_optimized(input, target, size_average, reduce)
RuntimeError: input and target have different number of elements: input[828 x 1] has 828 elements, while target[414 x 1] has 414 elements at /pytorch/torch/lib/THNN/generic/MSECriterion.c:13
Why would changing the number of hidden layers double the output size here?
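For what it's worth, the doubling pattern (828 = 2 × 414) matches the shape of the final hidden state that nn.LSTM returns: h_n has shape (num_layers, batch, hidden_size), so flattening it across its first dimensions yields num_layers rows per sequence. I don't have line 92 of LSTM.py in front of me, so the sizes below are illustrative, not taken from the actual script:

```python
import torch
import torch.nn as nn

# Hypothetical sizes chosen to mirror the error message (414 sequences)
batch, seq_len, input_size, hidden_size = 414, 4, 1, 2

lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
               num_layers=2, batch_first=True)
x = torch.randn(batch, seq_len, input_size)
out, (h_n, c_n) = lstm(x)

# `out` keeps one row per sequence regardless of num_layers:
print(out.shape)  # torch.Size([414, 4, 2])

# ...but `h_n` stacks one slice per layer: (num_layers, batch, hidden_size)
print(h_n.shape)  # torch.Size([2, 414, 2])

# Flattening h_n therefore produces 2 * 414 = 828 rows, not 414
print(h_n.reshape(-1, hidden_size).shape)  # torch.Size([828, 2])
```

So if the example builds its predictions by reshaping h_n instead of taking only the last layer's slice (h_n[-1]), the output row count doubles exactly as the traceback shows.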