cpuheater / pytorch_examples

Some example scripts in pytorch

num_layers param doesn't work #2

Open keredson opened 6 years ago

keredson commented 6 years ago

If I change num_layers from 1 to 2:

derek@zoe:~/projects/pytorch_examples/timeseries$ git diff LSTM.py
diff --git a/timeseries/LSTM.py b/timeseries/LSTM.py
index 59ba77b..ee4a2b3 100644
--- a/timeseries/LSTM.py
+++ b/timeseries/LSTM.py
@@ -37,7 +37,7 @@ learning_rate = 0.01
 input_size = 1
 hidden_size = 5
 num_classes = 1
-num_layers = 1
+num_layers = 2
 num_epochs = 400

the example no longer runs, failing with a dimensionality mismatch:

derek@zoe:~/projects/pytorch_examples/timeseries$ time python3 LSTM.py 
Traceback (most recent call last):
  File "LSTM.py", line 92, in <module>
    loss = criterion(outputs, trainY)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 379, in forward
    return F.mse_loss(input, target, size_average=self.size_average, reduce=self.reduce)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1282, in mse_loss
    input, target, size_average, reduce)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1248, in _pointwise_loss
    return lambd_optimized(input, target, size_average, reduce)
RuntimeError: input and target have different number of elements: input[828 x 1] has 828 elements, while target[414 x 1] has 414 elements at /pytorch/torch/lib/THNN/generic/MSECriterion.c:13

Why would changing the number of stacked LSTM layers double the output size here?

cpuheater commented 6 years ago

In the forward method, I'm using the hidden state as the output, and the first dimension of the hidden state scales with num_layers.
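That explains the numbers in the traceback: `nn.LSTM` returns a final hidden state `h_n` of shape `(num_layers, batch, hidden_size)`, so flattening it across the layer dimension doubles the row count when num_layers=2 (2 × 414 = 828 vs. the 414 targets). A minimal sketch (the shapes here mirror the example's hyperparameters, but the exact batch size and sequence length are assumptions):

```python
import torch
import torch.nn as nn

input_size, hidden_size, num_layers = 1, 5, 2
batch, seq_len = 414, 4  # assumed values for illustration

lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
x = torch.randn(batch, seq_len, input_size)

out, (h_n, c_n) = lstm(x)

# h_n stacks one hidden state per layer: (num_layers, batch, hidden_size)
print(h_n.shape)            # torch.Size([2, 414, 5])

# Flattening h_n over its first two dims yields num_layers * batch rows,
# which is exactly the input[828 x 1] vs target[414 x 1] mismatch.
flat = h_n.reshape(-1, hidden_size)
print(flat.shape[0])        # 828

# Keeping only the top layer's hidden state restores the expected shape.
h_last = h_n[-1]
print(h_last.shape)         # torch.Size([414, 5])
```

So one fix is to index the hidden state with `h_n[-1]` in `forward` instead of reshaping the whole tensor, which makes the output shape independent of num_layers.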