lkulowski / LSTM_encoder_decoder

Build an LSTM encoder-decoder using PyTorch to make sequence-to-sequence predictions for time series data
MIT License
366 stars 84 forks source link

Possible API misunderstanding and error. #1

Closed tomsch420 closed 3 years ago

tomsch420 commented 3 years ago

Hello, I am trying to use your LSTM to solve a problem. I tried the example and everything worked fine. Now I want to derive a 7-dimensional feature from a 7-dimensional time series with a window size of 20 and 60000 samples. So I thought the shape of the training data should be (20, 60000, 7) and the shape of the target data should be (7, 60000, 1). Am I getting this wrong?

Also, when I try to train on the custom data, I get the following error:

```
Traceback (most recent call last):
  File "model_creation.py", line 141, in <module>
    main()
  File "model_creation.py", line 130, in main
    loss = model.train_model(x_train, y_train, n_epochs=10, target_len=7, batch_size=50, training_prediction="mixed_teacher_enforcing",
  File "C:\Users\tomsc\gdm-python\classes\lstm_encoder_decoder.py", line 220, in train_model
    loss.backward()
  File "C:\Users\tomsc\anaconda3\lib\site-packages\torch\tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "C:\Users\tomsc\anaconda3\lib\site-packages\torch\autograd\__init__.py", line 130, in backward
    Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```

Can you help me with that?

Thanks and Greetings

lkulowski commented 3 years ago

Hi,

It looks like you need to use the windowing procedure to preprocess your data before passing it to the model for training. See windowed_dataset in https://github.com/lkulowski/LSTM_encoder_decoder/blob/master/code/generate_dataset.py.

You will need to decide on an input window size and an output window size; see Section 2 in the README for more detail. For predicting a 20-step output sequence from an 80-step input sequence using 7 features (i.e., the 7 time series), X should have shape (80, # windowed examples, 7) and Y should have shape (20, # windowed examples, 7). The number of windowed examples is determined by the windowing procedure.
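For concreteness, here is a minimal sketch of the windowing idea. The helper below (window_series) is hypothetical and only illustrates the shapes; the repo's windowed_dataset in code/generate_dataset.py is the actual implementation and its signature may differ:

```python
import numpy as np

def window_series(series, iw, ow, stride=1):
    # series: array of shape (T, F) -- T time steps, F parallel time series.
    # Returns X of shape (iw, n, F) and Y of shape (ow, n, F), where n is
    # the number of windowed examples produced by sliding the window.
    T, F = series.shape
    n = (T - iw - ow) // stride + 1
    X = np.zeros((iw, n, F))
    Y = np.zeros((ow, n, F))
    for i in range(n):
        s = i * stride
        X[:, i, :] = series[s : s + iw]            # input window
        Y[:, i, :] = series[s + iw : s + iw + ow]  # output window
    return X, Y

series = np.random.randn(60000, 7)  # 7 time series, 60000 samples
X, Y = window_series(series, iw=80, ow=20)
print(X.shape, Y.shape)  # (80, 59901, 7) (20, 59901, 7)
```

Note that the second dimension (# windowed examples) falls out of the windowing, not the raw sample count.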

Good luck!


tomsch420 commented 3 years ago

Hello, thanks for the advice. I followed it and now I can train on a custom 7D time series.

Thank you so much, finally an encoder/decoder I can deal with! :D