JulesBelveze / time-series-autoencoder

PyTorch Dual-Attention LSTM-Autoencoder For Multivariate Time Series
Apache License 2.0

"feature, y_hist, target = batch" in "train.py" raises "ValueError: not enough values to unpack (expected 3, got 2)" #3

Closed AmberLay closed 3 years ago

AmberLay commented 3 years ago

"feature, y_hist, target = batch" in "train.py" raised the error "ValueError: not enough values to unpack (expected 3, got 2)", because "batch" only contains two values. So what does "y_hist" mean? Sorry, I only know a little about Attention; I just want to get the code running first.

JulesBelveze commented 3 years ago

Hey @AmberLay, thanks for opening the issue. There's indeed a problem: the iterator only returns 2 values. I'll fix this (or feel free to do it yourself if you feel like it :) ).

y_hist is basically target shifted one timestamp to the left. You need it to predict the first timestamp of each time window, since the actual y[t-1] is used to predict y[t]. Does that make sense?
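For illustration, here is a minimal sketch of a dataset whose `__getitem__` returns the three values the training loop expects. The class name `WindowDataset`, its arguments, and the slicing convention are assumptions for this example, not the repository's actual implementation; the only point is that `y_hist` is the target series shifted one timestamp to the left of `target`.

```python
import torch
from torch.utils.data import Dataset


class WindowDataset(Dataset):
    """Hypothetical dataset returning (feature, y_hist, target) per time window."""

    def __init__(self, X, y, window_size):
        # X: (n_samples, n_features) input features, y: (n_samples,) target series
        self.X = torch.as_tensor(X, dtype=torch.float32)
        self.y = torch.as_tensor(y, dtype=torch.float32)
        self.window_size = window_size

    def __len__(self):
        # one sample per window, leaving one step of room for the shifted history
        return len(self.y) - self.window_size

    def __getitem__(self, idx):
        # features aligned with the targets y[t]
        feature = self.X[idx + 1 : idx + 1 + self.window_size]
        # y_hist[k] = y[t-1] for the corresponding target[k] = y[t]
        y_hist = self.y[idx : idx + self.window_size].unsqueeze(-1)
        target = self.y[idx + 1 : idx + 1 + self.window_size].unsqueeze(-1)
        return feature, y_hist, target
```

A DataLoader built over such a dataset then yields batches that unpack as `feature, y_hist, target = batch`, which is the shape of iteration the line in "train.py" assumes.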

JulesBelveze commented 3 years ago

@AmberLay I've pushed a fix on the fix/dataset branch but I have no way of testing it atm. Would you mind testing it on your dataset and letting me know if it works? I'll then merge the fix.

AmberLay commented 3 years ago

I see, thank you for your reply. But when I tested the fix on my dataset, another bug happened. I tried to fix it, but there seem to be too many problems, such as: `AttnEncoder` object has no attribute `directions`; an IndexError raised in the slice loop `y_history[:, t]` because `y_history` has shape (X, 1); 'TypeError: Object of type device is not JSON serializable' raised in the `dump(config, f)` call; ... I should spend more time on your code and the paper, but I don't have enough free time right now; maybe I will fix these in the future. If you can fix them further, thank you so much.
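For the JSON error specifically, a common workaround is to coerce non-serializable entries such as `torch.device` to strings before dumping. The sketch below is a generic example under that assumption; the `to_jsonable` helper, the `config` keys, and the output filename are hypothetical and not taken from the repository.

```python
import json
import torch


def to_jsonable(value):
    """Coerce non-JSON-serializable config entries (e.g. torch.device) to strings."""
    if isinstance(value, torch.device):
        return str(value)  # e.g. "cuda:0" or "cpu"
    return value


# Hypothetical config dict containing a torch.device entry.
config = {
    "lr": 1e-3,
    "device": torch.device("cuda" if torch.cuda.is_available() else "cpu"),
}

with open("config.json", "w") as f:
    json.dump({k: to_jsonable(v) for k, v in config.items()}, f, indent=2)
```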

JulesBelveze commented 3 years ago

Alright, I'll try to spend some time on it. I'm closing this issue though. Feel free to open new ones if you encounter any other bugs.