WenjieDu / SAITS

The official PyTorch implementation of the paper "SAITS: Self-Attention-based Imputation for Time Series". A fast and state-of-the-art (SOTA) deep-learning neural network model for efficient time-series imputation (impute multivariate incomplete time series containing NaN missing data/values with machine learning). https://arxiv.org/abs/2202.08516
https://doi.org/10.1016/j.eswa.2023.119619
MIT License

Test data #22

Closed abhishekju06 closed 1 year ago

abhishekju06 commented 1 year ago

Hi,

After some modifications and additional code snippets, I was able to train, validate, and compute the MAE on the test data. I now want to obtain the de-normalized values (both predicted and actual) after imputation on the test data. Can you help?

WenjieDu commented 1 year ago

Hi there,

Thank you so much for your attention to SAITS! If you find SAITS helpful to your work, please star⭐️ this repository. Your star is your recognition, and it helps others notice SAITS. It matters and is definitely a kind of contribution.

I have received your message and will respond ASAP. Thank you again for your patience! 😃

Best,
Wenjie

WenjieDu commented 1 year ago

During data preprocessing, you normalized the data with a scaler from sklearn. Keep that fitted scaler object. After imputation, invert the normalization with its inverse_transform() method, e.g. StandardScaler.inverse_transform().
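
A minimal sketch of that workflow, not taken from the SAITS codebase: the array names (`X_train`, `X_test_imputed`) and shapes are illustrative placeholders, and the only point is fitting a `StandardScaler` once during preprocessing, keeping it, and calling `inverse_transform()` on the imputed output to get values back in the original units.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

n_samples, n_steps, n_features = 64, 24, 7  # hypothetical shapes

# fit the scaler on the training data (flatten time steps into rows,
# since sklearn scalers expect 2-D input of shape [n_rows, n_features])
X_train = np.random.randn(n_samples, n_steps, n_features)
scaler = StandardScaler()
scaler.fit(X_train.reshape(-1, n_features))

# ... train SAITS on the normalized data and obtain the imputed test set ...
X_test_imputed = np.random.randn(n_samples, n_steps, n_features)  # placeholder

# undo the normalization so the imputed values are in the original scale
X_test_denorm = scaler.inverse_transform(
    X_test_imputed.reshape(-1, n_features)
).reshape(n_samples, n_steps, n_features)
```

Apply the same `inverse_transform()` call to the ground-truth array to compare actual and predicted values on the original scale.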

abhishekju06 commented 1 year ago

Hi, I was able to generate the Actual vs. Predicted values in CSV format. The results are pretty amazing! Thanks for your patience with me.

WenjieDu commented 1 year ago

@abhishekju06 Glad to hear that! I sincerely invite you to follow me on GitHub so that you can receive PyPOTS' latest news instantly!