ServiceNow / TACTiS

TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series, from ServiceNow Research
Apache License 2.0

Imputation task #19

Open chenxiaodanhit opened 10 months ago

chenxiaodanhit commented 10 months ago

Thank you for your valuable contributions! I have some confusion regarding the imputation task. While the provided code showcases the prediction task, the loss calculation in decoder.py generates hist_encoded, pred_encoded, hist_true_x, and pred_true_x using a mask. This seems to imply that the number of missing values is assumed to be constant across a batch. If the number of missing values in the historical data varies, could you kindly suggest how to adjust the code to accommodate this scenario? Thank you for your patience!

marcotet commented 10 months ago

Thanks for your interest in TACTiS. Sadly, its current architecture doesn't allow for variable length predictions in the decoder.

I don't think reasonable tweaks could get around this, because the decoder reshuffles the predicted variables. If I remember correctly, using a different reshuffling for each batch element had a significant performance impact, and removing that limitation would be the first step toward a variable-length prediction window (for example, by adding dummy "no need for forecast" variables at the end of the window).

Anyhow, if I were to make a suggestion, it would be to reorder the batching process such that each batch has a constant number of elements in the prediction window. The model should be able to handle variable-length prediction windows, as long as each length is kept in its own batches.
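
A minimal sketch of that batching idea: bucket the samples by prediction-window length, so every batch the model sees is homogeneous. The sample structure and the `pred_len` key here are assumptions for illustration, not the actual TACTiS data pipeline.

```python
# Hypothetical sketch: group samples so that each batch contains only one
# prediction-window length, as suggested above. The "pred_len" key is an
# assumed field, not part of the real TACTiS dataset format.
from collections import defaultdict


def batch_by_pred_length(samples, batch_size):
    """Yield batches whose samples all share the same prediction length.

    `samples` is an iterable of dicts, each with a "pred_len" key (assumed).
    """
    buckets = defaultdict(list)
    for sample in samples:
        length = sample["pred_len"]
        buckets[length].append(sample)
        if len(buckets[length]) == batch_size:
            yield buckets.pop(length)
    # Flush any remaining partial batches (at most one per length).
    for bucket in buckets.values():
        yield bucket
```

Each yielded batch then has a fixed prediction length, so the decoder's constant-length assumption holds within every forward pass.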