Closed JulianNyarko closed 2 years ago
I realize this might just mean that the model doesn't update its predictions much, given an input. If that's all there is to it, then it seems to just be a small data problem. Will close for now!
Hi Julian,
I am seeing the same thing when training with `output_chunk_length=1`. My gut feeling is that the model is basically producing a naive forecast.
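For what it's worth, a quick way to see why a collapse to a naive forecast would look like this: a naive one-step forecast just repeats the last observed value, which is exactly the target series lagged by one period. A minimal sketch (the data here is made up for illustration):

```python
import numpy as np

# Illustrative target series (made-up values).
y = np.array([1.0, 2.0, 4.0, 3.0, 5.0, 6.0])

# A naive one-step forecast repeats the last observed value:
# the prediction for time t is y[t-1].
naive_pred = y[:-1]   # predictions for t = 1 .. 5
target     = y[1:]    # actuals     for t = 1 .. 5

# Evaluated as-is, the error is large on a changing series ...
aligned_mae = np.mean(np.abs(naive_pred - target))

# ... but lagging the predictions by one period matches perfectly,
# which is exactly the "one step off" pattern in the plots.
shifted_mae = np.mean(np.abs(naive_pred[1:] - target[:-1]))

print(aligned_mae)  # 1.4
print(shifted_mae)  # 0.0
```

So a model that has effectively learned "predict the previous value" will always look like a very good forecast shifted by one step.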
When trying to validate an `NBEATSModel` trained via `historical_forecasts` with a small `input_chunk_length` and `output_chunk_length`, it seems the prediction is always one step off. For instance, on simulated data, evaluating the error on the predictions as returned gives `1.64`, but evaluating after lagging the predictions by one period gives `1.04`.
This is consistent across different target series and also across other models that use past covariates (like `TCNModel`). I have confirmed this through the plots, which make it pretty obvious the predictions are exactly one step off. When increasing the length of the target series (from 50 to 500 periods) and setting `input_chunk_length` and `output_chunk_length` sufficiently high (e.g. `30` and `15`), the predicted series is at its most accurate and does not require a shift.

To make sure it is not my data, I went to this blog post. When training an `NBEATSModel` with sufficiently small input and output chunk lengths, the issue replicates on that data as well.

I understand that setting `input_chunk_length` and `output_chunk_length` to small values is not super common, but the predictions are actually very good once they are lagged by one period. Any suggestions on what might be going on?
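To make the "one step off" observation concrete, here is a sketch of how one could detect such a shift programmatically once the backtest predictions and actuals are in hand as arrays (the helper name and the data are made up for illustration; this is not the code from the issue):

```python
import numpy as np

def best_lag(pred, actual, max_lag=3):
    """Return the lag (in periods) applied to `pred` that minimizes
    the MAE against `actual`. A best lag of 1 means the forecasts
    match the target one period late -- the symptom described above."""
    best, best_err = 0, np.inf
    for lag in range(max_lag + 1):
        if lag == 0:
            err = np.mean(np.abs(pred - actual))
        else:
            err = np.mean(np.abs(pred[lag:] - actual[:-lag]))
        if err < best_err:
            best, best_err = lag, err
    return best, best_err

# Illustrative data: simulate a forecast that trails the target by one step.
actual = np.sin(np.linspace(0, 6, 50))
pred = np.roll(actual, 1)   # pred[t] == actual[t-1]
pred[0] = actual[0]

lag, err = best_lag(pred, actual)
print(lag)  # 1 -> predictions align best when lagged by one period
```

If the best lag comes out as 1 (and the error at that lag is much smaller than at lag 0), the model is effectively reproducing a naive forecast rather than anticipating the next value.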