Closed: @JulianNyarko closed this issue 2 years ago.
Hi @JulianNyarko , from your description it looks like the model might have overfit the training part of the data. I see that you closed the issue - did you manage to solve it somehow?
Thanks, @hrzn. Yes, I eventually figured out it was a pretty standard overfitting issue. It is a bit hard to address in this case, partly because I can't track the training and validation loss over epochs to inspect the learning curves. Does that feature exist / is it planned? I could open a feature request if that helps!
Hi @JulianNyarko, the way to do it is with TensorBoard. Set `log_tensorboard=True` when creating the model, and then you can visualise the training and validation losses in TensorBoard.
Love darts! I have a problem that I can't wrap my head around, though. The out-of-sample performance of global models seems to be quite poor when compared to univariate time series modeling.
I have a simulated dataset with ~500 target series and ~400 covariate series. My training dataset spans periods 1-30, validation is 31-40, and I want to predict 41-50. When training an `NBEATS()` model for an individual target series and checking `historical_forecasts()`, I get good results throughout. However, when training a global model and using `historical_forecasts()` on an individual series, the predictions during the training period are really great, but the predictions during the validation period are very poor. This confuses me. I could understand if both in- and out-of-sample predictions for the global model were poor during backtesting, but the fact that only the out-of-sample predictions for the global model are poor makes me think that something might be wrong with the validation step.

I understand this might be difficult to answer in the abstract, but is that result expected at all?
Just for reference, this is how I train and backtest the univariate model:
And for the global model: