Nixtla / neuralforecast

Scalable and user friendly neural 🧠 forecasting algorithms.
https://nixtlaverse.nixtla.io/neuralforecast
Apache License 2.0

[BUG] Model refit after cross-validation #980

Closed · elephaint closed this 2 months ago

elephaint commented 2 months ago

What happened + What you expected to happen

Raising this issue so that I don't forget to solve it (the change is simple, but I need to spend a bit more time on it).

In Auto* models, it seems we refit the model after finding the best set of hyperparameters here: https://github.com/Nixtla/neuralforecast/blob/0c1a7607ce31aae6db8f53a583c1238e56f821e9/neuralforecast/common/_base_auto.py#L424

However, it seems it should be val_size = val_size * self.refit_with_val instead of val_size = val_size * (1 - self.refit_with_val).

refit_with_val is a boolean that is False by default, so the current code refits with a validation set by default, since val_size will be > 0. The boolean therefore works the wrong way around: I'd assume that if refit_with_val=False, you don't want to use a validation set in the final fit, but the current implementation does the opposite, as sketched below.
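A minimal sketch of the boolean arithmetic involved. Only the two val_size expressions come from the issue; the concrete numbers and variable setup are assumed for illustration:

```python
# Illustrative sketch of the reported behaviour; only the two val_size
# expressions are taken from the issue, the rest is assumed for demonstration.
val_size = 48            # hypothetical validation window length
refit_with_val = False   # default value of refit_with_val

current  = val_size * (1 - refit_with_val)  # current code: 48 * 1 = 48 -> refits WITH a validation set
proposed = val_size * refit_with_val        # proposed fix: 48 * 0 = 0  -> refits WITHOUT a validation set

print(current, proposed)  # 48 0
```

With the default refit_with_val=False, the current expression keeps the full validation split, while the proposed expression zeroes it out, which matches the expected meaning of the flag.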

Versions / Dependencies

1.7.1

Reproduction script

n/a

Issue Severity

Low: It annoys or frustrates me.