What happened + What you expected to happen
Raising the issue so that I don't forget to solve it [the change is simple but I need to spend a bit more time on it]
In Auto* models, it seems we refit the model after finding the best set of hyperparameters here: https://github.com/Nixtla/neuralforecast/blob/0c1a7607ce31aae6db8f53a583c1238e56f821e9/neuralforecast/common/_base_auto.py#L424
However, it seems it should be `val_size = val_size * self.refit_with_val` instead of `val_size = val_size * (1 - self.refit_with_val)`.

`refit_with_val` is a boolean that is `False` by default, so the current code will (by default) refit with a validation set, since `val_size` will be > 0. The boolean therefore works the wrong way around: I'd assume that if `refit_with_val=False`, you don't want to use a validation set in the final fit, but the current implementation does the opposite.

Versions / Dependencies
1.7.1
Reproduction script
n/a
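Not an end-to-end reproduction, but a minimal sketch (plain Python, not the library code) of how the boolean flips the arithmetic; the function names are made up for illustration:

```python
def current_val_size(val_size: int, refit_with_val: bool) -> int:
    # What the code does today: multiplying by (1 - refit_with_val)
    # keeps the validation set when refit_with_val is False.
    return val_size * (1 - refit_with_val)


def proposed_val_size(val_size: int, refit_with_val: bool) -> int:
    # Proposed fix: multiplying by refit_with_val drops the
    # validation set when refit_with_val is False.
    return val_size * refit_with_val


print(current_val_size(24, refit_with_val=False))   # 24 -> final fit still uses a validation set
print(proposed_val_size(24, refit_with_val=False))  # 0  -> no validation set, as expected
```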
Issue Severity
Low: It annoys or frustrates me.