Closed · pranavvp16 closed this issue 6 months ago
Thanks for reporting - I think `input_size_multiplier` doesn't work as a configurable hyperparameter in any case right now, so this is something that needs to be fixed too.
You can fix this (monkey patch) by:

- Removing `input_size_multiplier` from the config, i.e. `del config['input_size_multiplier']`
- Removing `inference_input_size_multiplier` from the config, i.e. `del config['inference_input_size_multiplier']`
- Adding `input_size` and `inference_input_size` to the config with your values of choice.

It's a bit involved, unfortunately. The default_config takes the user-supplied value for the horizon and uses it to calculate default values for some parameters. It makes use of these `*_multiplier` params for that, but these are not params that the underlying model can handle. So when you extract the default_config, these `*_multiplier` params need to be removed from the config and replaced by their 'normal' counterparts. Not ideal; it's something we have to think about how/if to fix.
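The remove-and-replace steps above can be sketched on a plain config dict. The key names come from this discussion; the placeholder values and the horizon-based arithmetic are illustrative assumptions, not the library's verified defaults:

```python
# Sketch of the monkey patch described above, applied to a plain dict.
# Key names come from the discussion; the values and the multiplier
# arithmetic are illustrative assumptions, not the actual library defaults.
h = 12  # forecast horizon supplied by the user

config = {
    "input_size_multiplier": [1, 4, 16],
    "inference_input_size_multiplier": [1],
    "max_steps": 1000,
}

# 1) Remove the *_multiplier params the underlying model cannot handle.
config.pop("input_size_multiplier", None)
config.pop("inference_input_size_multiplier", None)

# 2) Add their 'normal' counterparts with concrete values of your choice,
#    e.g. derived from the horizon the way the defaults would be.
config["input_size"] = 4 * h
config["inference_input_size"] = 4 * h

print(config)
```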
I think this issue is related to the other documentation issue I created this morning: #929. That one is blocking me for the Auto* adapter, and I guess this one will block @pranavvp16 from inheriting it for the AutoLSTM interface in sktime.
Regarding the default config update, would it help to have a separate dataclass/pydantic.BaseModel etc. per model to validate and update hyperparameters?
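One possible shape for that suggestion, as a minimal sketch: a small dataclass per model whose fields are exactly the params the underlying model understands, validated at construction time. The field names mirror the params discussed in this thread; the defaults and checks are illustrative assumptions, not the library's actual values:

```python
from dataclasses import dataclass, asdict


@dataclass
class LSTMConfig:
    # Field names mirror the 'normal' params discussed above;
    # default values are illustrative, not the library's defaults.
    input_size: int = 48
    inference_input_size: int = 48
    max_steps: int = 1000

    def __post_init__(self) -> None:
        # Validate once, up front, instead of failing deep inside fit().
        if self.input_size <= 0:
            raise ValueError("input_size must be positive")
        if self.max_steps <= 0:
            raise ValueError("max_steps must be positive")


# Only validated, model-understood keys ever reach the underlying model;
# there is no way to smuggle an unknown *_multiplier key through.
config = asdict(LSTMConfig(input_size=24))
print(config)
```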
What happened + What you expected to happen
I tried to update the default config for the Auto model `AutoLSTM` as mentioned in issue #924, but the model fails to train when called with `model.fit`. It seems the config is being passed to `Trainer` from PyTorch Lightning. I got the same error when I updated `max_steps` too, so I think updating any key in the default config and passing it to the model doesn't work correctly. Here is a notebook illustrating the issue.

Versions / Dependencies
neuralforecast == 1.6.4
Reproduction script
Issue Severity
High: It blocks me from completing my task.