Nixtla / neuralforecast

Scalable and user friendly neural :brain: forecasting algorithms.
https://nixtlaverse.nixtla.io/neuralforecast
Apache License 2.0

Transfer learning tutorial issue with --ntasks error #1046

Closed · neilmartindev closed this issue 1 week ago

neilmartindev commented 1 week ago

What happened + What you expected to happen

Hey, I was following the tutorial on Transfer Learning and tried to run this section:

```python
horizon = 12
stacks = 3
models = [NHITS(input_size=5 * horizon,
                h=horizon,
                max_steps=100,
                stack_types=stacks * ['identity'],
                n_blocks=stacks * [1],
                mlp_units=[[256, 256] for _ in range(stacks)],
                n_pool_kernel_size=stacks * [1],
                batch_size=32,
                scaler_type='standard',
                n_freq_downsample=[12, 4, 1])]
nf = NeuralForecast(models=models, freq='M')
nf.fit(df=Y_df)

nf.save(path='./results/transfer/', model_index=None, overwrite=True, save_dataset=False)
```

However, I got this runtime error:

RuntimeError: You set `--ntasks=6` in your SLURM bash script, but this variable is not supported. HINT: Use `--ntasks-per-node=6` instead.

I've not used or touched SLURM before, but I did look into my slurm.py script and can't see anything that jumps out at me. Do you have any advice? I'm a new PhD student, so any help is really appreciated!
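For what it's worth, the hint in the error seems to point at the batch script that launched the job rather than at neuralforecast itself, i.e. replacing `--ntasks=6` with `--ntasks-per-node=6` in the `#SBATCH` directives. As an untested sketch of an alternative (assuming the job really only needs a single node), clearing `SLURM_NTASKS` before fitting should stop PyTorch Lightning from auto-detecting a SLURM cluster:

```python
import os

# Untested workaround sketch: if the job only needs one node, removing
# SLURM_NTASKS keeps PyTorch Lightning from auto-detecting a SLURM cluster
# and validating the --ntasks setting.
os.environ.pop("SLURM_NTASKS", None)

# models and Y_df defined as in the tutorial snippet above
nf = NeuralForecast(models=models, freq='M')
nf.fit(df=Y_df)
```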

Thanks, Neil

Versions / Dependencies

Python 3.10.4, Linux 4.18.0-372.32.1.el8_6.x86_64

Reproduction script

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

# Y_df is the training DataFrame from the transfer learning tutorial
horizon = 12
stacks = 3
models = [NHITS(input_size=5 * horizon,
                h=horizon,
                max_steps=100,
                stack_types=stacks * ['identity'],
                n_blocks=stacks * [1],
                mlp_units=[[256, 256] for _ in range(stacks)],
                n_pool_kernel_size=stacks * [1],
                batch_size=32,
                scaler_type='standard',
                n_freq_downsample=[12, 4, 1])]
nf = NeuralForecast(models=models, freq='M')
nf.fit(df=Y_df)

nf.save(path='./results/transfer/', model_index=None, overwrite=True, save_dataset=False)
```

Issue Severity

Medium: It is a significant difficulty but I can work around it.

neilmartindev commented 1 week ago

I mixed up MLForecast with this, sorry!