Nixtla / neuralforecast

Scalable and user friendly neural 🧠 forecasting algorithms.
https://nixtlaverse.nixtla.io/neuralforecast
Apache License 2.0

Exception: Horizon `h=1` incompatible with `seasonality` or `trend` in stacks #955

Closed · From-zero-tohero closed this issue 2 months ago

From-zero-tohero commented 3 months ago

Exception: Horizon h=1 incompatible with seasonality or trend in stacks

elephaint commented 2 months ago

Hi, can you give a standalone example of code that produces the Exception so that we can reproduce?
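
Something along the lines of the sketch below would be enough. Note this is a hypothetical minimal setup with toy data (the series, `input_size=24` and `max_steps=10` are purely illustrative), assuming the `h=1` plus `trend`/`seasonality` combination is indeed the trigger:

import numpy as np
import pandas as pd

from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATS

# illustrative toy series in the long (unique_id, ds, y) format neuralforecast expects
Y_df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2024-01-01", periods=200, freq="h"),
    "y": np.sin(np.arange(200) / 10),
})

# h=1 combined with 'trend'/'seasonality' stacks is the suspected trigger
model = NBEATS(
    h=1,
    input_size=24,
    stack_types=["identity", "trend", "seasonality"],
    max_steps=10,
)

nf = NeuralForecast(models=[model], freq="h")
nf.fit(df=Y_df)  # expected to fail here (or at model construction, depending on the version)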

noahvand commented 2 months ago

I came across the same issue with the NBEATS model.

%%time
import random
import time

from ray import tune
from ray.tune.search.hyperopt import HyperOptSearch

from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNBEATS
from neuralforecast.losses.pytorch import MSE
from neuralforecast.models import NBEATS

start_time = time.time()

# define local variables
horizon = 1
op_runs = 3
ensemble_times = 5

# 1 define the search space
# (Y_df is the hourly training dataframe loaded elsewhere, with 'rain' and 'tide' exogenous columns)
nbeats_config = dict(
    hist_exog_list=['rain', 'tide'],
    max_steps=tune.choice([300, 600, 1000]),
    input_size=24 * 7,
    learning_rate=tune.choice([1e-2, 1e-3, 1e-4]),
    random_seed=1,
    scaler_type='minmax',
    batch_size=tune.choice([8, 16, 32, 64, 128, 256]),
    stack_types=['identity', 'trend', 'seasonality'],
)

# 2 hyperparameter optimisation
model_op = AutoNBEATS(h=horizon,
                      loss=MSE(),
                      valid_loss=MSE(),
                      config=nbeats_config,
                      search_alg=HyperOptSearch(),
                      backend='ray',
                      num_samples=op_runs)

nf_op = NeuralForecast(models=[model_op], freq='h')
nf_op.cross_validation(df=Y_df, val_size=6599, test_size=4584, n_windows=None)

# 3 ensemble forecasting with various random seeds
# retrieve the optimal hyperparameters
optimised_config = nf_op.models[0].results.get_best_result().config
print(optimised_config)

# build one model per random seed from the optimised hyperparameters
nbeats_models = []
for i in range(ensemble_times):
    nbeats_config = optimised_config.copy()
    nbeats_config['random_seed'] = random.randint(1, 20)
    nbeats_model = NBEATS(**nbeats_config,
                          early_stop_patience_steps=5,
                          val_check_steps=50)
    nbeats_models.append(nbeats_model)

# ensemble forecasting
nf_ensemble = NeuralForecast(models=nbeats_models, freq='h')
Y_hat_df = nf_ensemble.cross_validation(df=Y_df, val_size=6599, test_size=4584, n_windows=None)  # (3287, 4) (2663, 4) (4080, 4)

end_time = time.time()
differ_time = end_time - start_time  # elapsed time in seconds

Then I got the error below (sorry for the lengthy error message). The error does not appear if I only use the "identity" stack (see the workaround sketch after the log).

/usr/local/lib/python3.10/dist-packages/pytorch_lightning/utilities/parsing.py:199: Attribute 'loss' is an instance of `nn.Module` and is already saved during checkpointing. It is recommended to ignore them using `self.save_hyperparameters(ignore=['loss'])`.
/usr/local/lib/python3.10/dist-packages/pytorch_lightning/utilities/parsing.py:199: Attribute 'valid_loss' is an instance of `nn.Module` and is already saved during checkpointing. It is recommended to ignore them using `self.save_hyperparameters(ignore=['valid_loss'])`.
2024-04-11 11:31:16,420 INFO tune.py:622 -- [output] This will use the new output engine with verbosity 0. To disable the new output and use the legacy output engine, set the environment variable RAY_AIR_NEW_OUTPUT=0. For more information, please see https://github.com/ray-project/ray/issues/36949
+--------------------------------------------------------------------+
| Configuration for experiment     _train_tune_2024-04-11_11-31-16   |
+--------------------------------------------------------------------+
| Search algorithm                 SearchGenerator                   |
| Scheduler                        FIFOScheduler                     |
| Number of trials                 3                                 |
+--------------------------------------------------------------------+

View detailed results here: /root/ray_results/_train_tune_2024-04-11_11-31-16
To visualize your results with TensorBoard, run: `tensorboard --logdir /tmp/ray/session_2024-04-11_11-14-51_941718_2388/artifacts/2024-04-11_11-31-16/_train_tune_2024-04-11_11-31-16/driver_artifacts`
(_train_tune pid=8447) /usr/local/lib/python3.10/dist-packages/ray/tune/integration/pytorch_lightning.py:194: `ray.tune.integration.pytorch_lightning.TuneReportCallback` is deprecated. Use `ray.tune.integration.pytorch_lightning.TuneReportCheckpointCallback` instead.
(_train_tune pid=8447) /usr/local/lib/python3.10/dist-packages/pytorch_lightning/utilities/parsing.py:199: Attribute 'loss' is an instance of `nn.Module` and is already saved during checkpointing. It is recommended to ignore them using `self.save_hyperparameters(ignore=['loss'])`.
(_train_tune pid=8447) /usr/local/lib/python3.10/dist-packages/pytorch_lightning/utilities/parsing.py:199: Attribute 'valid_loss' is an instance of `nn.Module` and is already saved during checkpointing. It is recommended to ignore them using `self.save_hyperparameters(ignore=['valid_loss'])`.
(_train_tune pid=8447) Seed set to 1
(_train_tune pid=8447) /usr/local/lib/python3.10/dist-packages/torch/nn/init.py:452: UserWarning: Initializing zero-element tensors is a no-op
(_train_tune pid=8447)   warnings.warn("Initializing zero-element tensors is a no-op")
(_train_tune pid=8447) GPU available: True (cuda), used: True
(_train_tune pid=8447) TPU available: False, using: 0 TPU cores
(_train_tune pid=8447) IPU available: False, using: 0 IPUs
(_train_tune pid=8447) HPU available: False, using: 0 HPUs
(_train_tune pid=8447) Missing logger folder: /tmp/ray/session_2024-04-11_11-14-51_941718_2388/artifacts/2024-04-11_11-31-16/_train_tune_2024-04-11_11-31-16/working_dirs/_train_tune_19484a43_1_batch_size=16,h=1,hist_exog_list=rain_tide,input_size=168,learning_rate=0.0100,loss=ref_ph_de895953,max_ste_2024-04-11_11-31-16/lightning_logs
(_train_tune pid=8447) 2024-04-11 11:31:24.758408: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
(_train_tune pid=8447) 2024-04-11 11:31:24.758476: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
(_train_tune pid=8447) 2024-04-11 11:31:24.759935: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
(_train_tune pid=8447) 2024-04-11 11:31:25.943810: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
(_train_tune pid=8447) LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
(_train_tune pid=8447) 
(_train_tune pid=8447)   | Name         | Type          | Params
(_train_tune pid=8447) -----------------------------------------------
(_train_tune pid=8447) 0 | loss         | MSE           | 0     
(_train_tune pid=8447) 1 | valid_loss   | MSE           | 0     
(_train_tune pid=8447) 2 | padder_train | ConstantPad1d | 0     
(_train_tune pid=8447) 3 | scaler       | TemporalNorm  | 0     
(_train_tune pid=8447) 4 | blocks       | ModuleList    | 2.7 M 
(_train_tune pid=8447) -----------------------------------------------
(_train_tune pid=8447) 2.7 M     Trainable params
(_train_tune pid=8447) 845       Non-trainable params
(_train_tune pid=8447) 2.7 M     Total params
(_train_tune pid=8447) 10.856    Total estimated model params size (MB)
(_train_tune pid=8447) /usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/connectors/data_connector.py:441: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=7` in the `DataLoader` to improve performance.
Sanity Checking DataLoader 0:   0%|          | 0/1 [00:00<?, ?it/s]
2024-04-11 11:31:27,432 ERROR tune_controller.py:1332 -- Trial task failed for trial _train_tune_19484a43
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/ray/air/execution/_internal/event_manager.py", line 110, in resolve_future
    result = ray.get(future)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/worker.py", line 2667, in get
    values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/worker.py", line 866, in get_objects
    raise value
ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
    class_name: ImplicitFunc
    actor_id: 386650df80c77aa8a23d9a4501000000
    pid: 8447
    namespace: 6857756c-8448-4ebf-9a50-f81d5cd41dbd
    ip: 172.28.0.12
The actor is dead because its worker process has died. Worker exit type: SYSTEM_ERROR Worker exit detail: Worker exits unexpectedly. Worker exits with an exit code None. Traceback (most recent call last):
  File "python/ray/_raylet.pyx", line 1883, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1984, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1889, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1830, in ray._raylet.execute_task.function_executor
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/function_manager.py", line 724, in actor_method_executor
    return method(__ray_actor, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/util/tracing/tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/trainable.py", line 334, in train
    raise skipped from exception_cause(skipped)
  File "/usr/local/lib/python3.10/dist-packages/ray/air/_internal/util.py", line 88, in run
    self._ret = self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/function_trainable.py", line 53, in <lambda>
    training_func=lambda: self._trainable_func(self.config),
  File "/usr/local/lib/python3.10/dist-packages/ray/util/tracing/tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/function_trainable.py", line 261, in _trainable_func
    output = fn()
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/util.py", line 130, in inner
    return trainable(config, **fn_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_auto.py", line 209, in _train_tune
    _ = self._fit_model(
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_auto.py", line 357, in _fit_model
    model = model.fit(
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_windows.py", line 638, in fit
    return self._fit(
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_model.py", line 215, in _fit
    trainer.fit(model, datamodule=datamodule)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 987, in _run
    results = self._run_stage()
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 1031, in _run_stage
    self._run_sanity_check()
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 1060, in _run_sanity_check
    val_loop.run()
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/utilities.py", line 182, in _decorator
    return loop_run(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/evaluation_loop.py", line 135, in run
    self._evaluation_step(batch, batch_idx, dataloader_idx, dataloader_iter)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/evaluation_loop.py", line 396, in _evaluation_step
    output = call._call_strategy_hook(trainer, hook_name, *step_args)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/call.py", line 309, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/strategies/strategy.py", line 412, in validation_step
    return self.lightning_module.validation_step(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_windows.py", line 522, in validation_step
    output_batch = self(windows_batch)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/models/nbeats.py", line 401, in forward
    backcast, block_forecast = block(insample_y=residuals)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/models/nbeats.py", line 188, in forward
    backcast, forecast = self.basis(theta)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/models/nbeats.py", line 139, in forward
    backcast = torch.einsum("bp,pt->bt", backcast_theta, self.backcast_basis)
  File "/usr/local/lib/python3.10/dist-packages/torch/functional.py", line 380, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): subscript p has size 2 for operand 1 which does not broadcast with previously seen size 0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1245, in dump
    return super().dump(obj)
  File "/usr/local/lib/python3.10/dist-packages/tblib/pickling_support.py", line 46, in pickle_exception
    rv = obj.__reduce_ex__(3)
RecursionError: maximum recursion depth exceeded while calling a Python object

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "python/ray/_raylet.pyx", line 2281, in ray._raylet.task_execution_handler
  File "python/ray/_raylet.pyx", line 2177, in ray._raylet.execute_task_with_cancellation_handler
  File "python/ray/_raylet.pyx", line 1832, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1833, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 2071, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1089, in ray._raylet.store_task_errors
  File "python/ray/_raylet.pyx", line 4575, in ray._raylet.CoreWorker.store_task_outputs
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py", line 494, in serialize
    return self._serialize_to_msgpack(value)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py", line 449, in _serialize_to_msgpack
    value = value.to_bytes()
  File "/usr/local/lib/python3.10/dist-packages/ray/exceptions.py", line 32, in to_bytes
    serialized_exception=pickle.dumps(self),
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1479, in dumps
    cp.dump(obj)
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1249, in dump
    raise pickle.PicklingError(msg) from e
_pickle.PicklingError: Could not pickle object as excessively deep recursion required.
An unexpected internal error occurred while the worker was executing a task.

Trial _train_tune_19484a43 errored after 0 iterations at 2024-04-11 11:31:27. Total running time: 10s
Error file: /tmp/ray/session_2024-04-11_11-14-51_941718_2388/artifacts/2024-04-11_11-31-16/_train_tune_2024-04-11_11-31-16/driver_artifacts/_train_tune_19484a43_1_batch_size=16,h=1,hist_exog_list=rain_tide,input_size=168,learning_rate=0.0100,loss=ref_ph_de895953,max_ste_2024-04-11_11-31-16/error.txt

2024-04-11 11:31:38,208 ERROR tune_controller.py:1332 -- Trial task failed for trial _train_tune_32dc125f
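
In case it helps anyone else, here is a possible workaround sketch, assuming the same imports as my snippet above (`nbeats_config_identity` is just an illustrative name): with `h=1`, drop `trend`/`seasonality` from the search space and tune identity-only stacks. A horizon of 2 or more should also keep the trend and seasonality blocks usable.

# workaround sketch (assumes the same imports as the tuning snippet above);
# with horizon=1, search only over identity stacks, since 'trend'/'seasonality' appear to need h > 1
nbeats_config_identity = dict(
    hist_exog_list=['rain', 'tide'],
    max_steps=tune.choice([300, 600, 1000]),
    input_size=24 * 7,
    learning_rate=tune.choice([1e-2, 1e-3, 1e-4]),
    random_seed=1,
    scaler_type='minmax',
    batch_size=tune.choice([8, 16, 32, 64, 128, 256]),
    stack_types=['identity', 'identity', 'identity'],  # no 'trend'/'seasonality' at h=1
)

model_op = AutoNBEATS(h=1,
                      loss=MSE(),
                      valid_loss=MSE(),
                      config=nbeats_config_identity,
                      search_alg=HyperOptSearch(),
                      backend='ray',
                      num_samples=3)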
  File "python/ray/_raylet.pyx", line 4575, in ray._raylet.CoreWorker.store_task_outputs
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py", line 494, in serialize
    return self._serialize_to_msgpack(value)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py", line 449, in _serialize_to_msgpack
    value = value.to_bytes()
  File "/usr/local/lib/python3.10/dist-packages/ray/exceptions.py", line 32, in to_bytes
    serialized_exception=pickle.dumps(self),
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1479, in dumps
    cp.dump(obj)
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1249, in dump
    raise pickle.PicklingError(msg) from e
_pickle.PicklingError: Could not pickle object as excessively deep recursion required.
An unexpected internal error occurred while the worker was executing a task.

Trial _train_tune_32dc125f errored after 0 iterations at 2024-04-11 11:31:38. Total running time: 21s
Error file: /tmp/ray/session_2024-04-11_11-14-51_941718_2388/artifacts/2024-04-11_11-31-16/_train_tune_2024-04-11_11-31-16/driver_artifacts/_train_tune_32dc125f_2_batch_size=32,h=1,hist_exog_list=rain_tide,input_size=168,learning_rate=0.0100,loss=ref_ph_de895953,max_ste_2024-04-11_11-31-23/error.txt

(_train_tune pid=8636) /usr/local/lib/python3.10/dist-packages/ray/tune/integration/pytorch_lightning.py:194: `ray.tune.integration.pytorch_lightning.TuneReportCallback` is deprecated. Use `ray.tune.integration.pytorch_lightning.TuneReportCheckpointCallback` instead.
(_train_tune pid=8636) /usr/local/lib/python3.10/dist-packages/pytorch_lightning/utilities/parsing.py:199: Attribute 'loss' is an instance of `nn.Module` and is already saved during checkpointing. It is recommended to ignore them using `self.save_hyperparameters(ignore=['loss'])`.
(_train_tune pid=8636) /usr/local/lib/python3.10/dist-packages/pytorch_lightning/utilities/parsing.py:199: Attribute 'valid_loss' is an instance of `nn.Module` and is already saved during checkpointing. It is recommended to ignore them using `self.save_hyperparameters(ignore=['valid_loss'])`.
(_train_tune pid=8636) Seed set to 1
(_train_tune pid=8636) /usr/local/lib/python3.10/dist-packages/torch/nn/init.py:452: UserWarning: Initializing zero-element tensors is a no-op
(_train_tune pid=8636)   warnings.warn("Initializing zero-element tensors is a no-op")
(_train_tune pid=8636) GPU available: True (cuda), used: True
(_train_tune pid=8636) TPU available: False, using: 0 TPU cores
(_train_tune pid=8636) IPU available: False, using: 0 IPUs
(_train_tune pid=8636) HPU available: False, using: 0 HPUs
(_train_tune pid=8636) Missing logger folder: /tmp/ray/session_2024-04-11_11-14-51_941718_2388/artifacts/2024-04-11_11-31-16/_train_tune_2024-04-11_11-31-16/working_dirs/_train_tune_d94cae14_3_batch_size=128,h=1,hist_exog_list=rain_tide,input_size=168,learning_rate=0.0001,loss=ref_ph_de895953,max_st_2024-04-11_11-31-33/lightning_logs
(_train_tune pid=8636) 2024-04-11 11:31:46.919724: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
(_train_tune pid=8636) 2024-04-11 11:31:46.919788: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
(_train_tune pid=8636) 2024-04-11 11:31:46.921101: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
(_train_tune pid=8636) 2024-04-11 11:31:48.151458: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
(_train_tune pid=8636) LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
(_train_tune pid=8636) 
(_train_tune pid=8636)   | Name         | Type          | Params
(_train_tune pid=8636) -----------------------------------------------
(_train_tune pid=8636) 0 | loss         | MSE           | 0     
(_train_tune pid=8636) 1 | valid_loss   | MSE           | 0     
(_train_tune pid=8636) 2 | padder_train | ConstantPad1d | 0     
(_train_tune pid=8636) 3 | scaler       | TemporalNorm  | 0     
(_train_tune pid=8636) 4 | blocks       | ModuleList    | 2.7 M 
(_train_tune pid=8636) -----------------------------------------------
(_train_tune pid=8636) 2.7 M     Trainable params
(_train_tune pid=8636) 845       Non-trainable params
(_train_tune pid=8636) 2.7 M     Total params
(_train_tune pid=8636) 10.856    Total estimated model params size (MB)
Sanity Checking: |          | 0/? [00:00<?, ?it/s]
Sanity Checking DataLoader 0:   0%|          | 0/1 [00:00<?, ?it/s]
(_train_tune pid=8636) /usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/connectors/data_connector.py:441: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=7` in the `DataLoader` to improve performance.
2024-04-11 11:31:49,642 ERROR tune_controller.py:1332 -- Trial task failed for trial _train_tune_d94cae14
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/ray/air/execution/_internal/event_manager.py", line 110, in resolve_future
    result = ray.get(future)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/worker.py", line 2667, in get
    values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/worker.py", line 866, in get_objects
    raise value
ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
    class_name: ImplicitFunc
    actor_id: 246bf316913a4ed9f5ffe22301000000
    pid: 8636
    namespace: 6857756c-8448-4ebf-9a50-f81d5cd41dbd
    ip: 172.28.0.12
The actor is dead because its worker process has died. Worker exit type: SYSTEM_ERROR Worker exit detail: Worker exits unexpectedly. Worker exits with an exit code None. Traceback (most recent call last):
  File "python/ray/_raylet.pyx", line 1883, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1984, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1889, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1830, in ray._raylet.execute_task.function_executor
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/function_manager.py", line 724, in actor_method_executor
    return method(__ray_actor, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/util/tracing/tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/trainable.py", line 334, in train
    raise skipped from exception_cause(skipped)
  File "/usr/local/lib/python3.10/dist-packages/ray/air/_internal/util.py", line 88, in run
    self._ret = self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/function_trainable.py", line 53, in <lambda>
    training_func=lambda: self._trainable_func(self.config),
  File "/usr/local/lib/python3.10/dist-packages/ray/util/tracing/tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/function_trainable.py", line 261, in _trainable_func
    output = fn()
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/util.py", line 130, in inner
    return trainable(config, **fn_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_auto.py", line 209, in _train_tune
    _ = self._fit_model(
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_auto.py", line 357, in _fit_model
    model = model.fit(
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_windows.py", line 638, in fit
    return self._fit(
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_model.py", line 215, in _fit
    trainer.fit(model, datamodule=datamodule)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 987, in _run
    results = self._run_stage()
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 1031, in _run_stage
    self._run_sanity_check()
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 1060, in _run_sanity_check
    val_loop.run()
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/utilities.py", line 182, in _decorator
    return loop_run(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/evaluation_loop.py", line 135, in run
    self._evaluation_step(batch, batch_idx, dataloader_idx, dataloader_iter)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/evaluation_loop.py", line 396, in _evaluation_step
    output = call._call_strategy_hook(trainer, hook_name, *step_args)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/call.py", line 309, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/strategies/strategy.py", line 412, in validation_step
    return self.lightning_module.validation_step(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_windows.py", line 522, in validation_step
    output_batch = self(windows_batch)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/models/nbeats.py", line 401, in forward
    backcast, block_forecast = block(insample_y=residuals)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/models/nbeats.py", line 188, in forward
    backcast, forecast = self.basis(theta)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/models/nbeats.py", line 139, in forward
    backcast = torch.einsum("bp,pt->bt", backcast_theta, self.backcast_basis)
  File "/usr/local/lib/python3.10/dist-packages/torch/functional.py", line 380, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): subscript p has size 2 for operand 1 which does not broadcast with previously seen size 0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1245, in dump
    return super().dump(obj)
  File "/usr/local/lib/python3.10/dist-packages/tblib/pickling_support.py", line 46, in pickle_exception
    rv = obj.__reduce_ex__(3)
RecursionError: maximum recursion depth exceeded while calling a Python object

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "python/ray/_raylet.pyx", line 2281, in ray._raylet.task_execution_handler
  File "python/ray/_raylet.pyx", line 2177, in ray._raylet.execute_task_with_cancellation_handler
  File "python/ray/_raylet.pyx", line 1832, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1833, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 2071, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1089, in ray._raylet.store_task_errors
  File "python/ray/_raylet.pyx", line 4575, in ray._raylet.CoreWorker.store_task_outputs
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py", line 494, in serialize
    return self._serialize_to_msgpack(value)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py", line 449, in _serialize_to_msgpack
    value = value.to_bytes()
  File "/usr/local/lib/python3.10/dist-packages/ray/exceptions.py", line 32, in to_bytes
    serialized_exception=pickle.dumps(self),
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1479, in dumps
    cp.dump(obj)
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1249, in dump
    raise pickle.PicklingError(msg) from e
_pickle.PicklingError: Could not pickle object as excessively deep recursion required.
An unexpected internal error occurred while the worker was executing a task.
2024-04-11 11:31:49,651 WARNING experiment_state.py:205 -- Experiment state snapshotting has been triggered multiple times in the last 5.0 seconds. A snapshot is forced if `CheckpointConfig(num_to_keep)` is set, and a trial has checkpointed >= `num_to_keep` times since the last snapshot.
You may want to consider increasing the `CheckpointConfig(num_to_keep)` or decreasing the frequency of saving checkpoints.
You can suppress this error by setting the environment variable TUNE_WARN_EXCESSIVE_EXPERIMENT_CHECKPOINT_SYNC_THRESHOLD_S to a smaller value than the current threshold (5.0).
2024-04-11 11:31:49,654 INFO tune.py:1016 -- Wrote the latest version of all result files and experiment state to '/root/ray_results/_train_tune_2024-04-11_11-31-16' in 0.0063s.
2024-04-11 11:31:49,656 ERROR tune.py:1044 -- Trials did not complete: [_train_tune_19484a43, _train_tune_32dc125f, _train_tune_d94cae14]
2024-04-11 11:31:49,661 WARNING experiment_analysis.py:568 -- Could not find best trial. Did you pass the correct `metric` parameter?
(raylet) [... repeated traceback from the dead worker: RuntimeError: einsum(): subscript p has size 2 for operand 1 which does not broadcast with previously seen size 0, followed by the same RecursionError and _pickle.PicklingError while serializing the exception ...]
An unexpected internal error occurred while the worker was executing a task.
(raylet) A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker. RayTask ID: ffffffffffffffff246bf316913a4ed9f5ffe22301000000 Worker ID: 3645e0cba0b5092ebe6493de01c8a5c932433ce5d524760d097c0e36 Node ID: 3736702c9ec1e64507fbd72f7998d7271e8fa79341b6b57e9a75b392 Worker IP address: 172.28.0.12 Worker port: 46587 Worker PID: 8636 Worker exit type: SYSTEM_ERROR Worker exit detail: Worker exits unexpectedly. Worker exits with an exit code None. Traceback (most recent call last):
  File "python/ray/_raylet.pyx", line 1883, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1984, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1889, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1830, in ray._raylet.execute_task.function_executor
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/function_manager.py", line 724, in actor_method_executor
    return method(__ray_actor, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/util/tracing/tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/trainable.py", line 334, in train
    raise skipped from exception_cause(skipped)
  File "/usr/local/lib/python3.10/dist-packages/ray/air/_internal/util.py", line 88, in run
    self._ret = self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/function_trainable.py", line 53, in <lambda>
    training_func=lambda: self._trainable_func(self.config),
  File "/usr/local/lib/python3.10/dist-packages/ray/util/tracing/tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/function_trainable.py", line 261, in _trainable_func
    output = fn()
  File "/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/util.py", line 130, in inner
    return trainable(config, **fn_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_auto.py", line 209, in _train_tune
    _ = self._fit_model(
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_auto.py", line 357, in _fit_model
    model = model.fit(
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_windows.py", line 638, in fit
    return self._fit(
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_model.py", line 215, in _fit
    trainer.fit(model, datamodule=datamodule)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 987, in _run
    results = self._run_stage()
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 1031, in _run_stage
    self._run_sanity_check()
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 1060, in _run_sanity_check
    val_loop.run()
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/utilities.py", line 182, in _decorator
    return loop_run(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/evaluation_loop.py", line 135, in run
    self._evaluation_step(batch, batch_idx, dataloader_idx, dataloader_iter)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/evaluation_loop.py", line 396, in _evaluation_step
    output = call._call_strategy_hook(trainer, hook_name, *step_args)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/call.py", line 309, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/strategies/strategy.py", line 412, in validation_step
    return self.lightning_module.validation_step(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/common/_base_windows.py", line 522, in validation_step
    output_batch = self(windows_batch)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/models/nbeats.py", line 401, in forward
    backcast, block_forecast = block(insample_y=residuals)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/models/nbeats.py", line 188, in forward
    backcast, forecast = self.basis(theta)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/neuralforecast/models/nbeats.py", line 139, in forward
    backcast = torch.einsum("bp,pt->bt", backcast_theta, self.backcast_basis)
  File "/usr/local/lib/python3.10/dist-packages/torch/functional.py", line 380, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): subscript p has size 2 for operand 1 which does not broadcast with previously seen size 0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1245, in dump
    return super().dump(obj)
  File "/usr/local/lib/python3.10/dist-packages/tblib/pickling_support.py", line 46, in pickle_exception
    rv = obj.__reduce_ex__(3)
RecursionError: maximum recursion depth exceeded while calling a Python object

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "python/ray/_raylet.pyx", line 2281, in ray._raylet.task_execution_handler
  File "python/ray/_raylet.pyx", line 2177, in ray._raylet.execute_task_with_cancellation_handler
  File "python/ray/_raylet.pyx", line 1832, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1833, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 2071, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1089, in ray._raylet.store_task_errors
  File "python/ray/_raylet.pyx", line 4575, in ray._raylet.CoreWorker.store_task_outputs
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py", line 494, in serialize
    return self._serialize_to_msgpack(value)
  File "/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py", line 449, in _serialize_to_msgpack
    value = value.to_bytes()
  File "/usr/local/lib/python3.10/dist-packages/ray/exceptions.py", line 32, in to_bytes
    serialized_exception=pickle.dumps(self),
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1479, in dumps
    cp.dump(obj)
  File "/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle.py", line 1249, in dump
    raise pickle.PicklingError(msg) from e
_pickle.PicklingError: Could not pickle object as excessively deep recursion required.
An unexpected internal error occurred while the worker was executing a task.

Trial _train_tune_d94cae14 errored after 0 iterations at 2024-04-11 11:31:49. Total running time: 33s
Error file: /tmp/ray/session_2024-04-11_11-14-51_941718_2388/artifacts/2024-04-11_11-31-16/_train_tune_2024-04-11_11-31-16/driver_artifacts/_train_tune_d94cae14_3_batch_size=128,h=1,hist_exog_list=rain_tide,input_size=168,learning_rate=0.0001,loss=ref_ph_de895953,max_st_2024-04-11_11-31-33/error.txt

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<timed exec> in <module>

/usr/local/lib/python3.10/dist-packages/neuralforecast/core.py in cross_validation(self, df, static_df, n_windows, step_size, val_size, test_size, sort_df, use_init_models, verbose, refit, id_col, time_col, target_col, **data_kwargs)
    979             df = df.reset_index(id_col)
    980         if not refit:
--> 981             return self._no_refit_cross_validation(
    982                 df=df,
    983                 static_df=static_df,

2 frames
/usr/local/lib/python3.10/dist-packages/ray/tune/result_grid.py in get_best_result(self, metric, mode, scope, filter_nan_and_inf)
    158                 else "."
    159             )
--> 160             raise RuntimeError(error_msg)
    161 
    162         return self._trial_to_result(best_trial)

RuntimeError: No best trial found for the given metric: loss. This means that no trial has reported this metric, or all values reported for this metric are NaN. To not ignore NaN values, you can set the `filter_nan_and_inf` arg to False.
elephaint commented 2 months ago

NBEATS is not compatible with `h=1` when `stack_types` includes `'seasonality'` or `'trend'`. With a one-step horizon those stacks produce degenerate (zero-element) parameter tensors, visible in the "Initializing zero-element tensors is a no-op" warning, which leads to the einsum shape mismatch above; every trial then crashes before reporting a metric, which is why the run ends with "No best trial found for the given metric: loss". NeuralForecast already raises an exception for this combination in NBEATSx, but not yet in NBEATS.

We will add the same exception to NBEATS.
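
For reference, the check in NBEATSx amounts to rejecting this combination at construction time; adding it to NBEATS turns the opaque worker crash above into an immediate, readable error. The snippet below is only a sketch of that validation (the helper name is illustrative, not the exact library code):

# Sketch of the intended validation, mirroring the NBEATSx behaviour; the
# exact implementation in neuralforecast may differ.
def validate_nbeats_stacks(h: int, stack_types: list) -> None:
    if h == 1 and ("seasonality" in stack_types or "trend" in stack_types):
        raise Exception(
            "Horizon `h=1` incompatible with `seasonality` or `trend` in stacks"
        )

# Example: this raises for the configuration used above.
# validate_nbeats_stacks(h=1, stack_types=['identity', 'trend', 'seasonality'])

In the meantime, a workaround with `h=1` is to restrict `stack_types` to identity stacks (the generic N-BEATS basis), which do not depend on the horizon length. A minimal sketch, assuming the same hourly data `Y_df` as in the snippet above and purely illustrative hyperparameters:

from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATS
from neuralforecast.losses.pytorch import MSE

# Single-step horizon with identity-only stacks avoids the degenerate
# trend/seasonality bases that trigger the einsum error.
model = NBEATS(
    h=1,
    input_size=24 * 7,
    loss=MSE(),
    stack_types=['identity', 'identity', 'identity'],  # no 'trend'/'seasonality'
    max_steps=300,
)
nf = NeuralForecast(models=[model], freq='h')
# nf.fit(df=Y_df)  # or nf.cross_validation(...) as in the original snippet

The same applies to AutoNBEATS: keep 'trend' and 'seasonality' out of the `stack_types` entry of the search space when tuning with `h=1`, or use a horizon larger than one.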