The example fails and returns this error message: `ValueError: CategoricalDistribution does not support dynamic value space.`
The issue occurs because the default config is updated with
"n_pool_kernel_size": trial.suggest_categorical("n_pool_kernel_size", [[2, 2, 2], [16, 8, 1]])
This (from the tutorial/documentation) does not work:
from neuralforecast.utils import AirPassengersDF
from neuralforecast.auto import AutoNHITS
from neuralforecast.core import NeuralForecast
from neuralforecast.losses.pytorch import MAE
import optuna
Y_df = AirPassengersDF
Y_df.head()
optuna.logging.set_verbosity(optuna.logging.WARNING) # Use this to disable training prints from optuna
nhits_default_config = AutoNHITS.get_default_config(h = 12, backend="optuna") # Extract the default hyperparameter settings
def config_nhits(trial):
    config = {**nhits_default_config(trial)}
    config.update({
        "random_seed": trial.suggest_int("random_seed", 1, 10),
        "n_pool_kernel_size": trial.suggest_categorical("n_pool_kernel_size", [[2, 2, 2], [16, 8, 1]])
    })
    return config
model = AutoNHITS(h=12,
                  loss=MAE(),
                  config=config_nhits,
                  search_alg=optuna.samplers.TPESampler(),
                  backend='optuna',
                  num_samples=10)
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=Y_df, val_size=24)
Error Log:
ValueError: CategoricalDistribution does not support dynamic value space.
[W 2024-06-19 14:02:35,193] Trial 0 failed with value None.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[53], line 2
1 nf = NeuralForecast(models=[model], freq='1h')
----> 2 nf.fit(df=data.drop("ticker", axis=1), val_size=24)
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/neuralforecast/core.py:462, in NeuralForecast.fit(self, df, static_df, val_size, sort_df, use_init_models, verbose, id_col, time_col, target_col, distributed_config)
459 self._reset_models()
461 for i, model in enumerate(self.models):
--> 462 self.models[i] = model.fit(
463 self.dataset, val_size=val_size, distributed_config=distributed_config
464 )
466 self._fitted = True
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/neuralforecast/common/_base_auto.py:412, in BaseAuto.fit(self, dataset, val_size, test_size, random_seed, distributed_config)
410 best_config = results.get_best_result().config
411 else:
--> 412 results = self._optuna_tune_model(
413 cls_model=self.cls_model,
414 dataset=dataset,
415 val_size=val_size,
416 test_size=test_size,
417 verbose=self.verbose,
418 num_samples=self.num_samples,
419 search_alg=search_alg,
420 config=self.config,
421 distributed_config=distributed_config,
422 )
423 best_config = results.best_trial.user_attrs["ALL_PARAMS"]
424 self.model = self._fit_model(
425 cls_model=self.cls_model,
426 config=best_config,
(...)
430 distributed_config=distributed_config,
431 )
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/neuralforecast/common/_base_auto.py:345, in BaseAuto._optuna_tune_model(self, cls_model, dataset, val_size, test_size, verbose, num_samples, search_alg, config, distributed_config)
342 sampler = None
344 study = optuna.create_study(sampler=sampler, direction="minimize")
--> 345 study.optimize(
346 objective,
347 n_trials=num_samples,
348 show_progress_bar=verbose,
349 callbacks=self.callbacks,
350 )
351 return study
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/optuna/study/study.py:451, in Study.optimize(self, func, n_trials, timeout, n_jobs, catch, callbacks, gc_after_trial, show_progress_bar)
348 def optimize(
349 self,
350 func: ObjectiveFuncType,
(...)
357 show_progress_bar: bool = False,
358 ) -> None:
359 """Optimize an objective function.
360
361 Optimization is done by choosing a suitable set of hyperparameter values from a given
(...)
449 If nested invocation of this method occurs.
450 """
--> 451 _optimize(
452 study=self,
453 func=func,
454 n_trials=n_trials,
455 timeout=timeout,
456 n_jobs=n_jobs,
457 catch=tuple(catch) if isinstance(catch, Iterable) else (catch,),
458 callbacks=callbacks,
459 gc_after_trial=gc_after_trial,
460 show_progress_bar=show_progress_bar,
461 )
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/optuna/study/_optimize.py:62, in _optimize(study, func, n_trials, timeout, n_jobs, catch, callbacks, gc_after_trial, show_progress_bar)
60 try:
61 if n_jobs == 1:
---> 62 _optimize_sequential(
63 study,
64 func,
65 n_trials,
66 timeout,
67 catch,
68 callbacks,
69 gc_after_trial,
70 reseed_sampler_rng=False,
71 time_start=None,
72 progress_bar=progress_bar,
73 )
74 else:
75 if n_jobs == -1:
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/optuna/study/_optimize.py:159, in _optimize_sequential(study, func, n_trials, timeout, catch, callbacks, gc_after_trial, reseed_sampler_rng, time_start, progress_bar)
156 break
158 try:
--> 159 frozen_trial = _run_trial(study, func, catch)
160 finally:
161 # The following line mitigates memory problems that can be occurred in some
162 # environments (e.g., services that use computing containers such as GitHub Actions).
163 # Please refer to the following PR for further details:
164 # https://github.com/optuna/optuna/pull/325.
165 if gc_after_trial:
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/optuna/study/_optimize.py:247, in _run_trial(study, func, catch)
240 assert False, "Should not reach."
242 if (
243 frozen_trial.state == TrialState.FAIL
244 and func_err is not None
245 and not isinstance(func_err, catch)
246 ):
--> 247 raise func_err
248 return frozen_trial
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/optuna/study/_optimize.py:196, in _run_trial(study, func, catch)
194 with get_heartbeat_thread(trial._trial_id, study._storage):
195 try:
--> 196 value_or_values = func(trial)
197 except exceptions.TrialPruned as e:
198 # TODO(mamu): Handle multi-objective cases.
199 state = TrialState.PRUNED
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/neuralforecast/common/_base_auto.py:318, in BaseAuto._optuna_tune_model.<locals>.objective(trial)
317 def objective(trial):
--> 318 user_cfg = config(trial)
319 cfg = deepcopy(user_cfg)
320 model = self._fit_model(
321 cls_model=cls_model,
322 config=cfg,
(...)
326 distributed_config=distributed_config,
327 )
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/neuralforecast/common/_base_auto.py:156, in BaseAuto.__init__.<locals>.config_f(trial)
155 def config_f(trial):
--> 156 return {**config(trial), **config_base}
Cell In[51], line 7, in config_nhits(trial)
3 def config_nhits(trial):
4 config = {**nhits_default_config(trial)}
5 config.update({
6 "random_seed": 42,
----> 7 "n_pool_kernel_size": trial.suggest_categorical("n_pool_kernel_size", [[2, 2, 2], [16, 8, 1]])
8 })
9 return config
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/optuna/trial/_trial.py:404, in Trial.suggest_categorical(self, name, choices)
353 """Suggest a value for the categorical parameter.
354
355 The value is sampled from ``choices``.
(...)
399 :ref:`configurations` tutorial describes more details and flexible usages.
400 """
401 # There is no need to call self._check_distribution because
402 # CategoricalDistribution does not support dynamic value space.
--> 404 return self._suggest(name, CategoricalDistribution(choices=choices))
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/optuna/trial/_trial.py:618, in Trial._suggest(self, name, distribution)
614 trial = self._get_latest_trial()
616 if name in trial.distributions:
617 # No need to sample if already suggested.
--> 618 distributions.check_distribution_compatibility(trial.distributions[name], distribution)
619 param_value = trial.params[name]
620 else:
File ~/miniconda3/envs/env_nf/lib/python3.10/site-packages/optuna/distributions.py:670, in check_distribution_compatibility(dist_old, dist_new)
668 return
669 if dist_old != dist_new:
--> 670 raise ValueError(
671 CategoricalDistribution.__name__ + " does not support dynamic value space."
672 )
ValueError: CategoricalDistribution does not support dynamic value space.
It seems the issue is related to the name parameter in Optuna's trial.suggest_categorical. If I change the name to something different (e.g. 'n_pool_kernel_size_V2'), the code runs correctly. However, it is not entirely clear what Optuna does with the original "n_pool_kernel_size" parameter, which is part of the default config.
So, overall, it is unclear whether this is a bug to be fixed or just a matter of updating the documentation/tutorials.
What happened + What you expected to happen
I was attempting to follow the example for automatic hyperparameter tuning described here: https://nixtlaverse.nixtla.io/neuralforecast/examples/automatic_hyperparameter_tuning.html#4-optuna-backend
This works: suggesting the value under a different name (e.g. 'n_pool_kernel_size_V2').
If I inspect the results dataframe after fitting, I see both the original n_pool_kernel_size and n_pool_kernel_size_V2.
Versions / Dependencies
Python 3.10.14, Ubuntu 22.04.4 LTS
Reproduction script
See the code listing above.
Issue Severity
Low: It annoys or frustrates me.