Hello! Please read our tutorial on hyperparameter optimization. It shows how to pass random_seed and max_steps to the configuration.
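For example, a rough sketch (not the tutorial's exact code; the tunable entries and values below are placeholders): fixed values can sit next to tunable ones in the config you pass to an auto model.

from ray import tune
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoLSTM

config = {
    "input_size": tune.choice([24, 48]),            # still tuned
    "learning_rate": tune.loguniform(1e-4, 1e-2),   # still tuned
    "max_steps": 500,                               # fixed for every trial
    "random_seed": 0,                               # fixed for every trial
}
nf = NeuralForecast(models=[AutoLSTM(h=12, config=config, num_samples=5)], freq="D")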
Once optimization is done, you can then get the information from all runs (assuming you are using Ray) with:
results = nf.models[0].results.get_dataframe()
results.head()
You can then sort the DataFrame by the loss or the training loss, as you wish, and the values of each hyperparameter are stored in the columns starting with config.
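For instance, a sketch assuming the reported metric column is named loss (adjust to whatever metric your run reports):

results = nf.models[0].results.get_dataframe()
config_cols = [c for c in results.columns if c.startswith("config")]   # hyperparameter columns
print(results.sort_values("loss")[["loss", *config_cols]].head())      # best trials first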
Hi @marcopeix, thanks for your response. I am not sure I am able to follow you. Can you please elaborate in detail on the points below?
Given a hyperparameter space, I would like to reproduce the set of points selected from that space that will then be tried. How can I use the above snippet for that?
Based on https://github.com/Nixtla/neuralforecast/issues/924 and related discussions with @elephaint, my understanding was that it was not possible to control the hyperparameter space partially: either I have to pass a full configuration space, or I have to use the default space. For example, if I only want to fix max_steps as 500 and leave everything else at the default hyperparameter space, that is not possible. Can you please tell me if that problem is fixed now? All I want is to control specific parameters and leave the rest at their defaults.
Hi @marcopeix, just checking in case you missed the above questions.
Hey @yarnabrina.
You can control this with the search_alg argument of the auto models' constructor. Its default is None, which is the same default as the Study. If you want to fix the space there you have to provide a fixed sampler, e.g. search_alg=optuna.samplers.TPESampler(seed=0).
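A minimal sketch of that (assuming the optuna backend; num_samples is just a placeholder):

import optuna
from neuralforecast.auto import AutoLSTM

# seeding the sampler makes the drawn hyperparameter combinations reproducible
model = AutoLSTM(
    h=12,
    backend="optuna",
    search_alg=optuna.samplers.TPESampler(seed=0),
    num_samples=10,
)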
https://github.com/Nixtla/neuralforecast/blob/48952849616d1284f2e75360a685fc89be869c50/neuralforecast/common/_base_auto.py#L344-L347

Based on one of your previous requests, all of the auto models now have a get_default_config method, which returns the defaults and allows you to override specific keys for ray (example). There's also an example for optuna, but that doesn't work very well because optuna tracks the names of the parameters that are already defined, so you have to give them different names and the old ones will still show up in the search (even though they won't be used).
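For reference, a sketch of what that optuna-side override could look like (assumed usage, not the linked example verbatim, with the caveat mentioned above):

from neuralforecast.auto import AutoLSTM

default_config = AutoLSTM.get_default_config(h=12, backend="optuna")

def config_fn(trial):
    cfg = default_config(trial)   # suggests the default search space on the trial
    cfg["max_steps"] = 100        # overridden values; if these keys are part of the default
    cfg["random_seed"] = 0        # space, the originally suggested values still show up in
    return cfg                    # the trial parameters even though they are not used

model = AutoLSTM(h=12, backend="optuna", config=config_fn, num_samples=10)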
Thanks for your detailed response @jmoralez, really appreciated. I can confirm these will resolve the two questions I asked above.
However, since these are used much more in practice by end users, and since they do not affect the architecture itself, I feel it may make sense to expose them as direct arguments of the Auto* models themselves. For example, case 2 seems more user friendly to me than case 1:
# case 1
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoGRU, AutoLSTM

gru_defaults = AutoGRU.get_default_config(12, "ray")
lstm_defaults = AutoLSTM.get_default_config(12, "ray")
model = NeuralForecast(
    [
        AutoGRU(12, config={**gru_defaults, "max_steps": 100, "random_seed": 0}),
        AutoLSTM(12, config={**lstm_defaults, "max_steps": 100, "random_seed": 0}),
    ],
    "D",
)
# case 2 (this does not work now, of course)
model = NeuralForecast(
    [
        AutoGRU(12, max_steps=100, random_seed=0),
        AutoLSTM(12, max_steps=100, random_seed=0),
    ],
    "D",
)
But it is just a request; I can work with case 1 as you suggested for direct use, and will reconsider the sktime interfacing issues later.
They are defined there because the user may want to tune those as well, e.g. config={**lstm_defaults, "max_steps": tune.choice([100, 200, 300])}. Adding those to the init signature introduces more complexity because then it's not clear which one takes precedence, etc.
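In other words, something along these lines (a self-contained sketch mirroring the case 1 example above):

from ray import tune
from neuralforecast.auto import AutoLSTM

lstm_defaults = AutoLSTM.get_default_config(12, "ray")
# max_steps itself becomes part of the search space instead of a fixed value
model = AutoLSTM(
    12,
    config={**lstm_defaults, "max_steps": tune.choice([100, 200, 300]), "random_seed": 0},
)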
Description
Most (if not all) non-auto models have a random_seed argument, but this option seems to be missing from the Auto* models. It will be very useful to have such an option exposed, so that we can ensure the same hyperparameter configuration is selected every time and, for a specific choice, reproduce it with a non-Auto model.

Similarly, there does not seem to be an option to control the training iterations of Auto models, even though it is present in non-auto models through the max_steps argument. Besides that argument, which restricts num_batch * num_epoch, it would also be useful to have an argument that restricts num_epoch directly, in case I want to set it to a particular number irrespective of num_batch.

Use case
No response
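For context, a sketch of the existing non-auto behaviour this request mirrors (argument values are placeholders):

from neuralforecast import NeuralForecast
from neuralforecast.models import LSTM

# non-auto models already expose these as constructor arguments
nf = NeuralForecast(models=[LSTM(h=12, max_steps=500, random_seed=0)], freq="D")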