Hi @LinearParadox, try something like:
from ray import tune

search_space = {
    "model_params": {
        "n_hidden": tune.choice([64, 128, 256]),
        "n_layers": tune.choice([1, 2, 3, 4]),
        "n_latent": tune.choice([10, 20, 30, 40, 50]),
        "gene_likelihood": tune.choice(["nb", "zinb"]),
    },
    "train_params": {
        "max_epochs": 100,
        "plan_kwargs": {"lr": tune.loguniform(1e-4, 1e-2)},
    },
}
That should work.
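For reference, here is a minimal sketch of how such a search space can be passed to the tuner. It assumes a recent scvi-tools release where scvi.autotune.run_autotune is available, and it uses scvi.data.synthetic_iid() only as a stand-in dataset; the exact argument names follow the autotune tutorial and may differ slightly between versions.

import scvi
from scvi import autotune

# Placeholder dataset; replace with your own AnnData object.
adata = scvi.data.synthetic_iid()

# Register the data with the model class before tuning.
model_cls = scvi.model.SCVI
model_cls.setup_anndata(adata)

# search_space defined as above. Each trial samples one configuration
# and reports the validation ELBO back to Ray Tune.
experiment = autotune.run_autotune(
    model_cls,
    data=adata,
    metrics=["elbo_validation"],
    mode="min",
    search_space=search_space,
    num_samples=10,
)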
Got it, that worked! I think the autotune tutorial may have a slight typo related to this.
The search space is:
search_space = {
    "model_params": {
        "n_hidden": tune.choice([64, 128, 256]),
        "n_layers": tune.choice([1, 2, 3]),
    },
    "train_params": {"max_epochs": 100},
}
It says we might get these two models if we run two samples:
model1 = {
"n_hidden": 64,
"n_layers": 1,
"lr": 0.001,
}
model2 = {
"n_hidden": 64,
"n_layers": 3,
"lr": 0.0001,
}
The learning rate is not set in the search space, yet these two models appear to have different learning rates. I'm not sure whether one of them has an extra zero, or whether autotune varies the learning rate automatically.
You are correct, we will change that. Thanks for noticing!
Previously you could sample the learning rate from a distribution. When I try to reimplement similar behavior with autotune, it throws an error during model training.
Versions: