Open iirekm opened 3 years ago
Hi @iirekm
Do you know if Optuna, for example, supports it?
The other option is that we would need to do the linear-to-log scaling internally and expose only the linear scale to the external implementations ...
Optuna has log-scaled hyperparameters (just add log=True to suggest_xxx), but unfortunately it doesn't have reverse log; that has to be simulated with a 1 - log parameter.
Hmm, that means we would have to simulate the entire feature so it is available to all optimizers.
I guess we could inherit from UniformParameterRange and just log/exp the parent choice.
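Something like the following sketch, perhaps. The `UniformParameterRange` stand-in here is hypothetical (a minimal stub with the assumed `get_value()` interface, not the framework's actual class); the subclass just stores the bounds in log space and exponentiates the parent's uniform draw:

```python
import math
import random


class UniformParameterRange:
    """Hypothetical minimal stand-in for the framework's uniform range class."""

    def __init__(self, name, min_value, max_value):
        self.name = name
        self.min_value = min_value
        self.max_value = max_value

    def get_value(self):
        # Uniform draw in linear space
        return {self.name: random.uniform(self.min_value, self.max_value)}


class LogUniformParameterRange(UniformParameterRange):
    """Inherit from UniformParameterRange and log/exp the parent's choice:
    the parent samples uniformly in [log10(min), log10(max)], and we
    exponentiate the result, giving a log-uniform distribution."""

    def __init__(self, name, min_value, max_value):
        super().__init__(name, math.log10(min_value), math.log10(max_value))

    def get_value(self):
        value = super().get_value()
        return {self.name: 10 ** value[self.name]}
```

That way the optimizer only ever sees a plain uniform range, and the log scaling stays an implementation detail of the subclass.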
They both are needed: logarithmic, e.g. for learning-rate hyperparameters, and reverse logarithmic, e.g. for the gamma parameter in reinforcement learning. A logarithmic int parameter can sometimes be good too, e.g. for the number of neurons in a layer.
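The reverse-log case (e.g. gamma close to 1) can be simulated with the 1 - log trick mentioned earlier. A stdlib-only sketch, with hypothetical helper names, assuming the parameter lives just below 1:

```python
import math
import random


def log_uniform(low, high):
    """Draw a value whose log10 is uniform between log10(low) and log10(high)."""
    return 10 ** random.uniform(math.log10(low), math.log10(high))


def reverse_log_uniform(low, high):
    """Reverse-log sampling: samples are dense near `high` instead of near `low`.
    Simulated as 1 - log_uniform, mirroring the range around 1."""
    return 1 - log_uniform(1 - high, 1 - low)


# e.g. gamma in [0.9, 0.999], with most samples concentrated near 0.999
gamma = reverse_log_uniform(0.9, 0.999)
```

The same mirroring could be wrapped in a `ReverseLogUniformParameterRange` subclass so the optimizer-facing interface stays uniform.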