allegroai / clearml


HyperParameterOptimizer: Add support for logarithmic and reverse logarithmic float/int hyperparams #244

Status: Open · iirekm opened this issue 3 years ago

iirekm commented 3 years ago

They are both needed: logarithmic e.g. for learning-rate hyperparams, reverse logarithmic e.g. for the gamma parameter in reinforcement learning. A logarithmic int param can sometimes be good too, e.g. for the number of neurons in a layer.
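
For example, the difference between the two scales (the ranges here are just illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Logarithmic: e.g. a learning rate in [1e-5, 1e-1] -- sampling the
# exponent uniformly gives each decade equal probability.
lr = 10 ** rng.uniform(-5, -1)

# Reverse logarithmic: e.g. an RL discount gamma in [0.9, 0.9999] --
# sample 1 - gamma log-uniformly so values cluster near 1.
gamma = 1.0 - 10 ** rng.uniform(-4, -1)

# Logarithmic int: e.g. neurons per layer in [16, 1024].
n_neurons = int(round(2 ** rng.uniform(4, 10)))
```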

iirekm commented 3 years ago

E.g. how it's done in SageMaker (https://aws.amazon.com/blogs/machine-learning/amazon-sagemaker-automatic-model-tuning-now-supports-random-search-and-hyperparameter-scaling/)

bmartinn commented 3 years ago

Hi @iirekm, do you know if Optuna, for example, supports it? The other option is that we do the linear-to-log scaling internally and expose only the linear scale to the external implementations ...

iirekm commented 3 years ago

Optuna has log hyperparameters (just add log=True to suggest_xxxx); unfortunately it doesn't have reverse log, which has to be simulated with a 1 - log parameter.
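
A minimal Optuna sketch of both cases (the ranges and the training function are illustrative assumptions):

```python
import optuna

def objective(trial):
    # Logarithmic scale: supported natively via log=True.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)

    # Reverse logarithmic scale: not supported natively, so sample
    # 1 - gamma log-uniformly and transform, clustering gamma near 1.
    gamma = 1.0 - trial.suggest_float("one_minus_gamma", 1e-4, 1e-1, log=True)

    return train_and_evaluate(lr, gamma)  # hypothetical training function
```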

bmartinn commented 3 years ago

Hmm, that means we would have to simulate the entire feature so it is available to all optimizers.

I guess we could inherit from UniformParameterRange and just log/exp the parent choice.
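
Something along these lines, perhaps (a rough sketch, assuming UniformParameterRange's get_value() / to_list() return {name: value} mappings as in clearml.automation; not a finished implementation):

```python
from clearml.automation import UniformParameterRange

class LogUniformParameterRange(UniformParameterRange):
    """Sample the exponent uniformly, then exponentiate, so the value is
    log-uniform in [base**min_value, base**max_value]."""

    def __init__(self, name, min_value, max_value, base=10, step_size=None,
                 include_max_value=True):
        # min_value / max_value are exponents, e.g. -5..-1 -> 1e-5..1e-1
        super().__init__(name, min_value, max_value, step_size=step_size,
                         include_max_value=include_max_value)
        self.base = base

    def get_value(self):
        # Exponentiate the uniformly sampled exponent from the parent class
        return {self.name: self.base ** v for v in super().get_value().values()}

    def to_list(self):
        # Apply the same transform to the parent's discrete grid
        return [{self.name: self.base ** v[self.name]} for v in super().to_list()]
```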