automl / ConfigSpace

Domain specific language for configuration spaces in Python. Useful for hyperparameter optimization and algorithm configuration.
https://automl.github.io/ConfigSpace/

No more quantized hyperparameters? #390

Open bpkroth opened 3 months ago

bpkroth commented 3 months ago

> #346 removed quantization
> No more quantized hyperparameters

Can I ask what the reasoning behind this was, and can it be restored? That was a very useful feature, especially for systems tuning applications.

eddiebergman commented 3 months ago

Hi @bpkroth,

The reason behind it was simply that we didn't know of anyone using these features, and they had surprising and inconsistent behaviours. They complicated a lot of internal code, and given the limited manpower we can put towards maintaining things, this was a feature we removed. The plan was to eventually re-introduce them if we had requests for it, which it seems you do.

This should be a lot easier now, since the notion of the distribution is separated from the concern of transforming between the vectorized space, e.g. (0, 1) or [0, ..., N], and the actual value space, e.g. (-10, 243) or [cat, dog, ..., mouse].

Doing so would essentially involve an integer distribution paired with a Transformer that maps values from this integer vectorized space into the value space.

For non-uniform integer distributions, one of the existing distributions could be reused, and likewise for uniform distributions.
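
To make that concrete, here is a rough, self-contained sketch of the idea (plain Python/numpy, not the actual ConfigSpace internals; all names are illustrative): a uniform integer distribution over a vectorized space of bins, plus a transformer that maps each bin onto a quantized value.

```python
# Illustrative sketch only -- not ConfigSpace internals.
import numpy as np

lower, upper, q = 0.0, 100.0, 10.0     # value-space bounds and quantization step
n_bins = int((upper - lower) / q) + 1  # size of the integer vectorized space

def transform(vector: np.ndarray) -> np.ndarray:
    """Integer vectorized space [0, n_bins) -> quantized value space."""
    return lower + vector * q

def inverse_transform(values: np.ndarray) -> np.ndarray:
    """Quantized value space -> integer vectorized space."""
    return np.round((values - lower) / q).astype(int)

rng = np.random.default_rng(0)
bins = rng.integers(0, n_bins, size=5)  # uniform integer distribution over the bins
print(transform(bins))                  # multiples of q within [lower, upper]
```

A non-uniform distribution over the same bins would only change how `bins` is sampled; the transformer stays the same.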

When they come back, they would most likely come back as separate hyperparameters, for example a QUniformFloat or a QUniformInt. I believe trying to handle everything in one base class leads to a lot of edge cases and messiness.

I can't give you a concrete timeline on this, but I'm happy to review any PRs for it. If you only need a workaround and you are wrapping ConfigSpace, then applying a custom transformation to an existing integer hyperparameter is your best bet.
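
For anyone needing a stopgap, a minimal sketch of that workaround might look like the following (the step size `q` and the `quantize` helper are our own names, not part of the ConfigSpace API, and method names such as `add` may differ slightly between ConfigSpace versions):

```python
import numpy as np
from ConfigSpace import ConfigurationSpace, UniformIntegerHyperparameter

cs = ConfigurationSpace(seed=42)
cs.add(UniformIntegerHyperparameter("buffer_mb", lower=0, upper=1024))

q = 16  # desired quantization step in the value space

def quantize(value: int, step: int) -> int:
    """Snap a sampled value to the nearest multiple of `step`."""
    return int(np.round(value / step) * step)

# Sample as usual, then apply the custom transformation in the wrapper layer.
config = dict(cs.sample_configuration())
config["buffer_mb"] = quantize(config["buffer_mb"], q)
print(config)  # "buffer_mb" is now a multiple of 16
```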

bpkroth commented 3 months ago

@motus when you get a moment, can you please comment here with some of your observations about the Ordinal workaround issues?

bpkroth commented 3 months ago

@eddiebergman thanks for the pointers. We'll take a look and see if we can make something work. For now, we're toying with a monkeypatch workaround in our wrappers; though not our favorite, it seems like it might be workable. https://github.com/microsoft/MLOS/pull/833

BogueUser commented 3 months ago

> The reason behind it was simply that we didn't know of anyone using these features, and they had surprising and inconsistent behaviours. They complicated a lot of internal code, and given the limited manpower we can put towards maintaining things, this was a feature we removed. The plan was to eventually re-introduce them if we had requests for it, which it seems you do.

@eddiebergman Ray Tune uses the removed quantization option for the BOHB algorithm, so it was actually in use somewhere. I can't seem to find many people talking about it not working since the change, though, so I guess it wasn't too heavily used.

bpkroth commented 3 months ago

> > The reason behind it was simply that we didn't know of anyone using these features, and they had surprising and inconsistent behaviours. They complicated a lot of internal code, and given the limited manpower we can put towards maintaining things, this was a feature we removed. The plan was to eventually re-introduce them if we had requests for it, which it seems you do.
>
> @eddiebergman Ray Tune uses the removed quantization option for the BOHB algorithm, so it was actually in use somewhere. I can't seem to find many people talking about it not working since the change, though, so I guess it wasn't too heavily used.

I have an intern who was actually using that too.