Neuraxio / Neuraxle

The world's cleanest AutoML library ✨ - Do hyperparameter tuning with the right pipeline abstractions to write clean deep learning production pipelines. Let your pipeline steps have hyperparameter spaces. Design steps in your pipeline like components. Compatible with Scikit-Learn, TensorFlow, and most other libraries, frameworks and MLOps environments.
https://www.neuraxle.org/
Apache License 2.0

Feature: RecursiveDict.compress() to shorten paths to steps and their hyperparams #486

Closed: guillaume-chevalier closed this issue 2 years ago

guillaume-chevalier commented 3 years ago

**Is your feature request related to a problem? Please describe.**
Hyperparameter names become too long in deeply nested steps.

**Describe the solution you'd like**
A way to compress the names to make them shorter. More specifically, I think an automated algorithm that works for any existing ML pipeline could be built. It would be used something like this:

all_hps = pipeline.get_hyperparams()
all_hps_shortened = all_hps.compress()
pprint(all_hps_shortened)

Then we'd see something like this in the pprint:

{
    "*__MetaStep__*__SKLearnWrapper_LinearRegression__C": 1000,
    "*__SomeStep__hyperparam3": value,
    "*__SKLearnWrapper_BoostedTrees__count": 10
}

That is, the unique paths to the steps are compressed using the star (`*`) operator, which means "one or more steps in between". The compression would be lossless, in the sense that the original names could ALWAYS be recovered given the original pipeline's tree structure.
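As a rough, hypothetical sketch (not Neuraxle API), one lossless scheme is to keep the shortest path suffix that is unique among all keys and prefix it with `*__` to stand for the omitted ancestors. Unlike the examples above, this sketch only supports a single leading wildcard; `compress_paths` is an assumed name:

from typing import Any, Dict, List

def compress_paths(flat_hps: Dict[str, Any]) -> Dict[str, Any]:
    """Shorten '__'-separated paths to their shortest unique suffix.
    A leading '*__' stands for the omitted parent steps. Sketch only."""
    split_keys: List[List[str]] = [k.split("__") for k in flat_hps]
    compressed: Dict[str, Any] = {}
    for parts in split_keys:
        # Grow the suffix until no other full path ends the same way.
        for n in range(1, len(parts) + 1):
            suffix = parts[-n:]
            if not any(p != parts and p[-n:] == suffix for p in split_keys):
                break
        short = "__".join(suffix)
        # No wildcard is needed when the whole path was kept.
        key = short if n == len(parts) else "*__" + short
        compressed[key] = flat_hps["__".join(parts)]
    return compressed

Since each kept suffix is unique among the full paths, the original names can always be recovered by matching against the pipeline's tree, which is the lossless property described above.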

**Describe alternatives you've considered**
Using custom ways to strip words out and compress them. That seems fine, but it doesn't generalize to every pipeline that could exist.

**Additional context**
Hyperparameter names were also reported as too long in #478.

**Additional idea**
Given that in the future every model may need to name its expected hyperparams, it may be possible to use a hyperparameter's name alone, directly, when no other step has a hyperparam with the same name. If another step does use the same name, compression with the `*` could go up the tree until it reaches the first non-common parent name, or something along those lines.

More ideas are needed to be sure we do this the right way.

Rohith295 commented 3 years ago

I would like to work on this issue.

guillaume-chevalier commented 3 years ago

Idea 1:

A list of dicts, each containing a step name and/or its hyperparameter(s), plus the list of parents. The parents could be removed for printing, and the ordering is preserved (see the sketch in the next comment).

Idea 2:

Before: 
pipeline__a__predictor1__IncrementalFitCausalityModel__hp1
pipeline__a__predictor2__IncrementalFitCausalityModel__hp1
pipeline__b__predictor1__IncrementalFitCausalityModel__hp1

After: 
*a__predictor1*hp1: 435
*a__predictor2*hp1: 435
*b*hp1: 234

Or, for instance, this alternative version of After:
*a*predictor1*hp1: 435
*predictor2*hp1: 435
*b*hp1: 234
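For instance, running the three "Before" paths through the `compress_paths` sketch from the first comment gives something close to the first "After" form, though longer, since that sketch cannot put a wildcard in the middle of a path:

flat = {
    "pipeline__a__predictor1__IncrementalFitCausalityModel__hp1": 435,
    "pipeline__a__predictor2__IncrementalFitCausalityModel__hp1": 435,
    "pipeline__b__predictor1__IncrementalFitCausalityModel__hp1": 234,
}
print(compress_paths(flat))
# {'*__a__predictor1__IncrementalFitCausalityModel__hp1': 435,
#  '*__predictor2__IncrementalFitCausalityModel__hp1': 435,
#  '*__b__predictor1__IncrementalFitCausalityModel__hp1': 234}

Supporting a mid-path wildcard, as in *a__predictor1*hp1, would shorten these further.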
Rohith295 commented 3 years ago

One possible compressed format:


all_hps_shortened = all_hps.compress()
print(all_hps_shortened)
[
    {
        "step_name": "step1",
        "hp": {},  # this step's hyperparameters
        "parents": [""]  # or a single "_"-joined string
    },
    {
        "step_name": "step2",
        "hp": {},
        "parents": [""]
    },
]

**If `trim_parents` is True**, the parents are dropped:

all_hps_shortened = all_hps.compress(trim_parents=True)
print(all_hps_shortened)
[
    {
        "step_name": "step1",
        "hp": {},
    },
    {
        "step_name": "step2",
        "hp": {},
    },
]
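A minimal sketch of this proposal, assuming flat `__`-separated keys whose last segment is the hyperparam name and whose second-to-last segment is the step name (the function below is an illustration, not Neuraxle API):

from typing import Any, Dict, List, Tuple

def compress(flat_hps: Dict[str, Any], trim_parents: bool = False) -> List[dict]:
    """Group a flat hyperparam dict into one entry per step,
    as in the compressed format proposed above. Sketch only."""
    by_step: Dict[Tuple[str, ...], dict] = {}
    for path, value in flat_hps.items():
        *parents, step_name, hp_name = path.split("__")
        key = tuple(parents + [step_name])  # disambiguates same-named steps
        entry = by_step.setdefault(
            key, {"step_name": step_name, "hp": {}, "parents": parents})
        entry["hp"][hp_name] = value
    entries = list(by_step.values())
    if trim_parents:
        for entry in entries:
            del entry["parents"]
    return entries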
guillaume-chevalier commented 3 years ago

@Rohith295 Perfect! I like `compress(trim_parents=True)`, which would call remove_parents directly, as in Idea 1 :smiley:

guillaume-chevalier commented 3 years ago

It would be interesting to have this as well: `CompressedHyperparameterSamples.restore() -> HyperparameterSamples`
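For the list-of-dicts format sketched above, and assuming parents were kept (trim_parents=False), restore() could simply re-join the paths; with trimmed parents it would instead need the original pipeline's tree. A hypothetical sketch:

from typing import Any, Dict, List

def restore(entries: List[dict]) -> Dict[str, Any]:
    """Inverse of the compress() sketch above: rebuild the flat dict."""
    flat: Dict[str, Any] = {}
    for entry in entries:
        prefix = "__".join(entry["parents"] + [entry["step_name"]])
        for hp_name, value in entry["hp"].items():
            flat[prefix + "__" + hp_name] = value
    return flat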

guillaume-chevalier commented 2 years ago

Completed using the `use_wildcards` argument, as in `RecursiveDict.to_flat_dict(use_wildcards=True)`.
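A usage sketch (the exact wildcard format of the output, and possibly the import path, may differ between Neuraxle versions):

from neuraxle.hyperparams.space import HyperparameterSamples

# HyperparameterSamples is a RecursiveDict subclass.
hps = HyperparameterSamples({
    "Pipeline__SomeStep__hyperparam3": 0.5,
})
print(hps.to_flat_dict(use_wildcards=True))
# e.g. a shortened key such as '*__hyperparam3', depending on the version.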