SimonBlanke / Gradient-Free-Optimizers

Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.
https://simonblanke.github.io/gradient-free-optimizers-documentation
MIT License

Support Keyword Arguments as non-optimizable parameters in the optimization goal function #36

Closed StHowling closed 1 year ago

StHowling commented 1 year ago

Is your feature request related to a problem? Please describe.

Hi, I came across this repository when I searched GitHub for a Hill Climbing optimization algorithm written in Python, to help determine the optimal weights of base classifiers in a customized ensemble machine learning model. I really appreciate the work you've done in implementing so many common optimizers with such a neat API. However, the API seems oversimplified when the objective is a complicated function that depends on the output of other models and cannot be pre-determined or hard-coded. I've checked the given machine learning example, where the data inputs X and y are provided as global variables, which is not applicable in practice. Inserting the inputs into the search space so that they show up in para would not help either, since we do not actually want to search over them.
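For reference, the pattern from the docs looks roughly like this minimal sketch (the sklearn model and dataset here are stand-ins, not the exact example from the docs): X and y have to live at module level for the objective function to see them.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from gradient_free_optimizers import HillClimbingOptimizer

X, y = load_iris(return_X_y=True)  # module-level data, visible to the objective function

def objective_function(para):
    model = DecisionTreeClassifier(max_depth=para["max_depth"])
    return cross_val_score(model, X, y, cv=3).mean()

search_space = {"max_depth": np.arange(1, 21)}

opt = HillClimbingOptimizer(search_space)
opt.search(objective_function, n_iter=30)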

Describe the solution you'd like

The new API is expected to look like:

def objective_function(para, *args, **kwargs):
    # *args / **kwargs carry the non-optimizable arguments
    score = para["x1"] * para["x1"]
    # interact the score with the non-optimizable arguments here
    return score

As for the search function, it would be good if it supported both positional arguments (passed as a tuple) and keyword arguments (passed as keywords or a dict), similar to pandas.DataFrame.apply() or multiprocessing.Pool.apply_async().

opt = RandomSearchOptimizer(search_space)
opt.search(objective_function, kwds={"X": X_train, "y": y_train}, n_iter=100000)

Describe alternatives you've considered

Currently I've come up with a workaround using a wrapper and a nonlocal statement, but I haven't checked whether it works as expected.

import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer

# as a class method
def wrapper(self, *non_optimizable_params):
    # prepare the variables that contribute to the score to optimize here
    foo = ...  # e.g. predictions of the base classifiers

    def objective_function(params):
        nonlocal foo  # only needed if foo were reassigned; reads already work through the closure
        score = foo[...] * params["w1"]  # placeholder: combine foo with the searched weights
        return score

    search_space = {
        "w1": np.linspace(0, 1, 20),
        # ...
    }

    opt = HillClimbingOptimizer(search_space)
    opt.search(objective_function, n_iter=50)
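A related sketch that avoids nonlocal entirely: since opt.search() only needs a callable that takes para, the non-optimizable data can be bound up front with functools.partial. The toy data and objective below are made up for illustration.

from functools import partial

import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer

def objective_function(para, X, y):
    # X and y are non-optimizable inputs, bound in advance via functools.partial
    preds = X @ np.array([para["w1"], 1.0 - para["w1"]])  # toy "ensemble" of two columns
    return -np.mean((preds - y) ** 2)  # GFO maximizes, so negate the error

search_space = {"w1": np.linspace(0, 1, 20)}

X_train = np.random.rand(100, 2)
y_train = X_train @ np.array([0.3, 0.7])

opt = HillClimbingOptimizer(search_space)
opt.search(partial(objective_function, X=X_train, y=y_train), n_iter=200)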
SimonBlanke commented 1 year ago

Hello @StHowling,

thank you for this detailed explanation! It was very helpful for understanding your train of thought.

The solution for your problem cannot be found in Gradient-Free-Optimizers but in Hyperactive, which is built on top of Gradient-Free-Optimizers and specializes in machine-learning optimization. The feature you require was requested in issue SimonBlanke/Hyperactive#42. Just search for the pass_through-parameter in the API documentation to find out more :-)
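For context, a rough sketch of what this could look like with pass_through: the way the passed values are read inside the objective function (opt.pass_through below) is an assumption based on the Hyperactive docs and may differ between versions, and X_train / y_train are assumed to already exist.

import numpy as np
from hyperactive import Hyperactive

def objective_function(opt):
    X = opt.pass_through["X"]  # non-optimizable data handed in via pass_through (assumed access pattern)
    y = opt.pass_through["y"]
    score = ...  # combine opt["w1"] with X and y into the ensemble score here
    return score

search_space = {"w1": list(np.linspace(0, 1, 20))}

hyper = Hyperactive()
hyper.add_search(
    objective_function,
    search_space,
    n_iter=50,
    pass_through={"X": X_train, "y": y_train},
)
hyper.run()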

If you want to learn more about optimizing ensembles with Hyperactive you can check out this example.

If this is the solution you require, you can close this issue. Feel free to open another issue in Hyperactive or Gradient-Free-Optimizers if you have other questions or feature requests.

FYI: I want to keep Gradient-Free-Optimizers simple and general-purpose, so that it mostly acts as an optimization backend for other projects.

StHowling commented 1 year ago

Thanks for the explanation, I'll just switch to Hyperactive then.