Closed. npschweizer closed this issue 2 years ago.
I have never used an algorithm like SuperLearner, but it appears to be a model ensemble optimization technique, which is a rather more complicated problem than plain function optimization. Typically, a "function" consists of the inputs (hyperparameters) and the output (a validation score). To use an ensemble technique with Bayesian optimization, you would need to condense the ensemble into such a function: things like model weights and hyperparameters could be passed as arguments. However, this gets complicated if you want to vary the number of models or the types of models used.
Optimizing a model ensemble is possible with Bayesian optimization; the ensemble being optimized just needs to be defined as a function, which may be harder than it appears at first glance.
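For what it's worth, here is a minimal sketch of what "condensing the ensemble into a function" might look like: two fixed base learners blended by a continuous weight, with the weight and the base-model hyperparameters exposed as the function's arguments and a cross-validated score as its output. The model choices, parameter names, and bounds are placeholders rather than a recommendation, and the optimizer call at the end assumes an interface like the bayes_opt package's `BayesianOptimization`; swap in whatever optimizer you are actually using.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Toy data standing in for your real problem.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def ensemble_auc(weight, C, max_depth):
    """Blend two base learners and return a validation score.

    `weight`, `C`, and `max_depth` are the continuous inputs the optimizer
    searches over; the return value is the single number it maximizes.
    """
    lr = LogisticRegression(C=C, max_iter=1000)
    rf = RandomForestClassifier(
        n_estimators=100, max_depth=int(round(max_depth)), random_state=0
    )
    # Out-of-fold predicted probabilities for each base model.
    p_lr = cross_val_predict(lr, X, y, cv=3, method="predict_proba")[:, 1]
    p_rf = cross_val_predict(rf, X, y, cv=3, method="predict_proba")[:, 1]
    # A fixed-form "meta model": a convex combination of the two base models.
    blended = weight * p_lr + (1.0 - weight) * p_rf
    return roc_auc_score(y, blended)

# Hand the function to a Bayesian optimizer (bayes_opt-style interface assumed).
from bayes_opt import BayesianOptimization

optimizer = BayesianOptimization(
    f=ensemble_auc,
    pbounds={"weight": (0.0, 1.0), "C": (0.01, 10.0), "max_depth": (2, 10)},
    random_state=1,
)
optimizer.maximize(init_points=5, n_iter=20)
print(optimizer.max)
```

Extending this past a fixed-form blend is where it gets awkward, as noted above: varying the number of models or the model types would have to be encoded as extra (often discrete) arguments of the same function.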
Hey there,
Do you have any knowledge or experience of how your package might be used with a stacking algorithm like SuperLearner, which includes sub-optimized versions of the model in the final product? It seems to me that there are many ways this kind of algorithm could be run with a base/meta model combination, but I'm curious to hear what your experience is. If you do know anything, maybe we could write it into the README?
Thanks in advance!