stanfordmlgroup / ngboost

Natural Gradient Boosting for Probabilistic Prediction
Apache License 2.0

Monotonicity of some parameters in distribution #338

Open thomasfrederikhoeck opened 8 months ago

thomasfrederikhoeck commented 8 months ago

For some datasets (typically when modeling physical properties) one knows that a monotone constraint can be applied between a feature and the prediction, which can help reduce noise and ensure meaningful relative predictions.

In the point-prediction world this can be done with a model like HistGradientBoostingRegressor using the monotonic_cst setting (see link).
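For concreteness, scikit-learn takes one constraint flag per feature (1 = non-decreasing, -1 = non-increasing, 0 = unconstrained); a quick toy example:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

# Per-feature flags: non-decreasing in feature 0, unconstrained in
# feature 1, non-increasing in feature 2.
model = HistGradientBoostingRegressor(monotonic_cst=[1, 0, -1])
model.fit(X, y)
```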

When modeling with a parameterized distribution (as ngboost does), one would probably want to apply this constraint to only some of the distribution's parameters, e.g. to the loc of a Normal while leaving the scale unconstrained. How would one go about using base learners with different settings for different parameters?

alejandroschuler commented 8 months ago

Oh that's an interesting idea. Right now I don't think it can be done, but it wouldn't be very hard to modify the code to allow it. Tbh it's something ChatGPT could probably tackle! Feel free to put in a PR.

thomasfrederikhoeck commented 7 months ago

@alejandroschuler just for my understanding, then: the distribution parameters do not need to share a base learner - they just do right now because there was no use case for them to be different?

alejandroschuler commented 7 months ago

Yep!
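For anyone who wants to try this, here is a minimal sketch of what such a modification might look like. It assumes ngboost's `fit_base` fits one clone of `self.Base` per column of the gradient matrix (one column per distribution parameter), which matches the source at the time of writing; the class name `PerParamNGBRegressor` and the `bases` argument are invented for illustration. Note also that ngboost *subtracts* the scaled base-learner outputs when updating parameters, so the constraint sign on the gradient fit may need to be the opposite of the one you want on the parameter itself - worth double-checking against the version you run.

```python
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
from ngboost import NGBRegressor
from ngboost.distns import Normal

class PerParamNGBRegressor(NGBRegressor):
    """Hypothetical NGBRegressor variant: one base learner per parameter.

    `bases` holds one unfitted sklearn regressor per distribution
    parameter, in the distribution's internal order (Normal: loc,
    then log-scale).
    """

    def __init__(self, bases, **kwargs):
        self.bases = bases
        super().__init__(**kwargs)

    def fit_base(self, X, grads, sample_weight=None):
        # Each column of `grads` is the (natural) gradient for one
        # distribution parameter; fit it with that parameter's learner.
        models = [
            clone(base).fit(X, g, sample_weight=sample_weight)
            for base, g in zip(self.bases, grads.T)
        ]
        fitted = np.array([m.predict(X) for m in models]).T
        self.base_models.append(models)
        return fitted

# Constrain the loc fit; leave the (log-)scale fit unconstrained.
# NB: parameter updates subtract the base-learner output, so -1 here
# is intended to make loc non-decreasing in feature 0.
bases = [
    HistGradientBoostingRegressor(monotonic_cst=[-1, 0, 0]),
    DecisionTreeRegressor(max_depth=3),
]
ngb = PerParamNGBRegressor(bases=bases, Dist=Normal)
```

Because the constraint applies to each per-stage learner's output function (regardless of what targets it is fit to), every stage's contribution to the constrained parameter is monotone in the feature, and a sum of same-direction monotone functions stays monotone.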