elephaint / pgbm

Probabilistic Gradient Boosting Machines
Apache License 2.0

Why monotone_constraints is set as a parameter of fit() rather than PGBMRegressor itself? #8

Closed flippercy closed 2 years ago

flippercy commented 2 years ago

Hi @elephaint:

I just realized that for PGBM, monotone_constraints is set as a parameter of fit(), while monotone_iterations is a parameter of PGBMRegressor itself.

Is there any reason to separate them instead of also including monotone_constraints in PGBMRegressor, as the scikit-learn wrappers of xgboost/lightgbm do? The current approach makes it difficult to use PGBM on HPO/AutoML platforms such as FLAML.
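For illustration, here is a minimal, generic sketch (using a plain sklearn estimator, not pgbm itself) of why fit()-only parameters are hard for such tools: sklearn-style tuners work through get_params()/set_params()/clone(), which only ever see constructor arguments.

```python
# Generic sklearn example, not pgbm-specific: the tunable surface of an
# estimator is whatever get_params() exposes, i.e. its __init__ arguments.
from sklearn.base import clone
from sklearn.linear_model import Ridge

est = Ridge(alpha=1.0)
print(est.get_params())                   # {'alpha': 1.0, ...} <- what a tuner can search over
candidate = clone(est).set_params(alpha=0.5)   # how HPO frameworks propose a new candidate
# Anything passed only as fit(X, y, some_kwarg=...) is invisible to this mechanism,
# so a search tool like FLAML cannot include it in its search space.
```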

Best,

Yu Cao

elephaint commented 2 years ago

Hi,

Sorry for the late reply (holidays). I think I experienced issues with the sklearn_regressor when including it in fit, but I'll look into it today. I agree with your observation that it's better to include it in the initializer so that hyperparameter optimization can be done over it.

elephaint commented 2 years ago

Hi,

I released a new version that fixes this issue - monotone_constraints is now part of the initializer of PGBMRegressor rather than a parameter of .fit().
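A rough sketch of the updated usage (the import path and the other keyword arguments are assumptions and may differ per pgbm version; only the move of monotone_constraints into the initializer is what this release changes):

```python
# Sketch only: import path and extra kwargs may differ between pgbm versions.
from pgbm import PGBMRegressor            # newer versions may expose this under a submodule
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=3, random_state=0)

model = PGBMRegressor(
    n_estimators=100,                     # illustrative constructor argument
    monotone_constraints=[1, -1, 0],      # per-feature constraints, now set at init time
)
model.fit(X, y)                           # so HPO tools can tune the constraint via __init__
```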

Best

flippercy commented 2 years ago

Thanks a lot! I will check it later.