Closed StatMixedML closed 2 years ago
@StatMixedML
Can you not code a custom loss and include it in the optimization loop? If I have time later, I will paste some code.
Have a look here to see how I did it using the focal loss, which depends on two parameters, alpha and gamma. I used hyperopt as the optimization routine: https://github.com/jrzaurin/LightGBM-with-Focal-Loss/blob/master/utils/train_hyperopt.py
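The core of that approach can be sketched as a LightGBM-style custom objective that returns the gradient and Hessian of the binary focal loss. A minimal sketch (the function names are illustrative, not taken from the linked repo; derivatives are obtained numerically so no hand-derived formulas are needed):

```python
import numpy as np

def focal_loss_objective(alpha, gamma):
    """Build a LightGBM-style objective fobj(preds, labels) -> (grad, hess)
    for the binary focal loss with parameters alpha and gamma."""
    def fl(x, t):
        # x: raw scores, t: 0/1 labels; p = sigmoid(x)
        p = 1.0 / (1.0 + np.exp(-x))
        return -(alpha * t * (1.0 - p) ** gamma * np.log(p)
                 + (1.0 - alpha) * (1.0 - t) * p ** gamma * np.log(1.0 - p))

    def fobj(preds, labels, eps=1e-5):
        # central finite differences for gradient and Hessian
        grad = (fl(preds + eps, labels) - fl(preds - eps, labels)) / (2.0 * eps)
        hess = (fl(preds + eps, labels) - 2.0 * fl(preds, labels)
                + fl(preds - eps, labels)) / eps ** 2
        return grad, hess

    return fobj
```

Because alpha and gamma are plain closure arguments, hyperopt can search over them by building a fresh objective inside its own objective function and training a model with it. Note that LightGBM actually passes a Dataset as the second argument, so in practice you would unwrap the labels with `train_data.get_label()`.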
@javier-cazana Nice, let me go through it.
Also referring to the discussion we are having here: https://github.com/StatMixedML/XGBoostLSS/issues/8#issue-651409999
The paper "NGBoost: Natural Gradient Boosting for Probabilistic Prediction" (https://arxiv.org/pdf/1910.03225.pdf) assumes a parametric distribution p(y|x) and optimizes its parameters using gradient descent. Does this help? Ignore me if it does not.
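To make the paper's core idea concrete, here is a minimal numpy sketch (not NGBoost itself, which uses the natural gradient and boosted base learners) of fitting the parameters of a Normal distribution by plain gradient descent on the negative log-likelihood, with sigma parameterized as exp(s) so it stays positive:

```python
import numpy as np

def fit_normal_by_gd(y, lr=0.1, n_steps=2000):
    """Fit mu and sigma of N(mu, sigma) to samples y by gradient descent
    on the average negative log-likelihood; s = log(sigma)."""
    mu, s = 0.0, 0.0
    for _ in range(n_steps):
        resid = y - mu
        inv_var = np.exp(-2.0 * s)                    # 1 / sigma^2
        grad_mu = -np.mean(resid) * inv_var           # d NLL / d mu
        grad_s = 1.0 - np.mean(resid ** 2) * inv_var  # d NLL / d log(sigma)
        mu -= lr * grad_mu
        s -= lr * grad_s
    return mu, np.exp(s)
```

The maximum-likelihood solution is the sample mean and the (biased) sample standard deviation, so the loop converges there.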
@2533245542 Many thanks for the paper link.
I know the paper very well. The problem is that we need to find a proper way to translate LightGBM's model training into multi-parameter training. I am not quite sure how to do that. Any suggestions?
Closing since the issue is resolved.
Description
Dear community,
I am currently working on a probabilistic extension of LightGBM called LightGBMLSS that models all parameters of a distribution. This allows probabilistic forecasts to be created, from which prediction intervals and quantiles of interest can be derived.
The problem is that LightGBM doesn't permit optimizing over several parameters. Assume we have a Normal distribution y ~ N(µ, sigma). So far, my approach is a two-step procedure: I first optimize µ with sigma fixed, then optimize sigma with µ fixed, and then iterate between the two.
Since this is inefficient, is there any way to optimize both µ and sigma simultaneously using a custom loss function?
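One possible route (a sketch, not an official LightGBM feature) is to reuse the multiclass machinery: with `num_class=2` and a custom objective, LightGBM grows one tree per "class" per iteration, so the two outputs can be treated as µ and log σ, and the objective returns the gradient and Hessian of the Normal negative log-likelihood with respect to both. The per-observation derivatives, assuming the unconstrained parameterization s = log σ, are:

```python
import numpy as np

def normal_nll_grad_hess(mu, log_sigma, y):
    """Per-observation gradient and (diagonal) Hessian of the Normal
    negative log-likelihood w.r.t. mu and log(sigma) -- the quantities
    a multi-output custom objective must return."""
    inv_var = np.exp(-2.0 * log_sigma)        # 1 / sigma^2
    resid = y - mu
    grad_mu = -resid * inv_var                # d NLL / d mu
    grad_ls = 1.0 - resid ** 2 * inv_var      # d NLL / d log(sigma)
    hess_mu = inv_var                         # d2 NLL / d mu^2
    hess_ls = 2.0 * resid ** 2 * inv_var      # d2 NLL / d log(sigma)^2
    grad = np.stack([grad_mu, grad_ls], axis=1)
    hess = np.stack([hess_mu, hess_ls], axis=1)
    return grad, hess
```

Wrapped as `fobj(preds, train_data)`, with `preds` reshaped to `(n, 2)` and the results flattened back in the layout your LightGBM version expects, this updates both parameters in every boosting iteration instead of alternating between them.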