StatMixedML / LightGBMLSS

An extension of LightGBM to probabilistic modelling
https://statmixedml.github.io/LightGBMLSS/
Apache License 2.0

Binary Classification #5

Closed: firmai closed this issue 2 years ago

firmai commented 2 years ago

Cool piece of software, could it be used for estimating probabilistic uncertainty for binary classification tasks?

StatMixedML commented 2 years ago

@firmai Thanks for your interest in the project.

As of now, it is implemented for regression tasks only, since classifiers are probabilistic already, assigning a probability to each class.

ihopethiswillfi commented 2 years ago

Agreed. What I'd personally be interested in is a classifier that can output calibrated probabilities in one go, so you don't have to fit a calibrator (e.g. sklearn's CalibratedClassifierCV) on top of the already-trained model. I'm not sure if this is even possible, by the way. Just throwing it out here because @StatMixedML you really seem to know what you are doing :)
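
For context, a minimal sketch of the two-step workflow being described, assuming LightGBM's scikit-learn wrapper and scikit-learn's CalibratedClassifierCV; the dataset, split, and hyperparameters are purely illustrative:

```python
# Two-step workflow: train an (uncalibrated) LightGBM classifier,
# then fit a calibration map on held-out data with scikit-learn.
from lightgbm import LGBMClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, random_state=42)
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.3, random_state=42
)

base = LGBMClassifier(n_estimators=200)
base.fit(X_train, y_train)

# cv="prefit" reuses the already-trained model and fits only the
# calibration map (here isotonic regression) on the held-out split.
calibrated = CalibratedClassifierCV(base, method="isotonic", cv="prefit")
calibrated.fit(X_cal, y_cal)

p_calibrated = calibrated.predict_proba(X_cal)[:, 1]
```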

StatMixedML commented 2 years ago

@ihopethiswillfi Thanks for the clarification.

I definitely see the value of probability calibration as a tool for evaluating any classifier, but I am not sure how you would get that out of the specified likelihood.

Both XGBoostLSS and LightGBMLSS are trained to learn the parameters of a specified likelihood. For the binary case that would be a Bernoulli, and for multi-class classification a multinomial distribution. For the Bernoulli, there is only one parameter p (with q = 1 - p) that the model would learn. I don't see how to extend that to allow for calibration, but I'm very happy to hear your comments on this. It would be a nice extension.
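
To make the point concrete, here is a hedged sketch (not LightGBMLSS's actual code) of what "learning the parameter of a Bernoulli likelihood" amounts to in a boosting setting: the trees fit a raw score eta, p = sigmoid(eta) is the single distributional parameter, and the objective is the Bernoulli negative log-likelihood. The data and settings below are illustrative.

```python
import numpy as np
import lightgbm as lgb

def bernoulli_nll(preds, train_data):
    """Gradient and hessian of the Bernoulli negative log-likelihood,
    taken with respect to the raw score eta (preds are raw scores)."""
    y = train_data.get_label()
    p = 1.0 / (1.0 + np.exp(-preds))  # the single Bernoulli parameter p
    grad = p - y                      # d(-loglik)/d(eta)
    hess = p * (1.0 - p)              # d^2(-loglik)/d(eta)^2
    return grad, hess

# Illustrative usage with random data; in LightGBM >= 4.0 a custom
# objective is passed via params["objective"].
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(float)
dtrain = lgb.Dataset(X, label=y)
booster = lgb.train({"objective": bernoulli_nll, "verbosity": -1},
                    dtrain, num_boost_round=50)
p_hat = 1.0 / (1.0 + np.exp(-booster.predict(X)))  # predicted p
```

Note that this gradient/hessian pair is exactly LightGBM's built-in binary logloss, which illustrates why the likelihood itself leaves no extra parameter over which to calibrate.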

ihopethiswillfi commented 2 years ago

Unfortunately I have no idea either. Thanks for the quick response.

(by the way I'm not OP)

StatMixedML commented 2 years ago

@firmai Can we close the issue?

neverfox commented 1 year ago

I was thinking about (probabilistic) classification problems in the context of a meta-model for the distribution of the Bernoulli parameter p itself (perhaps a Beta). Yes, when you get a probability from a classifier, it parameterizes a Bernoulli and you can sample from it. But what if the probability is 1 (or 0)? Your samples will all be 1 (or 0) and have zero variance. What I think would be interesting is the ability to sample from the distribution of p rather than from the (Bernoulli) distribution parameterized by p. That would, I think, let you examine what your model might have predicted, within some credibility bounds. You'd be able to talk about the confidence in the model's predictions rather than the confidence in the model's outcomes given that the predictions are correct. Am I thinking about that correctly?
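
A hedged sketch of that idea, assuming a model that predicts the two parameters of a Beta distribution over p; the parameter values below are hypothetical outputs, purely for illustration:

```python
import numpy as np
from scipy.stats import beta

# Hypothetical predicted Beta parameters for one observation.
alpha_hat, beta_hat = 24.0, 3.0

# Point estimate of p (the Beta mean) and a 95% credible interval for p.
p_mean = alpha_hat / (alpha_hat + beta_hat)
lo, hi = beta.ppf([0.025, 0.975], alpha_hat, beta_hat)
print(f"p ~ {p_mean:.3f}, 95% credible interval [{lo:.3f}, {hi:.3f}]")

# Sampling p itself (rather than 0/1 outcomes) gives non-degenerate
# uncertainty even when the point estimate of p is close to 0 or 1.
p_samples = beta.rvs(alpha_hat, beta_hat, size=10_000, random_state=0)
print("std of sampled p:", p_samples.std())
```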