microsoft / LightGBM

A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
https://lightgbm.readthedocs.io/en/latest/
MIT License

Classifier predict numerical precision issue with large raw_score #6405

Open drblarg opened 8 months ago

drblarg commented 8 months ago

Description

I have a trained model with a binary objective using n_estimators=1000. The model performance (AUC) is quite good. I need the raw probabilities for selection by ranking. The probabilities provided by predict_proba or predict, however, have a very large number with value 0 or value 1 and an odd bowl-shaped distribution.

When I use raw_score=True, I get scores from about -11k to +135k with the expected distribution and no pile-up at the min or max. Applying a simple sigmoid to these raw scores reproduces the non-raw scores. This clearly shows that the numerical precision is insufficient to distinguish the very low and very high values, so they get flattened to 0 and 1 respectively.
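The flattening can be reproduced in isolation: in float64, 1 + exp(-x) rounds to exactly 1.0 once exp(-x) falls below machine epsilon (around x ≈ 37), so any raw score above that threshold maps to exactly 1.0. A minimal standalone sketch, not using LightGBM:

```python
import math

def sigmoid(x):
    # Naive sigmoid, as applied to the raw scores.
    return 1.0 / (1.0 + math.exp(-x))

# A raw score of 30 still yields a probability distinguishable from 1.0...
print(sigmoid(30.0) < 1.0)                 # True
# ...but beyond roughly 37, exp(-x) drops below float64 epsilon and the
# result is exactly 1.0, so raw scores of 40 and 135000 become ties.
print(sigmoid(40.0) == 1.0)                # True
print(sigmoid(40.0) == sigmoid(135000.0))  # True
```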

I believe the raw_score values should be normalized in some way before the sigmoid is applied, to avoid this problem. I have an old version of the model built with LightGBM v2 that does not have this issue (with nearly identical training data and parameters). Perhaps the old version averaged, rather than summed, the random forest raw scores before applying the sigmoid function, to avoid a dependence on the number of trees? That seems like the right approach.
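If averaging is indeed the difference, the idea can be sketched as post-processing outside the library (a hypothetical workaround, not a LightGBM API): divide the summed raw score by the number of trees before applying the sigmoid, so the sigmoid's argument stays in a range float64 can resolve.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical workaround, not a LightGBM API: average the summed raw
# score over the trees before applying the sigmoid.
def averaged_proba(raw_score, n_estimators):
    return sigmoid(raw_score / n_estimators)

# These summed raw scores saturate the sigmoid directly, but remain
# distinguishable from 1.0 after averaging over 1000 trees.
for raw in (2000.0, 20000.0, 30000.0):
    print(sigmoid(raw) == 1.0, averaged_proba(raw, 1000) < 1.0)  # True True

# Caveat: scores as extreme as the ~135k reported here would still
# saturate even after dividing by 1000 (sigmoid(135) == 1.0 in float64),
# so for pure ranking the raw scores themselves are the safest input.
```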

Environment info

LightGBM version or commit hash: 4.3.0, running on Python 3.11 in AWS (SageMaker)

jameslamb commented 8 months ago

Thanks for using LightGBM.

Are you able to share some minimal code showing precisely what you mean? I'm unsure how to interpret some of these statements like "a very large number with value 0 or value 1".

drblarg commented 8 months ago

I cannot share much in the way of specifics, but here is the workflow:

# X = features, y = known outcome

from sklearn.pipeline import Pipeline
import lightgbm

# sklearn's Pipeline takes a list of (name, estimator) steps
model_pipeline = Pipeline(
    [
        (
            "model",
            lightgbm.LGBMClassifier(
                objective="binary",
                boosting="rf",
                # rf mode also requires bagging to be enabled,
                # e.g. bagging_freq=1, bagging_fraction=0.8
                n_estimators=1000,
                # etc., mostly default values
            ),
        ),
    ]
)

model_pipeline = model_pipeline.fit(X, y)

scores = model_pipeline.predict_proba(X)[:,1]

scores are distributed from about 1e-5 to 1.0 in a bowl shape (high population at the min and max), with a large quantity having a value of exactly 1.0 (loss of ranking information).

If instead I look at:

scores_raw = model_pipeline.predict_proba(X, raw_score=True)

Then scores_raw is distributed from about -11000 to +136000 with a shape more resembling a decaying exponential, and no repeated values at the max score (no loss of ranking information). I can apply the basic sigmoid function to scores_raw to recover scores, which illustrates the numerical precision limit at the upper end. If the scores_raw distribution were first scaled down to something close to 1, the sigmoid would not run into numerical precision limitations, and the score ranking could again be used as intended.
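For the ranking use case specifically, the sigmoid is strictly increasing, so ordering by the raw scores is identical to ordering by the probabilities, with no ties introduced by saturation at 1.0. A small sketch with synthetic scores standing in for the real model output:

```python
import numpy as np

# Synthetic raw scores standing in for scores_raw; the sigmoid is
# strictly increasing, so their order equals the probability order.
scores_raw = np.array([-11000.0, -3.0, 0.0, 250.0, 136000.0])

# argsort of argsort turns scores into dense ranks (0 = lowest).
ranks = np.argsort(np.argsort(scores_raw))
print(ranks)  # [0 1 2 3 4]
```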

As I mentioned, a previous version of lightgbm did not behave in the current way, avoiding this problem.

jameslamb commented 8 months ago

Ok, so to clarify:

drblarg commented 8 months ago

Apologies, yes, I am using boosting="rf"; I have edited my previous comment to include that. I am also using the built-in binary loss function.