microsoft / LightGBM

A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
https://lightgbm.readthedocs.io/en/latest/
MIT License

What happens with missing values during prediction? #2921

Closed AlbertoEAF closed 4 years ago

AlbertoEAF commented 4 years ago

Hello,

Suppose I stick to zero_as_missing=false, use_missing=true. Can you explain what happens during prediction if there are missing values?

I read a bit of the code, but those parameters seem to be used only in training, not scoring.

The only references I found regarding missing values were:

  1. https://lightgbm.readthedocs.io/en/latest/Advanced-Topics.html#missing-value-handle
  2. https://www.kaggle.com/c/home-credit-default-risk/discussion/57918
  3. http://mlexplained.com/2018/01/05/lightgbm-and-xgboost-explained/

According to those sources, nulls are allocated during training to the side of each split that reduces the loss.

Is that true? And if so, what if there are no missing values in training?
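
For concreteness, here's a minimal sketch of the setup I mean (synthetic data, nothing special):

```python
import numpy as np
import lightgbm as lgb

# Train with NaNs present and use_missing=true / zero_as_missing=false
# (the defaults), so the model learns a direction for missing values.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
X[rng.random(500) < 0.1, 0] = np.nan  # ~10% missing in feature 0
y = (np.nan_to_num(X[:, 0]) + X[:, 1] > 0).astype(int)

params = {"objective": "binary", "use_missing": True, "zero_as_missing": False}
booster = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=10)

# Prediction also accepts NaN: rows with missing values score without error.
print(booster.predict(np.array([[np.nan, 0.3], [1.2, -0.5]])))
```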

guolinke commented 4 years ago

Of course, prediction handles missing values as well. Refer to https://github.com/microsoft/LightGBM/blob/9654f16528d2d1e3462a298106cb1fc6ed43cbac/include/LightGBM/tree.h#L527-L539

AlbertoEAF commented 4 years ago

Hello @guolinke, I have already read a bit, but I still have some doubts I can't resolve.

During training:

During prediction, when a missing value is found for a numerical field:

During prediction, when a missing value is found for a categorical field:

Thank you!

guolinke commented 4 years ago

(1) yes (2) yes (3) yes, by default left (4) It will be converted to zero; refer to https://github.com/microsoft/LightGBM/blob/a8c1e0a11a5cbfac62fb57d16901fea16de95412/include/LightGBM/tree.h#L260-L264 (5) for categorical features, the split is unordered (both {(1, 3), (2, 4, nan)} and {(2, 4, nan), (1, 3)} are possible for a categorical feature, but not for a numerical feature). Therefore, forcing the missing values to the right side is okay.
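
For readers who prefer not to parse the C++, a rough Python paraphrase of that per-node decision logic (just a sketch, not the actual implementation; `missing_type` and `default_left` are recorded per split at training time):

```python
import math

def numerical_decision(fval, threshold, missing_type, default_left):
    # If the model never learned a NaN rule for this feature,
    # a NaN at prediction time is converted to zero first.
    if math.isnan(fval) and missing_type != "nan":
        fval = 0.0
    # If the value counts as "missing" under the learned rule,
    # follow the default direction chosen during training.
    if (missing_type == "zero" and abs(fval) <= 1e-35) or \
       (missing_type == "nan" and math.isnan(fval)):
        return "left" if default_left else "right"
    # Otherwise it is an ordinary numerical comparison.
    return "left" if fval <= threshold else "right"
```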

AlbertoEAF commented 4 years ago

Sorry, but I got confused reading (3) & (4).

(3) yes, by default left

Just to be sure, you're saying that during scoring, missing values for a numerical split:

Correct? :) (a)


(4) It will be converted to zero; refer to

https://github.com/microsoft/LightGBM/blob/cc6a2f5ac3396ffdadba7d05a8d5957be4530ab8/include/LightGBM/tree.h#L260-L262 (link updated to the latest code)

That is only the case if we choose not to handle missing values (use_missing=false) or set zero_as_missing=true, right? (b)


I'm not sure I'm following your notation here:

(5) for categorical features, the split is unordered (both {(1, 3), (2, 4, nan)} and {(2, 4, nan), (1, 3)} are possible for a categorical feature, but not for a numerical feature). Therefore, forcing the missing values to the right side is okay.

guolinke commented 4 years ago

@AlbertoEAF for (3) and (4): only if use_missing=true and zero_as_missing=false, and NaN was seen in that feature during training, will the NaN be handled at prediction time; otherwise it is always converted to zero.
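
A quick way to check this (a sketch with synthetic data): train on a feature that never contained NaN, then score a NaN and a literal zero; per the rule above, the two predictions should coincide because the NaN is zero-converted first.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1)) + 3.0        # no NaN (and no zeros) in training
y = (X[:, 0] > 3.0).astype(int)

booster = lgb.train({"objective": "binary"}, lgb.Dataset(X, label=y),
                    num_boost_round=10)

p_nan = booster.predict(np.array([[np.nan]]))
p_zero = booster.predict(np.array([[0.0]]))
print(p_nan, p_zero)  # expected to be identical under the rule above
```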

For (5), yes.

AlbertoEAF commented 4 years ago

Thank you @guolinke, already understood those.

However, I don't understand how NaNs are allocated in the case where there were no missing values in training.

For numerical splits, as you explained, values below the split threshold must go to the left side and values above it to the right. Why, then, do we allocate missing values to the left by default?

As for categorical splits, since they have no order, what dictates whether a value goes to the left or the right during training at all?

guolinke commented 4 years ago

@AlbertoEAF Sorry, I don't really understand your question. I'll try to answer below.

"by default" doesn't means they are always in that way. For numerical features, both left side and right side are tested for missing values. For categorical features, as there are not "left" and "right", and we can put anything to "left". Therefore, the missing values are in right side always.

AlbertoEAF commented 4 years ago

Sorry @guolinke, let me try to articulate this a bit better! It looks huge, I know, but I hope it's simpler and more explicit :)

Assumptions and problem statement

My questions concern only the particular case where we didn't have missing values in training but do have missing values in scoring. This means the model has no prior information about the distribution of missing values.

Everything I say in the rest of the post assumes the paragraph above!

Seeing a missing value in scoring, the model will place the missing value:

  • on the L side for numerical splits (by default)
  • on the R side for categorical splits


Questions

Missing values with numerical features

For numericals, the L and R sides are ordered, where non-missing values respect:

  value ≤ threshold → L,  value > threshold → R

and during scoring we allocate the missing value to L by default because we have no prior information about missing values.

Question # 1:

  • We could have equally chosen to allocate to the right with the same odds of being correct, right?

Missing values with categorical features

For categoricals, missing values are placed on the R side at all times in this scenario.

However, I don't understand what happens for categoricals in training, and that is what determines the meaning of placing a missing value on the R side in scoring.

How is the L vs. R side controlled/chosen in training?

Let's assume we have a categorical feature whose values in training are A, B, and C.

To find the optimal split, LightGBM sorts by the objective function and finds that the optimal split is {A} vs {B or C}.

Question # 2: Which categories are now placed on each side?

AlbertoEAF commented 4 years ago

@guolinke can you clarify?

I think I have a proposal to improve scores on missing values, but first I need to know whether I understood the current algorithm :)

guolinke commented 4 years ago

@AlbertoEAF sorry for the late response, I have been very busy recently... The missing-value handling (unseen in training but seen in test) is easier for categorical features. For categorical features, we choose the seen categories as the split condition, and they always go to the left: for example, if x == A or x == C or x == F then left, else right. Therefore, it is straightforward to put missing values to the right.

For numerical features, if no missing value is seen in training, the missing value will be converted to zero and then checked against the threshold. So it does not always go to the left side.
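
Putting that categorical rule into a rough Python sketch (not the actual `tree.h` code; the real implementation stores the chosen categories in a bitset):

```python
import math

def categorical_decision(fval, split_categories):
    # split_categories: the categories chosen as the split condition during
    # training (always sent left), e.g. {A, C, F} in the example above.
    # NaN (or a category code the model never saw) cannot match the
    # "x == A or x == C or x == F" condition, so it falls through to the right.
    if math.isnan(fval) or int(fval) < 0:
        return "right"
    return "left" if int(fval) in split_categories else "right"

# Using integer codes for the categories, e.g. A=0, C=2, F=5:
print(categorical_decision(2.0, {0, 2, 5}))           # seen category -> left
print(categorical_decision(float("nan"), {0, 2, 5}))  # missing -> right
```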

AlbertoEAF commented 4 years ago

Thank you @guolinke , no problem!

Ok, finally got it thanks! :D

Basically, for categoricals you are always treating a missing value as belonging to the "other" (non-split) categories. That makes a lot of sense.

Regarding the numericals, that seems like imputation to the mean, but assuming only that large values are less likely than smaller ones. Would it be feasible to apply mean/median imputation based on the training data for that feature? Or even base it on already-computed training statistics, like the mode of the histogram?

Thanks :)

guolinke commented 4 years ago

Yeah, mean/median is a better solution than zero-fill. However, I think it is easy for the user to fill in the mean/median themselves. Maybe it is not worth adding this support, since we would need to record more statistical information in the model file.
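
For reference, that user-side workaround is short with pandas (a sketch; the column name is made up): compute the statistics on the training data only and reuse them at scoring time.

```python
import numpy as np
import pandas as pd

# Hypothetical data: "f0" had no missing values in training,
# but NaNs show up at scoring time.
rng = np.random.default_rng(0)
X_train = pd.DataFrame({"f0": rng.normal(loc=3.0, size=100)})
X_score = pd.DataFrame({"f0": [2.5, np.nan]})

train_medians = X_train.median()                # statistics from training data only
X_score_filled = X_score.fillna(train_medians)  # impute with the training median
print(X_score_filled)
```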

AlbertoEAF commented 4 years ago

I believe you are right, thank you so much for all the clarifications @guolinke :) I might open a pull request to the docs one of these days so people won't have the same doubts I had and you won't have to explain it again :)