Closed AlbertoEAF closed 4 years ago
Of course, prediction handles all missing values. Refer to https://github.com/microsoft/LightGBM/blob/9654f16528d2d1e3462a298106cb1fc6ed43cbac/include/LightGBM/tree.h#L527-L539
Hello @guolinke, I have already read a bit and still have some doubts I can't get past.
During training:
During prediction when a missing is found for a numerical field:
During prediction when a missing is found for a categorical field:
Thank you!
(1) Yes. (2) Yes. (3) Yes, by default left. (4) It will be converted to zero; refer to https://github.com/microsoft/LightGBM/blob/a8c1e0a11a5cbfac62fb57d16901fea16de95412/include/LightGBM/tree.h#L260-L264 (5) For categorical features, the split is unordered (both {(1, 3), (2, 4, nan)} and {(2, 4, nan), (1, 3)} are possible for a categorical feature, but not for a numerical feature). Therefore, forcing the missing values to the right side is okay.
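The zero-conversion behavior in the linked tree.h can be sketched in Python. This is a simplified, illustrative rendering of the control flow, not the real C++ API; `missing_type` and `default_left` stand in for the per-node properties the real tree stores:

```python
import math

def numerical_decision(fval, threshold, missing_type, default_left):
    # Simplified sketch of the numerical split decision in
    # include/LightGBM/tree.h (names are illustrative, not the C++ API).
    if math.isnan(fval) and missing_type != "nan":
        # NaN is converted to zero when the node was not trained with NaN handling
        fval = 0.0
    is_missing = (missing_type == "zero" and fval == 0.0) or \
                 (missing_type == "nan" and math.isnan(fval))
    if is_missing:
        # True missing handling: follow the learned default direction
        return "left" if default_left else "right"
    return "left" if fval <= threshold else "right"
```

Note how, under this sketch, a NaN at a node with no trained missing handling becomes 0.0 and can fall on either side depending on the threshold's sign.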
Sorry but I got confused reading (3) & (4).
(3) yes, by default left
Just to be sure, you're saying that during scoring, missing values for a numerical split:
Correct? :) (a)
(4) It will be converted to zero. refer to
https://github.com/microsoft/LightGBM/blob/cc6a2f5ac3396ffdadba7d05a8d5957be4530ab8/include/LightGBM/tree.h#L260-L262 (updated to latest code above)
That is only if we choose not to handle missings (handle_missing=false) or zero_as_missing=true, right? (b)
I'm not sure I'm following your notation here:
(5) For categorical features, the split is unordered (both {(1, 3), (2, 4, nan)} and {(2, 4, nan), (1, 3)} are possible for a categorical feature, but not for a numerical feature). Therefore, forcing the missing values to the right side is okay.
I understand that numerical splits are ordered in the sense that, given real values a < b < c, I can choose splits = [{a, b}, {c}] or [{a}, {b, c}], but not [{a, c}, {b}] nor [{c}, {a, b}].
Au contraire, for categorical splits, since there is no order to the elements, there is no order in the leaves either, and thus all 4 split options above are possible for categorical splits, correct? (c)
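The ordered-vs-unordered distinction can be sketched by enumerating the allowed partitions (hypothetical helper names, not LightGBM internals):

```python
from itertools import combinations

def numerical_splits(sorted_values):
    # Ordered feature: only contiguous prefix/suffix partitions are allowed.
    return [(sorted_values[:i], sorted_values[i:])
            for i in range(1, len(sorted_values))]

def categorical_splits(categories):
    # Unordered feature: any non-empty proper subset may go left.
    cats = list(categories)
    out = []
    for r in range(1, len(cats)):
        for left in combinations(cats, r):
            right = tuple(c for c in cats if c not in left)
            out.append((left, right))
    return out
```

For three values a < b < c, the numerical case yields exactly the 2 splits described above, while the categorical case yields all 6 ordered partitions.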
@AlbertoEAF
For (3) and (4): only if handle_missing=true and zero_as_missing=false, and nan was seen in that feature during training, will nan be handled at prediction time; otherwise it is always converted to zero.
For (5), yes.
Thank you @guolinke, already understood those.
However, I don't understand the choice of nan allocation in the case where there were no missing values in training.
For numerical splits, as you explained, values below the split threshold must go to the left side, and values above it to the right. Why then, by default, do we allocate missing values to the left?
As for categorical splits, since they have no order, what dictates whether we split a new value during training to the left or right at all?
@AlbertoEAF Sorry, I don't really understand your question. I try to answer below.
"by default" doesn't means they are always in that way. For numerical features, both left side and right side are tested for missing values. For categorical features, as there are not "left" and "right", and we can put anything to "left". Therefore, the missing values are in right side always.
Sorry @guolinke , let me try to articulate a bit better! Looks huge I know but I hope it's simpler and more explicit :)
My questions concern only the particular case where we didn't have missing values in train but have missing values in scoring. This means the model has no prior missing values distribution information.
Everything I say in the rest of the post assumes the paragraph above!
Seeing a missing value in scoring, the model will place the missing value to:
For numericals, the L and R sides are ordered, where non-missing values respect:
and during scoring we allocate the missing value to the L by default because we have no prior information about missing values.
Question # 1:
- We could have equally chosen to allocate to the right with the same odds of being correct, right?
For categoricals, missing values will be placed to the R side at all times in this scenario.
I don't understand, however, what happens for categoricals in training, and that is what determines the meaning of placing a missing value on the R side in scoring.
Let's assume that:
To find the optimal split, LightGBM sorts by objective function and finds that the optimal split is {A} vs {B or C}.
Question # 2: Which categories are now placed on each side?
@guolinke can you clarify?
I think I have a proposal to improve missing values scores, but first I need to know if I understood the current algorithm :)
@AlbertoEAF sorry for the late response, very busy recently...
The missing-value handling (unseen in training but seen in test) for categorical features is easier.
For categorical features, we choose the seen categories as the split condition, and they always go left: for example, if x == A or x == C or x == F then left, else right
. Therefore, it is straightforward to send missing values to the right.
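The "if x == A or x == C or x == F then left, else right" rule can be sketched directly (illustrative function, not LightGBM code; `None` stands in for a missing category):

```python
def categorical_decision(x, left_categories):
    # Categories seen in the split condition go left; anything else,
    # including missing/unseen categories, falls to the right.
    if x is None:  # missing value
        return "right"
    return "left" if x in left_categories else "right"
```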
For numerical features, if no missing value is seen in training, missing values are converted to zero at prediction time and then compared with the threshold, so they do not always go to the left side.
Thank you @guolinke , no problem!
Ok, finally got it thanks! :D
Basically, for categoricals you are always treating missing values as belonging to the "other" (non-split) categories. That makes a lot of sense.
Regarding the numericals, zero-fill seems like a crude imputation, one that assumes large values are less likely than smaller ones. Would it be feasible to apply mean/median imputation based on the training data for that feature? Or even base it on already-computed training statistics, like the mode of the histogram?
Thanks :)
Yeah, mean/median is a better solution than zero-fill. However, I think it is easy for users to fill in the mean/median themselves. It may not be worth adding this support, since we would need to record more statistical information in the model file.
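The user-side fix suggested above can be sketched with the standard library alone. The helper name is hypothetical; `None` marks a missing value, and the medians are computed on the training data and applied to the test rows before calling the model's predict:

```python
from statistics import median

def impute_train_median(X_train, X_test):
    # Fill test-time missing values (None) with per-feature medians
    # computed from the training data, before passing rows to predict().
    n_features = len(X_train[0])
    medians = [median(row[j] for row in X_train if row[j] is not None)
               for j in range(n_features)]
    return [[medians[j] if row[j] is None else row[j]
             for j in range(n_features)]
            for row in X_test]
```

The same idea works with `statistics.mean`, or with per-feature histogram modes if those statistics are kept around from training.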
I believe you are right, thank you so much for all the clarifications @guolinke :) I might open a merge request to the docs one of these days so people won't have the same doubts I had and you won't have to explain it again :)
Hello,
Suppose I stick to zero_as_missing=false, use_missing=true. Can you explain what happens during prediction if there are missing values? I read a bit of the code, but those parameters are only used in training, not scoring.
The only reference I saw in the documentation regarding missing values was:
According to those sources, nulls are allocated to the bins that reduce the loss during training.
Is that true? And if so, what if there are no missing values in training?