erykml opened 1 week ago
Thanks for using LightGBM, and for taking the time to put together an excellent reproducible example!
During tree-building, LightGBM evaluates multiple candidate "splits": (feature, threshold) pairs. For each candidate, it computes a "gain", essentially the improvement in the in-sample fit from splitting the data on that feature and threshold.
If multiple splits tie for the best gain, LightGBM simply chooses the first one, which generally means a split on a feature appearing earlier (lower column index, i.e. "further left") in the training data.
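This "first candidate wins ties" behavior can be sketched in a few lines. This is a simplified illustration, not LightGBM's actual implementation; the candidate tuples are made up for the example:

```python
# Sketch of first-wins tie-breaking: when two candidate splits have
# identical gain, the one encountered earlier (lower column index)
# is kept, because a later equal gain never replaces the current best.

def best_split(candidates):
    """candidates: list of (feature_index, threshold, gain) tuples,
    ordered by column index as the features appear in the data."""
    best = None
    for feat, thresh, gain in candidates:
        # strict '>' means an equal-gain candidate seen later
        # never displaces the current best
        if best is None or gain > best[2]:
            best = (feat, thresh, gain)
    return best

# Two features produce exactly the same gain:
cands = [(0, 1.5, 0.42), (1, -3.0, 0.42)]
print(best_split(cands))  # -> (0, 1.5, 0.42): the earlier column wins
```

Reordering the columns flips which feature index comes first, so the tie resolves to a different physical feature, and the resulting tree can differ from that point on.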
I've narrowed it down to a smaller example that reproduces the behavior, to help us focus on the root cause:
In that code snippet, notice that I also save the models out (in text format). I compared those in a text diff tool, and saw the following in the summary near the end:
The default "importance" reported there is "number of splits the feature is chosen for".
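To make that metric concrete: "split" importance is just a count of how many tree splits use each feature. A toy sketch (the split list below is invented for illustration, not taken from the models in this thread):

```python
# Hypothetical illustration of "split" importance: count how many
# splits across all trees were made on each feature.
from collections import Counter

# made-up sequence of features chosen for splits
splits_used = ["Longitude", "Latitude", "Longitude", "MedInc", "Longitude"]

importance = Counter(splits_used)
print(importance)  # Longitude: 3, Latitude: 1, MedInc: 1
```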
Notice that in the model where `Longitude` appears earlier in the feature list, it is chosen for 6 more splits. In the model where `Latitude` appears earlier, `Latitude` is chosen for 6 more splits.
I suspect there are some regions of the distribution where it's possible to draw a split on `Longitude` or `Latitude` that selects the exact same samples. You may have only observed this with what you called "params 2" because, in general, those parameters encourage LightGBM to grow more and deeper trees than it would by default:
- more trees: `num_iterations = 267` (default: 100)
- deeper trees: `num_leaves: 128` (default: 31), `min_data_in_leaf: 16` (default: 20)
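For reference, those settings as a LightGBM params dict (a sketch containing only the values quoted in this thread):

```python
# "params 2" as described above, with defaults noted in comments
params_2 = {
    "num_iterations": 267,   # default: 100 -> more trees
    "num_leaves": 128,       # default: 31  -> deeper trees
    "min_data_in_leaf": 16,  # default: 20  -> deeper trees
}
```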
Hi 🙂
I encountered some unexpected behavior and wanted to understand the reasoning behind it. The issue concerns the impact of column order on model predictions in a regression setup. I've seen similar questions on this topic and tried applying various suggestions to achieve deterministic results, but without success.
Below is a toy example with:
With the default hyperparameters (params 1), I get the same results regardless of column order. However, with the second set (params 2), the results are the same for feature set 1, while they differ for feature set 2: only one observation in the test set returns a different prediction.
Could you please help me understand where the difference is coming from? In my actual use case, the discrepancies are larger than in this toy dataset.
If you need any further details regarding the environment, please let me know :)
Env:
Toy example: