Open francescomandruvs opened 1 month ago
Does your model really have 2000 iterations with substantial improvements up until the end? Could you reduce 0.8 to something like 0.05? Maybe the 1600 iterations being selected always capture all of the relevant models (which wouldn't surprise me).
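One quick way to check this (a minimal, self-contained sketch using the same breast-cancer toy data as the snippet further down; the setup and variable names here are my own, not from the original posts): compare predictions at several num_iteration cutoffs. If the scores stop moving well before the final iteration, any sufficiently long shuffled prefix will produce nearly identical predictions.

import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = lgb.LGBMClassifier(n_estimators=200, objective='binary').fit(X_train, y_train)
total = model.booster_.current_iteration()

# If the mean/std of the predictions stops changing well before the last
# iteration, the late trees contribute little and a shuffled 80% prefix
# will look much like the full model.
for frac in (0.05, 0.25, 0.5, 0.8, 1.0):
    k = max(1, int(frac * total))
    p = model.booster_.predict(X_test, num_iteration=k)
    print(f"num_iteration={k:4d}  mean={p.mean():.4f}  std={p.std():.4f}")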
Thanks for your answer @tiagoleonmelo. I tried a lot of things: first a toy dataset for binary classification, then regression (house prices); I also tried predicting with a single tree at a time after shuffling, with very few leaves, and so on. Every combination gives me the same outcome. Could you please share your result on a toy dataset, just so I can see whether I'm missing something?
I was just now trying to reproduce what I initially posted and was getting to the same conclusion as you: all of them produced the same score.
Apparently I had a typo in my post: it should be num_iteration and not num_iterations (I already edited it in order not to mislead any more people in the future).
You should be able to get different predictions if you run this:
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

gbm = lgb.LGBMClassifier(
    **{
        'boosting_type': 'gbdt',
        'n_estimators': 200,
        'objective': 'binary',
        'tree_learner': 'serial',
    }
)
gbm.fit(X_train, y_train)

N = 100
alpha = 0.8
total_iterations = gbm.booster_.current_iteration()

# Predict a single test row N times, shuffling the tree order each time and
# using only the first alpha * total_iterations trees of the shuffled model.
X0 = X_test[0, :]
X0 = X0.reshape((1, X_test.shape[1]))

preds = []
for _ in range(N):
    pred = gbm.booster_.shuffle_models().predict(X0, num_iteration=int(alpha * total_iterations))
    preds += [pred[0]]

print(preds)
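For what it's worth, as far as I understand it each shuffle_models() call reorders the trees inside the booster, so predicting with num_iteration=int(alpha * total_iterations) sums over a different random subset of trees on every pass; that is where the spread in preds comes from. A small follow-up (my own sketch, not part of the original snippet) to summarize that spread into an empirical interval:

import numpy as np

# Empirical 95% interval for X0 from the N shuffled-prefix predictions.
# The 2.5/97.5 percentiles are an illustrative choice, not from the thread.
lower, upper = np.percentile(preds, [2.5, 97.5])
point = gbm.booster_.predict(X0)[0]  # full model; the sum over all trees is unaffected by their order
print(f"point={point:.4f}  LB={lower:.4f}  UB={upper:.4f}")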
It works. I've been working with a custom dataset that exhibits significant class imbalance, with around 90% of instances labeled as 0 and only 10% as 1. When attempting to compute a 95% confidence interval for predictions, I've noticed that the lower bound tends to be close to the actual prediction, while the upper bound tends to be disproportionately large for each prediction. Additionally, it appears that the magnitude of the lower and upper bounds correlates with the magnitude of the prediction itself:
| LB | Pred | UB |
|---|---|---|
| 0.012769 | 0.009298 | 0.137781 |
| 0.014389 | 0.010908 | 0.148293 |
| 0.024899 | 0.020048 | 0.208131 |
| 0.035270 | 0.031333 | 0.284767 |
| 0.052851 | 0.049575 | 0.355912 |
| 0.081938 | 0.081448 | 0.448467 |
| 0.101303 | 0.105715 | 0.510233 |
| 0.124052 | 0.138253 | 0.568761 |
So there seems to be a consistent trend: the model shows a roughly uniform level of uncertainty across predictions. In other words, there are no instances where the model is highly certain (a tight interval) for some predictions while being clearly less certain (a wide interval) for others.
Regarding the shuffle_models method, I'm uncertain about how it works and whether it offers any guarantees similar to those provided by conformal prediction methods. There may be an issue either with my implementation or with the dataset itself. I'd greatly appreciate your insights and advice on how to address this. Thank you!
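I'm not sure exactly how the table above was produced, and as far as I know the shuffled-subset approach is just an ensemble-resampling heuristic: it carries none of the finite-sample coverage guarantees that conformal prediction provides. For reference, here is a sketch of one way to get per-row LB/Pred/UB values with this approach (assuming a fitted LGBMClassifier gbm and a test matrix X_test as in the snippet above; the helper name and the 2.5/97.5 percentiles are my own choices):

import numpy as np

def shuffle_interval(booster, X, n_shuffles=100, alpha=0.8, q=(2.5, 97.5)):
    # Per-row empirical interval from repeated shuffled-prefix predictions.
    total = booster.current_iteration()
    k = max(1, int(alpha * total))
    # shuffle_models() reorders the trees in place and returns the booster,
    # so the first k trees are a different random subset on every pass.
    samples = np.stack([
        booster.shuffle_models().predict(X, num_iteration=k)
        for _ in range(n_shuffles)
    ])                                    # shape: (n_shuffles, n_rows)
    lb, ub = np.percentile(samples, q, axis=0)
    point = booster.predict(X)            # full model; unaffected by tree order
    return lb, point, ub

lb, pred, ub = shuffle_interval(gbm.booster_, X_test)
for row in zip(lb[:5], pred[:5], ub[:5]):
    print("LB=%.6f  Pred=%.6f  UB=%.6f" % row)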
Description
Probably this is not a bug. I'm trying to understand how shuffle_models() works, based on this and this. I'm using it for my binary classification LightGBM model. However, when I try it, my prediction is always the same.
I wonder what's wrong with my implementation (if anything is wrong) and how I should interpret this result. I'm using lightgbm = "^4.1.0".