Closed — rohan-gt closed this issue 3 years ago
cc @Yard1 ?
I think it's not something we can influence, as this is early stopping inside LightGBM itself. Our early stopping is implemented on top of any early stopping the estimators themselves may have. Are you using the release version or the master branch? IIRC, LightGBM early stopping is not present in the former.

I followed the logic @inventormc implemented for XGBoost, so they may have a better idea about the details, but the way I understand it is that instead of refitting the model completely for each CV fold, it essentially fits it on all the previous folds plus the current one. It therefore incrementally fits the estimator on a larger and larger portion of the dataset each time, stopping the trial if there is no improvement.
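The incremental scheme described above can be sketched in a few lines. This is not tune-sklearn's actual code — just an illustration of the idea, using scikit-learn's `SGDClassifier` with `partial_fit` as a hypothetical stand-in for the estimator, a fixed holdout fold for scoring, and a made-up `patience` threshold:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=600, random_state=0)

# Partition the data into "folds"; the last one is held out for scoring.
fold_parts = np.array_split(np.arange(len(X)), 5)
holdout = fold_parts[-1]
classes = np.unique(y)

clf = SGDClassifier(random_state=0)
best_score, stale_rounds, patience = -np.inf, 0, 2

for step, part in enumerate(fold_parts[:-1]):
    # Incrementally fit on one more fold's worth of data each step,
    # instead of refitting the model from scratch on every fold.
    clf.partial_fit(X[part], y[part], classes=classes)
    score = clf.score(X[holdout], y[holdout])
    if score > best_score:
        best_score, stale_rounds = score, 0
    else:
        stale_rounds += 1
    # Stop the trial early if the validation score stops improving.
    if stale_rounds >= patience:
        break

print(f"best validation accuracy: {best_score:.3f}")
```

Each step sees a strictly larger portion of the dataset, so a trial that is not improving can be killed before it has paid the full training cost.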
I'm getting the following error while setting `early_stopping=True` in `TuneSearchCV` where `estimator = LGBMClassifier(early_stopping_rounds=50)`: `ValueError: For early stopping, at least one dataset and eval metric is required for evaluation`. The same works fine for XGBoost. A couple of related questions:
1. Do we need to set `early_stopping_rounds` within the estimator while setting `early_stopping=True`?
2. How does `max_iters` come into play? When I set `n_trials=10` and `max_iters=10`, it seems to be running 100 trials anyway. How is the early stopping happening?
3. Can we combine `early_stopping` and `early_stopping_rounds` like how it is done in `LightGBMTunerCV`?