inventormc closed this pull request 4 years ago.
This PR supports early stopping for XGBoost. We leverage the incremental learning capabilities of XGBoost:
Note that this does not necessarily improve performance; rather, it lets us break the training process into multiple parts.
```python
import sklearn.metrics
from xgboost import XGBClassifier

clf = XGBClassifier(n_estimators=10, nthread=8)
base_model = None
for i in range(20):
    # Passing xgb_model continues training from the previous booster.
    z = clf.fit(x_tr, y_tr, xgb_model=base_model)
    y_pr = z.predict(x_te)
    print(sklearn.metrics.mean_squared_error(y_te, y_pr))
    base_model = z.get_booster()
```
resolves #58
microsoft/LightGBM#3057: `init_model` doesn't exist in the latest stable release yet.