microsoft / LightGBM

A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
https://lightgbm.readthedocs.io/en/latest/
MIT License

Suppress warning messages #3436

Closed · jmrichardson closed this issue 4 years ago

jmrichardson commented 4 years ago

Hi, I am unable to disable warning messages. I have found a few references in the issue list, but none of the remedies help. I am using Ray Tune with LightGBM for hyperparameter optimization and scikit-learn's cross_val_score for scoring:

    import numpy as np
    from hyperopt import hp
    from lightgbm import LGBMClassifier
    from ray import tune
    from ray.tune.utils import get_pinned_object  # Ray Tune helper from that era's API
    from sklearn.model_selection import TimeSeriesSplit, cross_val_score

    def train(config, reporter):
        # verbosity/silent set in the constructor in an attempt to silence warnings
        model = LGBMClassifier(objective='multiclass', num_class=3, verbosity=-1, silent=True)
        model.set_params(**config)
        cv = TimeSeriesSplit()
        # X is pinned in the Ray object store as X_pin; y is in scope from the caller
        score = np.mean(cross_val_score(model, get_pinned_object(X_pin), y, cv=cv, scoring='accuracy'))
        tune.report(mean_accuracy=score, done=True)

    config = {
        "verbosity": -1,
        "num_leaves": hp.uniformint("num_leaves", 50, 200),  # high num_leaves can overfit (up to 2^max_depth leaves)
        "max_depth": hp.uniformint("max_depth", 5, 20),  # max tree depth
        "min_data_in_leaf": hp.uniformint("min_data_in_leaf", 100, 500),  # high value helps prevent overfitting
        "bagging_fraction": hp.uniform("bagging_fraction", 0.2, 0.5),  # smaller values reduce bagging overlap (iid)
        "feature_fraction": hp.uniform("feature_fraction", 0.1, 0.5),  # fraction of features per tree
        "max_bin": hp.uniformint("max_bin", 5, 20),  # number of bins for numeric features
        "num_iterations": hp.uniformint("num_iterations", 100, 300),  # number of boosting iterations
    }

I have tried setting both verbose and verbosity to -1 in the params, and also when creating the model. As you can see, I am using cross_val_score to do the cross-validation, so I don't think I can set the verbosity on the Dataset itself. Unfortunately, I am getting many warning messages per tuning run:

(pid=12904) [LightGBM] [Warning] feature_fraction is set=0.3203851704898071, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.3203851704898071
(pid=12904) [LightGBM] [Warning] min_data_in_leaf is set=257, min_child_samples=20 will be ignored. Current value: min_data_in_leaf=257
(pid=12904) [LightGBM] [Warning] bagging_fraction is set=0.46702994911122353, subsample=1.0 will be ignored. Current value: bagging_fraction=0.46702994911122353

Using LightGBM v2.3.1.

Can anyone help me?

StrikerRUS commented 4 years ago

@jmrichardson Hi!

Unfortunately, without a fully reproducible example it is hard to say why setting verbosity=-1 didn't suppress the warnings. However, I'd advise you to fix the root cause of these warnings instead: pass the scikit-learn wrapper's own argument names rather than their LightGBM aliases. Then all the warnings will disappear.
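For example, here is a minimal sketch (an illustration, not code from this thread) of the same search space rewritten with the scikit-learn wrapper's own argument names, following the alias list in the LightGBM parameters documentation; hyperopt's hp is assumed as in the code above:

    config = {
        "num_leaves": hp.uniformint("num_leaves", 50, 200),  # same name in the wrapper, no alias conflict
        "max_depth": hp.uniformint("max_depth", 5, 20),
        "min_child_samples": hp.uniformint("min_child_samples", 100, 500),  # wrapper argument for min_data_in_leaf
        "subsample": hp.uniform("subsample", 0.2, 0.5),  # wrapper argument for bagging_fraction
        "subsample_freq": 1,  # assumption: bagging only takes effect when this is > 0
        "colsample_bytree": hp.uniform("colsample_bytree", 0.1, 0.5),  # wrapper argument for feature_fraction
        "n_estimators": hp.uniformint("n_estimators", 100, 300),  # wrapper argument for num_iterations
        "max_bin": hp.uniformint("max_bin", 5, 20),  # no wrapper argument; passed through via **kwargs unchanged
    }

Because these names match the estimator's own constructor arguments, LightGBM no longer sees two conflicting values for the same parameter, which is exactly what the [Warning] lines were reporting.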

jmrichardson commented 4 years ago

@StrikerRUS thank you for the tip. That worked perfectly; it didn't cross my mind to actually fix the warnings :)

Yingping-LI commented 3 years ago

> @StrikerRUS thank you for the tip. That worked perfectly; it didn't cross my mind to actually fix the warnings :)

@jmrichardson Hi, I have encountered a similar problem. Could you please give an example of how you suppressed these warning messages? Thanks a lot in advance!

github-actions[bot] commented 1 year ago

This issue has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.