Closed · ruizhuanguw · closed 3 years ago
@ruizhuanguw Thanks for using LightGBM and for reporting this issue!
Unfortunately, the piece of code you've provided is not a reproducible example for the described issue. Next time, please include all necessary imports, attach any required data, and so on, so that you provide an MCVE. This time I've made one for you:
import lightgbm as lgb
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split

X, y = load_boston(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

train_data = lgb.Dataset(X_train, y_train)
valid_data = lgb.Dataset(X_test, y_test, reference=train_data)

lgbm_params = {
    "boosting_type": "gbdt",
    "objective": "regression",  # commenting out this line makes evals_result empty after training
    "num_trees": 10,
}

evals_result = {}
booster = lgb.train(
    lgbm_params,
    train_set=train_data,
    valid_sets=[valid_data, train_data],
    valid_names=["Validation", "Training"],
    verbose_eval=False,
    callbacks=[lgb.record_evaluation(evals_result)],
)
print(evals_result)
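For completeness: the report below says that specifying either "metric" or "objective" is enough for the results to be recorded, so requesting a metric by hand should be a workable workaround. A minimal sketch, reusing the Dataset objects from the snippet above ("l2" is just an illustrative metric choice, not something mandated by the report):

# Workaround sketch: ask for a metric explicitly instead of relying on the
# objective's default metric. Reuses train_data / valid_data from above.
params_with_metric = {
    "boosting_type": "gbdt",
    "metric": "l2",  # explicit metric; per the report, this alone suffices
    "num_trees": 10,
}
evals_result_wa = {}
lgb.train(
    params_with_metric,
    train_set=train_data,
    valid_sets=[valid_data, train_data],
    valid_names=["Validation", "Training"],
    verbose_eval=False,
    callbacks=[lgb.record_evaluation(evals_result_wa)],
)
# evals_result_wa should now look roughly like:
# {'Validation': {'l2': [...]}, 'Training': {'l2': [...]}}
print(evals_result_wa)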
This issue has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.
Description
I need to specify "metric" or "objective" in the training parameters to get evaluation results in the callback environment's evaluation_result_list. If both are left at their defaults, evaluation_result_list is [] (see the sketch below). This is inconsistent with the documented behavior of the metric parameter: https://lightgbm.readthedocs.io/en/latest/Parameters.html#metric
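To make concrete what evaluation_result_list refers to: LightGBM passes a CallbackEnv to each callback once per iteration, and record_evaluation reads env.evaluation_result_list. A minimal sketch of a custom callback that prints it; with neither "metric" nor "objective" set, it reportedly prints an empty list every iteration:

def inspect_eval_results(env):
    # env is the CallbackEnv that LightGBM passes to callbacks each iteration.
    # Each entry of evaluation_result_list is a tuple:
    # (dataset_name, metric_name, value, is_higher_better)
    print(env.iteration, env.evaluation_result_list)

# usage: pass it alongside (or instead of) record_evaluation, e.g.
# lgb.train(lgbm_params, train_set=train_data, valid_sets=[valid_data],
#           callbacks=[inspect_eval_results])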
Reproducible example
Environment info
LightGBM version or commit hash: The behavior is seen in 2.3.0 and 3.2.1.
Command(s) you used to install LightGBM
Additional Comments