Issue by @deepx-top (closed 7 years ago)
For multi-class, you cannot use the "l2" and "auc" metrics; please try "multi_logloss" or "multi_error".
BTW, early_stopping_rounds cannot exceed num_boost_round.
It works, thanks @guolinke
This issue has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.
Please search previous issues, Stack Overflow, or other search engines for your question before opening a new one.
For bugs and unexpected issues, please provide the following information so that we can reproduce the problem on our system.
Environment info
Operating System: Windows 10
CPU: E5200
C++/Python/R version: Python 3.5.2
Error Message:
The Python process simply terminates during training, with no error message.
Reproducible examples
I'm using LightGBM in the Kaggle MNIST competition (https://www.kaggle.com/c/digit-recognizer) and scored 97.37%. When I run the code again, I get a Python termination error every time, but when I change the objective from "multiclass" to "regression" and remove "num_class", the code works fine. So is this a "multiclass" bug, or do I just need to update my environment?
Steps to reproduce
1. The main code: