Closed: yama1968 closed this 5 years ago
Great! I tend to work with cases where posterior calibration is important, so don't use weighting. Glad to have it nonetheless.
FYI, I recently found a bug in xgboost split value reporting (see https://github.com/holub008/xrf/issues/2) which may be weakening the power of any rulesets and models you build. A quick fix for now is to set sparse = FALSE in xrf().
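A minimal sketch of the workaround, assuming the usual formula interface (the data frame `df`, the family, and the `xgb_control` settings here are placeholders, not from the thread):

```r
library(xrf)

# Workaround for the split value bug in xrf issue #2:
# pass sparse = FALSE so xgboost trains on a dense matrix,
# avoiding the misreported split values.
fit <- xrf(y ~ ., data = df, family = "binomial",
           xgb_control = list(nrounds = 100, max_depth = 3),
           sparse = FALSE)
```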
Thanks for the fix, I'll try that! Yannick
Great, thanks very much!
Now with weights it works pretty well on unbalanced data. You only have to set the weights to 1 for the majority class and to 1/mean(y) for the minority class during training.
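The weighting scheme above can be sketched as follows, assuming a binary response `y` coded 0/1 with 1 as the minority class (the `weights` argument name and the other call details are assumptions based on the weighting support discussed in this PR, not a confirmed API):

```r
# Case weights per the scheme above: 1 for the majority class,
# 1/mean(y) for the minority class. For 0/1-coded y, mean(y) is the
# minority fraction, so minority cases are upweighted accordingly.
w <- ifelse(y == 1, 1 / mean(y), 1)

# Hypothetical call passing the weights through to training.
fit <- xrf(y ~ ., data = df, family = "binomial",
           xgb_control = list(nrounds = 100, max_depth = 3),
           weights = w)
```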
Best Regards, Yannick