ja-thomas opened 7 years ago
@Coorsaa what's the status here?
I haven't looked into it for a while now, but if you want to see the current status, you can find it in the `eval_metric` branch.
The `generateXgbEvalFun()` function needs improved case handling. While working on it I found another major problem:
`match.fun(paste0("measure", toupper(measure$id)))` within `generateXgbEvalFun()` does not give us the correct result in all cases. However, we cannot simply use `measure$fun`, since unlike the `measureXY(truth, response)` functions, the `measure$fun` functions have more arguments: `function(task, model, pred, feats, extra.args)`.
IMO, this problem can only be solved if we rewrite all `measureXY` functions to the form `"measure"` + the measure's id in capital letters, e.g. `measureKENDALLTAU`, ...
We want to get the correct `eval_metric` for early stopping based on the passed measure.
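One way to sidestep the name-based lookup entirely would be to wrap `measure$fun` in a closure matching xgboost's custom `feval` interface (`function(preds, dtrain)` returning `list(metric = ..., value = ...)`). This is only a sketch, not how the `eval_metric` branch actually does it: the `makeXgbEvalFun` helper and the way the prediction object is rebuilt are assumptions for illustration.

```r
# Sketch (hypothetical helper, not part of mlr): adapt an mlr measure
# to xgboost's custom feval signature function(preds, dtrain).
makeXgbEvalFun = function(measure, task) {
  function(preds, dtrain) {
    truth = xgboost::getinfo(dtrain, "label")
    # measure$fun expects (task, model, pred, feats, extra.args), but
    # many measures only read the prediction, so we pass a minimal
    # prediction-like object and NULL for the rest. This assumption
    # will not hold for measures that genuinely need task/model/feats.
    pred = list(data = data.frame(truth = truth, response = preds))
    value = measure$fun(task = task, model = NULL, pred = pred,
                        feats = NULL, extra.args = measure$extra.args)
    list(metric = measure$id, value = value)
  }
}
```

Whether xgboost should maximize or minimize this metric would still have to be passed separately (e.g. via `maximize = !measure$minimize` in `xgb.train`), since `feval` itself only returns the value.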