Closed: beeb closed this issue 4 years ago
As seen on this line and the next, the `evaluate_models` method adds the columns `eval_f1score_mean` and `eval_f1score_std` in all cases, even when the metric used is different from f1score (e.g. for "continuous" tasks).
Somehow I totally missed this. Thanks. Will try to sort it out today.
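For context, a minimal sketch of the kind of fix this suggests, assuming the scores are aggregated into a pandas DataFrame: derive the column names from the metric that was actually used instead of hardcoding f1score. The function name `summarize_eval_scores` and its parameters are illustrative assumptions, not the project's actual API.

```python
import pandas as pd

def summarize_eval_scores(scores, metric):
    """Aggregate per-run evaluation scores into mean/std columns.

    Hypothetical sketch: the column names are built from the metric
    that was actually used (e.g. "f1score" for classification, or a
    correlation metric for "continuous" tasks) rather than being
    hardcoded to eval_f1score_mean / eval_f1score_std.
    """
    s = pd.Series(scores)
    return pd.DataFrame({
        f"eval_{metric}_mean": [s.mean()],
        f"eval_{metric}_std": [s.std()],
    })

# Usage: summarize_eval_scores([0.61, 0.64, 0.59], metric="spearmanr")
# yields the columns eval_spearmanr_mean and eval_spearmanr_std.
```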