Closed arilwan closed 1 year ago
Thanks for the note. It would be nice to modify the implementation at some point to support this. I just checked, and GridSearchCV in sklearn supports it too, via gs = GridSearchCV(estimator=clf, scoring="roc_auc", ...).
In the meantime, I think you can use
from sklearn.metrics import roc_auc_score
sffs = SFS(..., scoring=roc_auc_score, ...)
That suggestion gives a "not a valid scoring parameter" error:
File "~/venv/lib/python3.8/site-packages/sklearn/metrics/_scorer.py", line 432, in get_scorer
raise ValueError(
ValueError: 'roc_auc_score' is not a valid scoring value. Use sklearn.metrics.get_scorer_names() to get valid options.
Quick fix: using scoring='roc_auc_ovr_weighted' works, according to the table.
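To illustrate the working setup: the original report uses mlxtend's SFS, but the same scorer string behaves identically in plain sklearn utilities, so here is a minimal sketch with cross_val_score on a synthetic 5-class dataset (the dataset and estimator settings are assumptions, chosen to mirror the reported use case):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical 5-class problem, mirroring the report.
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=5, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0)

# The registered multiclass-aware scorer string works where the bare
# metric function did not; the estimator must expose predict_proba.
scores = cross_val_score(clf, X, y, scoring="roc_auc_ovr_weighted", cv=3)
print(scores.mean())
```

The same string should then be usable as SFS(..., scoring="roc_auc_ovr_weighted", ...).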
Thanks! This should perhaps be added to the mlxtend docs in the meantime, before we find more time to make "roc_auc" work out of the box.
I wonder what sklearn uses when gs = GridSearchCV(..., scoring="roc_auc", ...) is called. It probably defaults to either _ovo or _ovr.
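As far as I can tell, it defaults to neither: the plain "roc_auc" scorer passes no multi_class argument, and roc_auc_score's multi_class parameter defaults to "raise", so multiclass targets produce a ValueError rather than silently using _ovo or _ovr. A small check (the toy labels and uniform probabilities are made up for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 2, 0, 1, 2])
y_proba = np.full((6, 3), 1 / 3)  # uniform class probabilities

# Without multi_class=..., roc_auc_score refuses multiclass input,
# which is what the plain "roc_auc" scorer effectively calls.
try:
    roc_auc_score(y_true, y_proba)
    outcome = "supported"
except ValueError:
    outcome = "raises without multi_class"
print(outcome)

# Opting into OvR explicitly works (uniform scores give AUC = 0.5).
print(roc_auc_score(y_true, y_proba, multi_class="ovr"))
```

So for multiclass you have to pick one of the explicit variants yourself.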
I know this is sklearn-related, but it may be important to report here. I am working on a multiclass (5-class) problem with a random forest estimator.
Error:
Package info: