According to the scikit-learn documentation, the roc_auc_score function expects target probability scores, e.g. estimator.predict_proba(X)[:, 1]. However, in Supervised.py, roc_auc_score is given binary predictions, which changes the value returned by roc_auc_score. Is there a specific reason for this, or is it a bug?

In Supervised.py:

y_pred = pipe.predict(X_test)
...
roc_auc = roc_auc_score(y_test, y_pred)
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score
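For reference, here is a minimal standalone sketch (using a plain scikit-learn classifier on synthetic data, not Supervised.py itself) showing that passing hard labels from predict() versus probability scores from predict_proba() generally yields different AUC values:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem, purely for illustration.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Behaviour described in the issue: hard 0/1 labels from predict().
auc_from_labels = roc_auc_score(y_test, pipe.predict(X_test))

# What the scikit-learn docs describe: probability scores for the
# positive class from predict_proba().
auc_from_probs = roc_auc_score(y_test, pipe.predict_proba(X_test)[:, 1])

print(auc_from_labels, auc_from_probs)  # the two values generally differ
```

With hard labels the ROC curve collapses to a single threshold, so the reported AUC is usually lower and less informative than the one computed from probability scores.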