This PR adds the ability to pass a test set, specifically an `X_test` and a `y_test`, to `CVEvaluator`. If a test set is provided, all scorers are also computed on it and recorded in the report, just as with `train_scores`.
These are available in `report.summary` as `"test_mean_{metric}"`, `"test_std_{metric}"`, and also `"split_{i}:test_{metric}"` for each split, just as with train scores.
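As a quick illustration of the naming scheme above, here is a small sketch that builds the expected report keys for a metric; the metric name and split count are hypothetical, only the key format comes from this PR:

```python
# Hypothetical metric and split count, just to show the key format.
metric = "accuracy"
n_splits = 3

# Aggregate keys recorded in report.summary.
summary_keys = [f"test_mean_{metric}", f"test_std_{metric}"]

# Per-split keys, one per CV fold.
split_keys = [f"split_{i}:test_{metric}" for i in range(n_splits)]

print(summary_keys)  # ['test_mean_accuracy', 'test_std_accuracy']
print(split_keys)
```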
For most cases, the usage is rather straightforward. To pass custom parameters to the scorers at test time, each such scorer parameter must be prefixed with `"test_{key}"`, e.g.:
```python
CVEvaluator(
    X, y,
    X_test=X_test, y_test=y_test,
    params={
        "sample_weight": sample_weight,
        "test_sample_weight": test_sample_weight,
        "pos_label": 0,
        "test_pos_label": 0,  # Note that it needs to be duplicated.
    },
)
```
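The routing implied by the prefix convention can be sketched as follows. This is a hypothetical helper, not the PR's actual code: keys starting with `"test_"` go to the test-time scorers with the prefix stripped, everything else goes to the train-time scorers.

```python
def split_test_params(params: dict) -> tuple[dict, dict]:
    """Sketch of the 'test_' prefix routing described above (hypothetical helper)."""
    prefix = "test_"
    # Keys without the prefix are passed to scorers on the CV splits.
    train = {k: v for k, v in params.items() if not k.startswith(prefix)}
    # Prefixed keys are stripped and passed to scorers on the test set.
    test = {k[len(prefix):]: v for k, v in params.items() if k.startswith(prefix)}
    return train, test

train_params, test_params = split_test_params({
    "sample_weight": [1.0, 2.0],
    "test_sample_weight": [0.5],
    "pos_label": 0,
    "test_pos_label": 0,
})
print(train_params)  # {'sample_weight': [1.0, 2.0], 'pos_label': 0}
print(test_params)   # {'sample_weight': [0.5], 'pos_label': 0}
```

This is why `pos_label` appears twice in the example above: the unprefixed key only reaches the train-time scorers, so a `test_`-prefixed duplicate is needed for the test set.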
The implementation is rather straightforward, apart from the test scorer parameters and the tests verifying that they work; hopefully there is enough commentary in the code to guide you through.