Open · deschen1 opened 2 years ago
```
# We have:
true positive rate = sensitivity(), recall()
true negative rate = specificity()

# We don't have:
false positive rate = fall_out()
false negative rate = miss_rate()
```
If we decide to add these: since we went with `sensitivity()` and `specificity()` instead of `true_positive_rate()` and `true_negative_rate()`, I'd advocate for `fall_out()` and `miss_rate()` over `false_positive_rate()` or `false_negative_rate()`, just to be consistent. The docs can mention that they are equivalent.
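If it helps make the proposal concrete, here is a minimal sketch (not an existing yardstick API) of how `fall_out()` could piggyback on the current `spec()`/`specificity()` machinery; `fall_out_vec()` and `fall_out()` are just the hypothetical names discussed above:

```r
library(yardstick)

# Hypothetical vector version: false positive rate = 1 - specificity.
# spec_vec() already handles the factor checks, estimator choice, etc.
fall_out_vec <- function(truth, estimate, ...) {
  1 - spec_vec(truth, estimate, ...)
}

# Hypothetical data frame version that reuses specificity() and relabels the result.
# miss_rate() would follow the same pattern with sensitivity()/sens_vec().
fall_out <- function(data, truth, estimate, ...) {
  res <- specificity(data, {{ truth }}, {{ estimate }}, ...)
  res$.metric <- "fall_out"
  res$.estimate <- 1 - res$.estimate
  res
}

# Quick check on the two_class_example data shipped with yardstick
data("two_class_example")
fall_out(two_class_example, truth, predicted)
fall_out_vec(two_class_example$truth, two_class_example$predicted)
```

A real implementation would presumably go through yardstick's custom-metric constructors so it can be used inside `metric_set()`, but the computation itself is nothing more than `1 - specificity`.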
Not sure if I'm missing something obvious, but I didn't find the false positive rate among the available metrics in yardstick. I know I could simply calculate it as `1 - specificity`, but that feels like an unnecessary extra step when so many other metrics are available by default in yardstick: https://yardstick.tidymodels.org/articles/metric-types.html
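For concreteness, this is the extra step being described, using the `two_class_example` data that ships with yardstick (only existing yardstick functions here; `fpr` is just a local variable name):

```r
library(yardstick)

data("two_class_example")

# Today's workaround: compute specificity, then subtract the estimate from 1
spec_res <- specificity(two_class_example, truth, predicted)
fpr <- 1 - spec_res$.estimate

# Or directly with the vector interface
1 - spec_vec(two_class_example$truth, two_class_example$predicted)
```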