Closed oliverroc closed 5 years ago
Adding a number of references here, because there isn't a simple answer as to what the result should be. Note that this affects all three of precision/recall/f_meas:
This one suggests returning 1: https://github.com/dice-group/gerbil/wiki/Precision,-Recall-and-F1-measure
sklearn returns 0: https://github.com/scikit-learn/scikit-learn/blob/b4c1c4ed833db5b0fbff0d110b040a34a84e1411/sklearn/metrics/classification.py#L1198
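To make the undefined case concrete, here is a minimal sketch (my own helper, not yardstick's or sklearn's code) of what "returns 0" means: precision is TP / (TP + FP), and when a class is never predicted the denominator is 0, so scikit-learn's default falls back to 0:

```python
def precision(y_true, y_pred, positive=1):
    """Precision = TP / (TP + FP); returns 0.0 when the positive class
    is never predicted, mirroring scikit-learn's default fallback."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    if tp + fp == 0:
        return 0.0  # undefined: no predicted positives at all
    return tp / (tp + fp)

print(precision([1, 1, 0], [0, 0, 0]))  # class 1 never predicted
print(precision([1, 1, 0], [1, 0, 0]))  # one TP, no FP
```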
I don't think `NA` is ideal to return, because in macro averaging you might compute multiple precision values and only have one of them crap out on you. You might not want to `NA` the entire averaged score because of that.
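The trade-off can be sketched directly (a hypothetical illustration, not yardstick's implementation; NaN plays the role of R's `NA`): with one undefined per-class precision, propagating, dropping, or zeroing the undefined value give three different macro averages.

```python
import math

# Hypothetical per-class precisions; one class was never predicted,
# so its precision is undefined (NaN standing in for R's NA).
per_class = [0.9, 0.8, math.nan]

# Option 1: propagate -> the whole macro average becomes NaN
propagate = sum(per_class) / len(per_class)

# Option 2: drop undefined classes (like na.rm = TRUE)
defined = [p for p in per_class if not math.isnan(p)]
drop_na = sum(defined) / len(defined)

# Option 3: treat undefined as 0 (scikit-learn's default)
as_zero = sum(0.0 if math.isnan(p) else p for p in per_class) / len(per_class)

print(propagate, drop_na, as_zero)
```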
Hello,
Yesterday I installed yardstick and was using it for a classification problem I am working on. I have no more than 400 categories to classify.
I was using the precision and recall functions; the first one returned `.estimate = NA` and the second one 0.7. I think this might be because I have some true categories with no correct predictions, but I am not sure...
If the returned `NA` is because of that, wouldn't it be better to compute the statistic with `na.rm = TRUE` and raise a warning listing the categories with no matching predictions?
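A rough sketch of that proposed behavior (the function name and shape are mine, not yardstick's API): average only the defined per-class values, and warn with the list of categories that had no matching predictions.

```python
import math
import warnings

def macro_precision_na_rm(per_class):
    """per_class: dict mapping category -> precision (NaN if undefined).
    Averages the defined values and warns about dropped categories."""
    undefined = [cat for cat, p in per_class.items() if math.isnan(p)]
    if undefined:
        warnings.warn(f"No matching predictions for categories {undefined}; "
                      "they were dropped from the macro average.")
    defined = [p for p in per_class.values() if not math.isnan(p)]
    return sum(defined) / len(defined)

result = macro_precision_na_rm({"a": 0.9, "b": 0.7, "c": math.nan})
```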
Here I created a simple example to reproduce my case:
Thanks,