Hi there,
I was wondering if there is a way to set the numerical precision (number of digits) of displayed values such as roc_auc.

To see what I mean, let me point you to the sklearn API, such as for Classification Report, where the parameter digits defines the precision at which values are presented. This is especially important when, for example, one is training classifiers that are already in the top, say, 99.5%+ of accuracy/precision/recall/AUC and wants to study differences among classifiers that are competing at the 0.1% level.

Namely, I noticed that digit precision is not consistent throughout scikit-plot: roc_auc is presented with three-digit precision, while precision_recall is presented with four-digit precision. As you can imagine, for scientific publication purposes it is a bit inelegant to present bounded metrics with different precision.
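For reference, this is roughly what I mean by the sklearn behavior: classification_report exposes a digits parameter that controls how many decimal places every metric in the report is rendered with (the labels and data below are just a made-up toy example).

```python
from sklearn.metrics import classification_report

# Toy ground truth and predictions, purely for illustration
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

# digits=4 renders every precision/recall/f1 value with four decimal places,
# so e.g. 2/3 shows up as 0.6667 instead of the default 0.67
report = classification_report(y_true, y_pred, digits=4)
print(report)
```

A similar parameter on the scikit-plot plotting functions (or a single consistent default across them) would resolve the inconsistency between the ROC and precision-recall plots.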
Thanks!