I forgot to implement a few QoL improvements for the comparison metrics; this issue just tracks them. It's not very important or anything.
ComparisonMetrics.calculateMultiLabelROC has no optional parameter for passing a list of confidence thresholds. While I think it is always favorable to derive all thresholds from the input data, this can take very long for large datasets, so for a quick sneak peek at the ROC, passing thresholds manually should be allowed.
The same is true for ComparisonMetrics.multiLabelThresholdMap.
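As a rough illustration of what the optional parameter could look like, here is a minimal sketch of a per-label ROC computation where `thresholds` defaults to all distinct scores but can be overridden with a short user-supplied list. The function name, signature, and NumPy-based implementation are all hypothetical and not the actual ComparisonMetrics API:

```python
import numpy as np

def multi_label_roc(y_true, y_score, thresholds=None):
    """Compute per-label ROC points (FPR, TPR).

    If `thresholds` is None, every distinct score in `y_score` is used,
    which is exact but slow on large datasets; passing a short list of
    thresholds gives a quick approximation instead. (Hypothetical sketch,
    not the real ComparisonMetrics implementation.)
    """
    y_true = np.asarray(y_true, dtype=bool)    # shape (n_samples, n_labels)
    y_score = np.asarray(y_score, dtype=float)
    if thresholds is None:
        # exact mode: all distinct confidences, descending
        thresholds = np.unique(y_score)[::-1]
    else:
        # quick mode: user-supplied thresholds, sorted descending
        thresholds = np.sort(np.asarray(thresholds, dtype=float))[::-1]

    curves = []
    for label in range(y_true.shape[1]):
        t, s = y_true[:, label], y_score[:, label]
        pos, neg = t.sum(), (~t).sum()
        fpr, tpr = [], []
        for thr in thresholds:
            pred = s >= thr
            tp = np.sum(pred & t)
            fp = np.sum(pred & ~t)
            tpr.append(tp / pos if pos else 0.0)
            fpr.append(fp / neg if neg else 0.0)
        curves.append((np.array(fpr), np.array(tpr)))
    return curves
```

With, say, `thresholds=[0.25, 0.5, 0.75]` the loop runs 3 iterations per label instead of one per distinct score, which is the speedup this issue is asking for.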
The last 4 metrics in the docs are missing their formulas.