Open daisukelab opened 1 year ago
Other issues: `numpy` has to be replaced with `np` in `_average_precision` below.
```python
def _average_precision(y_true, pred_scores, tps_weight=None, fps_weight=None):
    precisions, recalls, thresholds = precision_recall_curve(
        y_true, pred_scores, tps_weight=tps_weight, fps_weight=fps_weight
    )
    precisions = numpy.array(precisions)
    recalls = numpy.array(recalls)
    AP = numpy.sum((recalls[:-1] - recalls[1:]) * precisions[:-1])
    return AP
```
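For reference, the discrete sum in `_average_precision` is the standard step-wise area under the precision-recall curve. A minimal sketch below reproduces it with plain scikit-learn (without the repo's `tps_weight`/`fps_weight` extensions); the labels and scores are made up for illustration:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

# Toy binary labels and prediction scores (made up for illustration).
y_true = np.array([0, 1, 1, 0, 1])
scores = np.array([0.1, 0.9, 0.8, 0.4, 0.35])

precisions, recalls, _ = precision_recall_curve(y_true, scores)

# Same discrete sum as in _average_precision: recall deltas times
# the precision at the lower-threshold end of each step.
ap = np.sum((recalls[:-1] - recalls[1:]) * precisions[:-1])

# sklearn's average_precision_score computes the identical sum.
ap_ref = average_precision_score(y_true, scores)
```

On this toy input both values come out equal, which is expected since `average_precision_score` internally evaluates the same expression.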
Please also consider providing requirements.txt.
I found another issue w.r.t. compatibility of sklearn: 1.3.0 works fine if I add `import sklearn.utils.multiclass` and `import sklearn.metrics`.
```diff
diff --git a/ontology_audio_tagging/loss_and_eval_metric.py b/ontology_audio_tagging/loss_and_eval_metric.py
index ecfa2fc..b608585 100644
--- a/ontology_audio_tagging/loss_and_eval_metric.py
+++ b/ontology_audio_tagging/loss_and_eval_metric.py
@@ -3,6 +3,8 @@ from tqdm import tqdm
 import os
 import warnings
 import sklearn
+import sklearn.utils.multiclass
+import sklearn.metrics
 import torch
 from numba import jit
 from ontology_audio_tagging.utils import load_pickle, save_pickle
@@ -160,9 +162,9 @@ def _average_precision(y_true, pred_scores, tps_weight=None, fps_weight=None):
     precisions, recalls, thresholds = precision_recall_curve(
         y_true, pred_scores, tps_weight=tps_weight, fps_weight=fps_weight
     )
-    precisions = numpy.array(precisions)
-    recalls = numpy.array(recalls)
-    AP = numpy.sum((recalls[:-1] - recalls[1:]) * precisions[:-1])
+    precisions = np.array(precisions)
+    recalls = np.array(recalls)
+    AP = np.sum((recalls[:-1] - recalls[1:]) * precisions[:-1])
     return AP
@@ -182,7 +184,7 @@ def mask_weight(weight, threshold=1.0):
     return ones_matrix
-@jit(nopython=True)
+#@jit(nopython=True)
 def build_ontology_fps_sample_weight_min(target, weight, class_idx):
     ret = []
     for i in range(target.shape[0]):
```
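As an aside, `nopython=True` commonly fails on functions that accumulate results in an untyped Python list, which the `ret = []` pattern above suggests. Since the full function body isn't shown here, the sketch below is only a hypothetical stand-in (not the repo's actual logic) illustrating a shape that nopython mode generally handles better: writing per-sample results into a preallocated array.

```python
import numpy as np

def per_row_min_weight(target, weight):
    # Hypothetical stand-in for the per-sample loop: instead of
    # appending to a Python list (hard for numba's nopython mode to
    # type), write results into a preallocated output array.
    out = np.empty(target.shape[0], dtype=np.float64)
    for i in range(target.shape[0]):
        active = target[i] > 0  # classes present in sample i
        out[i] = weight[active].min() if active.any() else 1.0
    return out

# Toy input: two samples, three classes.
target = np.array([[1, 0, 1], [0, 0, 0]])
weight = np.array([0.2, 0.5, 0.9])
row_w = per_row_min_weight(target, weight)
```

With a fixed-dtype output array like this, re-enabling `@jit(nopython=True)` is more likely to compile, though I haven't verified it against the original function.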
P.S. Completing `evaluation_metric_test.py` takes 1m17.234s when using the `ontology_weight.pkl.tmp` file created in the previous run. I found that acceptable for a single evaluation.
Thanks.
Hello, thanks for sharing the OmAP code.

I'm trying to use the code and have found that `build_ontology_fps_sample_weight_min` fails with the error below. If I remove `@jit(nopython=True)`, the issue goes away. https://github.com/haoheliu/ontology-aware-audio-tagging/blob/main/ontology_audio_tagging/loss_and_eval_metric.py#L185

I would like to evaluate as many models as possible with OmAP, but I am concerned that calculating OmAP seems to be very time consuming. The `ontology_weight.pkl.tmp` file is also very large, and I began to think it would be difficult to measure OmAP every time I run the evaluation program. Any advice on computation time or temporary file size would be appreciated.

Thanks.
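On the temporary-file-size concern, one generic option is to cache the precomputed weight matrix as a compressed `.npz` instead of a raw pickle, so later runs skip the expensive recomputation and the file on disk shrinks. The function and file names below are hypothetical; the repo's actual handling of `ontology_weight.pkl.tmp` may differ.

```python
import os
import tempfile
import numpy as np

def cached_weight(path, compute_fn):
    """Load the weight matrix from a compressed .npz cache if present,
    otherwise compute it once and store it for later runs."""
    if os.path.exists(path):
        return np.load(path)["weight"]
    weight = compute_fn()
    np.savez_compressed(path, weight=weight)
    return weight

# Demo with a random matrix standing in for the ontology weights.
path = os.path.join(tempfile.mkdtemp(), "ontology_weight.npz")
w1 = cached_weight(path, lambda: np.random.rand(100, 100))
w2 = cached_weight(path, lambda: np.zeros((1, 1)))  # cache hit: not recomputed
```

Since `savez_compressed` is lossless, the second call returns a matrix identical to the first, and only the initial run pays the computation cost.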