fgnt / pb_bss

Collection of EM algorithms for blind source separation of audio signals
MIT License

Allow not computing all metrics for evaluation #21

Closed mpariente closed 4 years ago

mpariente commented 4 years ago

Hi,

Both InputMetrics and OutputMetrics have an as_dict function that returns all the metrics as a dict. I think this is a nice API, but it would be even nicer to be able to control which metrics get computed, without calling them one by one. What do you think?

Something like this might do:

Index: pb_bss/evaluation/wrapper.py
===================================================================
--- pb_bss/evaluation/wrapper.py    (revision 8c31ef0b1e32d355f468170f07e41f989d8cf4c6)
+++ pb_bss/evaluation/wrapper.py    (date 1579701195206)
@@ -318,6 +318,18 @@
             return_dict=True,
         )

+    def sdr(self):
+        return self.mir_eval['sdr']
+
+    def sir(self):
+        return self.mir_eval['sir']
+
+    def sar(self):
+        return self.mir_eval['sar']
+
+    def selection(self):
+        return self.mir_eval['selection']
+
     @cached_property.cached_property
     def pesq(self):
         return pb_bss.evaluation.pesq(
@@ -342,6 +354,15 @@
         )
         return invasive_sxr

+    def invasive_sdr(self):
+        return self.invasive_sxr['sdr']
+
+    def invasive_sir(self):
+        return self.invasive_sxr['sir']
+
+    def invasive_sar(self):
+        return self.invasive_sxr['sar']
+
     @cached_property.cached_property
     def stoi(self):
         return pb_bss.evaluation.stoi(
@@ -383,3 +404,8 @@
             metrics['invasive_sxr_snr'] = self.invasive_sxr['snr']

         return metrics
+
+    def get_as_dict(self, *metric_names):
+        metrics = dict()
+        for m in metric_names:
+            metrics[m] = getattr(self, m)
+        return metrics

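As a usage sketch (with a toy class standing in for OutputMetrics, since the real one computes its metrics via pb_bss.evaluation), the call site would look like:

```python
# Toy stand-in for OutputMetrics, only to illustrate the proposed
# get_as_dict call site; the metric values here are placeholders.
class ToyMetrics:
    @property
    def sdr(self):
        return 10.0  # placeholder for the real SDR computation

    @property
    def stoi(self):
        return 0.9  # placeholder for the real STOI computation

    def get_as_dict(self, *metric_names):
        # Collect only the requested metrics by attribute name.
        metrics = dict()
        for m in metric_names:
            metrics[m] = getattr(self, m)
        return metrics

metrics = ToyMetrics().get_as_dict('sdr', 'stoi')  # {'sdr': 10.0, 'stoi': 0.9}
```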
I'm willing to put more effort into it if you're interested, of course.

LukasDrude commented 4 years ago

In general, I think your idea is good.

Do you have a particular use case in mind where you need to switch between which metrics you need frequently?

In my own experiments I tend to calculate all metrics and then decide later, e.g., when writing the report, which metrics are required.

An alternative is to implement everything as an item getter (more Pythonic, less Javaesque):

result = {k: metric[k] for k in metric_names}

I think that reads well on your side and can be implemented with less redundant code internally. Would you be fine with that?
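A minimal sketch of that item-getter idea, with hypothetical placeholder computations standing in for the real pb_bss.evaluation calls (the actual class caches via cached_property.cached_property; functools.cached_property is used here to keep the example self-contained):

```python
import functools

class OutputMetricsSketch:
    """Each metric is computed lazily once, and __getitem__ dispatches
    string keys to the corresponding attribute."""

    @functools.cached_property
    def sdr(self):
        return 12.3  # placeholder for a mir_eval-based SDR computation

    @functools.cached_property
    def stoi(self):
        return 0.91  # placeholder for pb_bss.evaluation.stoi(...)

    def __getitem__(self, name):
        # Item access reuses attribute lookup, so no per-metric
        # getter methods are needed.
        return getattr(self, name)

    def as_dict(self, metric_names=('sdr', 'stoi')):
        # Selecting a subset of metrics is then a one-liner.
        return {k: self[k] for k in metric_names}
```

With this, the caller-side selection is exactly the dict comprehension above: `{k: metric[k] for k in metric_names}`.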

mpariente commented 4 years ago

Thanks for your answer.

Do you have a particular use case in mind where you need to switch between which metrics you need frequently?

No, actually, they'll always stay the same within a single script. I can write the dictionary by hand in the given script and call it. I just thought it might be good to make it a built-in function. If you're willing to implement it, you're the best person to decide how to do it! :wink: