-
**Your Question**
I am using the following code for the evaluation of my dataset. I recently upgraded from 0.1.13 to 0.1.18 to use the new metrics (noise_sensitivity_relevant, noise_sensitiv…
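To make the setup concrete, here is a minimal sketch of this kind of evaluation call (my own illustration, not the exact code from the issue; the import path and dataset column names are assumptions based on the 0.1.x docs):

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import noise_sensitivity_relevant  # assumed import path for the new metric

# Toy single-row dataset using the usual ragas column names (an assumption on my part).
dataset = Dataset.from_dict({
    "question": ["When was the first Super Bowl played?"],
    "answer": ["The first Super Bowl was held on January 15, 1967."],
    "contexts": [["The first AFL-NFL World Championship Game was played on January 15, 1967."]],
    "ground_truth": ["January 15, 1967"],
})

result = evaluate(dataset, metrics=[noise_sensitivity_relevant])
print(result)
```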
-
Hi Team,
I was wondering how you are computing the metrics for evaluation. I was going through the `metrics.py` file and came across the `ap_per_class` function, which seems to be computing the average prec…
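For context, this is my current mental model of the standard per-class AP recipe, which I assume `ap_per_class` follows in spirit (a rough sketch, not the repo's actual code):

```python
import numpy as np

def average_precision(tp, conf, n_gt):
    """Sketch of per-class AP.
    tp:   boolean array, True where a detection matched a ground-truth box
    conf: confidence score of each detection
    n_gt: number of ground-truth objects of this class
    """
    order = np.argsort(-conf)                 # rank detections by confidence, descending
    tp = tp[order].astype(float)
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(n_gt, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # enforce a monotonically decreasing precision envelope, then integrate over recall
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    return float(np.trapz(precision, recall))
```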
-
There are no clear instructions in the repo on how to calculate and verify the metrics published in the paper, nor are they calculated in the training and validation steps; only the input and denoised im…
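For example, if the reported numbers are standard PSNR/SSIM between the clean and denoised images (my assumption, since the repo does not say), verification could be as simple as the following; the file names are hypothetical, and `channel_axis` assumes a recent scikit-image:

```python
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical file names; the repo does not document where outputs are saved.
clean = io.imread("clean.png")
denoised = io.imread("denoised.png")

psnr = peak_signal_noise_ratio(clean, denoised, data_range=255)
ssim = structural_similarity(clean, denoised, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```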
-
Hey, can you please elaborate on how you computed recall@3? The cosine similarity between the true distractors and the predicted distractors lies between 0 and 1. Please elaborate on how you converted this…
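To make the question concrete, the only interpretation I could come up with is ranking candidates by cosine similarity and checking the top 3 (a sketch of my guess, not necessarily what you did):

```python
import numpy as np

def recall_at_3(similarities, true_indices):
    """similarities: cosine similarity of each candidate distractor to the target
    true_indices: indices of the ground-truth distractors
    Recall@3 = fraction of true distractors that appear among the top-3 ranked candidates."""
    top3 = np.argsort(-similarities)[:3]
    hits = len(set(top3.tolist()) & set(true_indices))
    return hits / len(true_indices)

# Example: 3 true distractors, two of them ranked in the top 3 -> recall@3 = 0.67
print(recall_at_3(np.array([0.91, 0.15, 0.78, 0.40, 0.83]), [0, 2, 3]))
```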
-
Hey,
I ran inference on the 29 Huang-annotated sequences from DAVIS 2017.
```
srun python video_completion.py \
--mode object_removal \
--seamless \
--path ../data/…
```
-
Evaluation of source separation by
- SDR (improvement)
- SIR (improvement)
- SAR
These are computed with `mir_eval`.
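A minimal sketch of the underlying `mir_eval` call (the arrays here are random placeholders; the improvement variants are obtained by subtracting the same metric computed on the unprocessed mixture):

```python
import numpy as np
import mir_eval

# reference_sources / estimated_sources: (n_sources, n_samples) arrays.
# Random placeholders just to show the call.
rng = np.random.default_rng(0)
reference_sources = rng.standard_normal((2, 44100))
estimated_sources = reference_sources + 0.1 * rng.standard_normal((2, 44100))

# bss_eval_sources returns per-source SDR, SIR, SAR plus the best source permutation.
sdr, sir, sar, perm = mir_eval.separation.bss_eval_sources(
    reference_sources, estimated_sources
)
print(sdr, sir, sar, perm)
```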
-
Hello, may I ask about the difference between the evaluation metrics in your paper (Pixel AUROC, Sample AUROC 0.991, Pixel AUPR 0.971) and the evaluation metrics image-level AUROC and pixel-based AUROC?
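For what it's worth, my understanding of the usual distinction is that sample/image-level AUROC scores one value per image, while pixel-level AUROC/AUPR treats every pixel as a sample. A toy sketch with made-up data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n, h, w = 8, 64, 64

# Toy per-pixel anomaly scores and ground-truth masks: the first 4 images are
# normal (empty masks), the last 4 contain a small anomalous square.
scores = rng.random((n, h, w))
masks = np.zeros((n, h, w), dtype=int)
masks[4:, 10:20, 10:20] = 1
scores[4:, 10:20, 10:20] += 1.0          # anomalous regions score higher

# Image-level (sample) AUROC: one label/score per image, e.g. the max pixel score.
image_labels = masks.reshape(n, -1).max(axis=1)
image_scores = scores.reshape(n, -1).max(axis=1)
image_auroc = roc_auc_score(image_labels, image_scores)

# Pixel-level AUROC / AUPR: every pixel is treated as one sample.
pixel_auroc = roc_auc_score(masks.ravel(), scores.ravel())
pixel_aupr = average_precision_score(masks.ravel(), scores.ravel())
print(image_auroc, pixel_auroc, pixel_aupr)
```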
-
NOTE: This isn't really an issue, more of a discussion topic / idea that I wanted to raise and get some feedback on to see if there is interest or value in it before I implement something for it. Also…
-
We are now actively developing VERSA for general-purpose speech/audio evaluation.
- Additional Metrics to add
  - [x] torch-squim (metrics done, pending adding to the interface) https://pytorch.or…
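A possible usage sketch for the torch-squim objective metrics via the torchaudio pipeline (the file path is a placeholder; how this will be wrapped in the VERSA interface is still open):

```python
import torch
import torchaudio

# Reference-free objective metric estimates (STOI, PESQ, SI-SDR) from torch-squim.
model = torchaudio.pipelines.SQUIM_OBJECTIVE.get_model()

waveform, sr = torchaudio.load("sample.wav")                      # placeholder path
waveform = torchaudio.functional.resample(waveform, sr, 16000)    # SQUIM expects 16 kHz

with torch.no_grad():
    stoi, pesq, si_sdr = model(waveform)
print(stoi, pesq, si_sdr)
```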
-
Hi,
I have read your paper. That's really impressive.
But I found that the topological metrics you used cannot be found in your code repository.
Could you please share the code for topological metri…