clamsproject / aapb-evaluations

Collection of evaluation codebases
Apache License 2.0

asr_eval update #56

Closed 1192119703jzx closed 4 months ago

1192119703jzx commented 5 months ago

Replace batch_asr_eval.py and asr_eval.py with a new file, evaluate.py. Update the code to use goldretriever.py for fetching the gold files. Rewrite the asr_eval README.md based on the template. Fixes #38. Resolves #54.
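For context, a minimal sketch of how gold files might be fetched through goldretriever.py before running evaluate.py. The `download_golds` function name, its parameters, and the URL below are assumptions for illustration; check goldretriever.py in this repo for the actual interface.

```python
# Hypothetical usage sketch, not the PR's actual code.
# `download_golds(url, folder_name)` is an assumed signature; verify against
# goldretriever.py before relying on it.
from goldretriever import download_golds

# Placeholder URL for the directory that holds the gold transcripts.
GOLD_URL = "https://github.com/clamsproject/aapb-annotations/tree/main/..."

# Download the gold files into a local folder and get its path back.
gold_dir = download_golds(GOLD_URL, folder_name="golds")
print(f"gold files downloaded to {gold_dir}")
```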

1192119703jzx commented 5 months ago

This pull request also fixes the bugs mentioned in issue #38. Issue #38 reports 14 correctly outputted files, 2 list-index errors, and 4 expected StopIteration errors. The new evaluate.py produces 4 list-index errors, all caused by prepared MMIF files that contain only error messages and no annotations. The 2 list-index errors from issue #38 overlap with two of these four. Of the 4 expected StopIteration errors in issue #38, two now return valid output with the new evaluate.py, and the other two overlap with the remaining two list-index errors in the new output.
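To illustrate the failure mode: if a prediction MMIF file carries only an error message and no views or annotations, indexing into its annotation list raises an IndexError. A guard roughly along these lines would skip such files; the function name and field access below are illustrative, not the actual evaluate.py code.

```python
import json
import sys

def load_hypothesis_annotations(mmif_path):
    """Return the annotation list from a prediction MMIF file, or None when the
    file holds only an error message (illustrative logic, not evaluate.py)."""
    with open(mmif_path) as f:
        mmif = json.load(f)
    views = mmif.get("views", [])
    # A run that failed upstream produces no usable view, so an unguarded
    # views[0]["annotations"][0] lookup is what triggers the IndexError.
    if not views or not views[-1].get("annotations"):
        print(f"skipping {mmif_path}: no annotations (error-only MMIF)",
              file=sys.stderr)
        return None
    return views[-1]["annotations"]
```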

keighrim commented 5 months ago

Can you consider replacing the WER implementation from torchmetrics with https://pypi.org/project/jiwer/? I think installing the whole torch library (which comes with CUDA stuff) just to do simple counting and arithmetic is overkill.
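For illustration, a minimal sketch of the suggested swap: jiwer computes word error rate directly from reference/hypothesis strings, so no torch dependency is needed. The sentences below are placeholders, not data from this evaluation.

```python
# pip install jiwer
import jiwer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"

# jiwer.wer returns the word error rate as a float (it can exceed 1.0 when the
# hypothesis has many insertions).
wer = jiwer.wer(reference, hypothesis)
print(f"WER: {wer:.3f}")
```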