Closed: roedoejet closed this pull request 4 weeks ago.
Review changes with SemanticDiff.
Analyzed 3 of 5 files.
Overall, the semantic diff is 6% smaller than the GitHub diff.
| | Filename | Status |
|---|---|---|
| :heavy_check_mark: | everyvoice/cli.py | Analyzed |
| :grey_question: | everyvoice/evaluation.py | Unsupported file format |
| :heavy_check_mark: | everyvoice/run_tests.py | 83.33% smaller |
| :heavy_check_mark: | everyvoice/tests/test_cli.py | Analyzed |
| :grey_question: | everyvoice/tests/test_evaluation.py | Unsupported file format |
CLI load time: 0:00.25
Pull Request HEAD: 8a9d73388efd10a817b3b1327fc0acf03dfb56a0
Imports that take more than 0.1 s:
import time: self [us] | cumulative | imported package
import time: 249 | 100583 | typer
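For reference, a report like the one above can be reproduced locally with CPython's standard `-X importtime` flag; the module used and the `tail` filtering below are illustrative, not the exact command the bot runs:

```shell
# -X importtime writes one "self [us] / cumulative / imported package" line
# per import to stderr. Substitute the package under investigation
# (e.g. typer) for json below.
python -X importtime -c "import json" 2>&1 | tail -n 5
```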
Attention: Patch coverage is 86.25000% with 11 lines in your changes missing coverage. Please review.
Project coverage is 74.60%. Comparing base (4c8bf94) to head (8a9d733). Report is 7 commits behind head on main.
Files with missing lines | Patch % | Lines |
---|---|---|
everyvoice/cli.py | 86.66% | 2 Missing and 4 partials :warning: |
everyvoice/evaluation.py | 85.71% | 2 Missing and 3 partials :warning: |
@wiitt brought up a good point - we should also print out the file basename or path when printing to evaluation.json so that we can determine which files received particular scores.
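A minimal sketch of what that could look like — keying each score by the wav's basename before writing evaluation.json. The helper name and the shape of `scores` are hypothetical illustrations, not EveryVoice's actual API:

```python
import json
from pathlib import Path

def write_scores(scores, out_path="evaluation.json"):
    """Key each score by the wav's basename so results are traceable to files.

    `scores` maps input wav paths to metric values; this shape is a
    hypothetical illustration, not EveryVoice's actual data structure.
    """
    keyed = {Path(p).name: s for p, s in scores.items()}
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(keyed, f, indent=2)

write_scores({"data/clip_001.wav": 3.82, "data/clip_002.wav": 4.10})
```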
Just taking an overview look: test_evaluation.py needs to be registered by adding it to some suite in run_tests.py so it runs with the rest of the tests.
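A hedged sketch of what registering the module could look like — the suite name, registry layout, and loader helper here are assumptions for illustration, not run_tests.py's actual structure:

```python
import unittest

# Hypothetical suite registry; run_tests.py's real grouping may differ.
SUITES = {
    "dev": [
        "everyvoice.tests.test_cli",
        "everyvoice.tests.test_evaluation",  # newly registered test module
    ],
}

def load_suite(name):
    """Build a unittest.TestSuite from the dotted module names in a suite."""
    loader = unittest.TestLoader()
    return unittest.TestSuite(loader.loadTestsFromName(m) for m in SUITES[name])
```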
@roedoejet Sam and I figured out pre-commit CI, it's fixed now.
PR Goal?
Allow EveryVoice to do some basic evaluation by running
everyvoice evaluate
on either a single wav or a directory of wavs.
Fixes?
Feedback sought?
Try it out; is the API intuitive?
Priority?
medium
Tests added?
A unit test of the evaluation is included, other test ideas are welcome.
How to test?
run
everyvoice evaluate --help
and then follow the instructions on some audio that you have.
Confidence?
medium
Version change?
minor
Related PRs?
N/A