**Closed** — CarlosCancino-Chacon closed this 10 months ago
Attention: 2 lines in your changes are missing coverage. Please review.

Comparison is base (`8ee8ba9`) 85.19% compared to head (`1550426`) 85.23%.

| Files | Patch % | Lines |
|---|---|---|
| tests/test_utils.py | 92.59% | 2 Missing :warning: |
Hey, nice correction. I have only a minor comment on the if clause in the tests: I don't think it makes much sense to make the tests conditional. For the automatic tests we should definitely verify that miditok is installed and that the test works, instead of skipping it; in the long term it is safer not to skip. For local tests it is fine to just let it fail, maybe with a message that miditok is not installed. What do you think?
I actually think that it is not very elegant to let local tests fail. In the particular case of import errors (for packages imported at the top of the file), none of the tests in the file run, i.e., this is not reported by unittest as a test "failing" but as an error. In the particular case of the miditok tests, none of the other tests (e.g., the tests in `TestGetMatchedNotes`, `TestGetTimeMapsFromAlignment`, etc.) are actually run. This is, in my opinion, worse than checking whether a package is installed and conditionally running the tests.
Furthermore, we already have other conditional tests (for example for `test_musescore`).
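The conditional pattern described above could be sketched with `unittest.skipUnless`; this is a minimal illustration, not the actual code in the PR, and the class and test names are hypothetical:

```python
import unittest

# Probe for the optional dependency here, instead of importing it
# unconditionally at the top of the module (which would abort the
# whole test file with an import error when miditok is absent).
try:
    import miditok  # noqa: F401
    HAS_MIDITOK = True
except ImportError:
    HAS_MIDITOK = False


@unittest.skipUnless(HAS_MIDITOK, "miditok is not installed")
class TestTokenize(unittest.TestCase):
    def test_tokenize(self):
        # Placeholder assertion; real tests would exercise the tokenizer.
        self.assertTrue(HAS_MIDITOK)
```

With this approach the rest of the test file still runs when miditok is missing, and the skipped test is reported explicitly with its reason.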
I see the issue. What I would do is check whether miditok is installed and, when it is not, assert something false so that the tests fail with a message. To me the online tests are the most important, so that we can ensure the robustness of the package in the long term, when mistakes can happen and the workflow could change for any reason. I don't have a very strong opinion on this, so doing it the way you proposed is also fine by me.
Hi! This conversation fell a bit silent, so let's try to find a solution. In my opinion this solution does exactly what we should do with optional libraries. Our required libraries can be used anywhere (though they should be specified in `requirements.txt`), but for the others we have to create workarounds. Optional libraries include:
I don't really know what the issue is with checking for an installed library; on the CI machine these libraries should be installed, no? Well, with the exception of MuseScore, but the optional Python libraries should be. So the tests run and all is good.
Yes, don't get me wrong, I completely agree. I was thinking about long-term problems, such as somebody editing the workflow without taking optional libraries into consideration, so that a potential problem goes unnoticed. Apart from that edge case, everything should work fine.
True, I guess there is not that much room for error in the CI workflow, and we have to be careful when changing something. Do you want to approve this PR, or should I?
This pull request closes Issue #337.
It adds the following:
- `test_tokenize1`