linkml / linkml-runtime

Runtime support for linkml generated models
https://linkml.io/linkml/
Creative Commons Zero v1.0 Universal

Make tests fail, not print #312

Open sneakers-the-rat opened 6 months ago

sneakers-the-rat commented 6 months ago

The output of the tests is pretty difficult to read, but there were lots of places that were saying "ERROR" without failing.

Looking further, it appears as if the default behavior for the TestEnvironment is to print, not fail on error.

There are also many print statements throughout the tests; in a test, a failed check should raise an exception rather than print.
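As a minimal sketch of the pattern being suggested (the function names here are illustrative, not taken from the linkml-runtime suite): a print-on-mismatch check never reaches the test runner, while an assertion does.

```python
# Before: the mismatch is printed and the test still "passes",
# because the test runner never sees the printed ERROR line.
def check_model_output_print(actual, expected):
    if actual != expected:
        print(f"ERROR: {actual!r} != {expected!r}")

# After: a mismatch raises AssertionError, so the runner marks the test failed.
def check_model_output_assert(actual, expected):
    assert actual == expected, f"{actual!r} != {expected!r}"
```

The first form is why a log full of "ERROR" can still come out of a green test run.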

This PR

codecov[bot] commented 6 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 62.75%. Comparing base (33ca663) to head (31a8ad0). Report is 12 commits behind head on main.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #312      +/-   ##
==========================================
- Coverage   62.92%   62.75%   -0.18%
==========================================
  Files          62       61       -1
  Lines        8545     8521      -24
  Branches     2437     2437
==========================================
- Hits         5377     5347      -30
- Misses       2557     2560       +3
- Partials      611      614       +3
```

:umbrella: View full report in Codecov by Sentry.

sneakers-the-rat commented 6 months ago

The errors here come from comparisons against output files that don't exist, so the tests pass trivially. The errors themselves are correct, but generating the missing files should be conditioned on something like a --generate-snapshots flag.
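One way to wire up such a flag, sketched as a hypothetical `conftest.py` (the flag name comes from the comment above; the `snapshot_mode` fixture name and everything else here are assumptions, not the PR's actual implementation):

```python
# conftest.py -- hypothetical sketch of gating snapshot generation behind a flag
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--generate-snapshots",
        action="store_true",
        default=False,
        help="(Re)generate expected-output snapshot files instead of comparing.",
    )


@pytest.fixture
def snapshot_mode(request):
    # True when the user asked to regenerate snapshots rather than compare.
    return request.config.getoption("--generate-snapshots")
```

A test using `snapshot_mode` would write the expected file when the flag is set, and otherwise fail loudly (rather than print) if the expected file is missing.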

So we need to either:

The fact that the tests fail on Windows while the check appears green in the GitHub interface indicates that the tests, as currently configured, don't actually alert us to failures (at least on Windows). I assume that's due to some weirdness in how unittest handles return codes, so we should switch to pytest ASAP.