datalad / datalad-next

DataLad extension for new functionality and improved user experience
https://datalad.org

Robustify tests of `iterable_subprocess` #614

Closed christian-monch closed 8 months ago

christian-monch commented 8 months ago

This PR modifies the test `datalad_next.iterable_subprocess.test_iterable_subprocess.test_success_returncode_available_from_generator_with_exception` to check for a non-`None` return code instead of checking for a `0` return code.

The reason is that a `StopIteration` exception raised before the subprocess has terminated can lead to a non-zero return code, thus failing the test. Some environments might trigger such an early exception, e.g. due to an unexpected stdout configuration. In other words, we cannot guarantee a `0` return code in all environments. But we can guarantee that a return code is available when the `iterable_subprocess` context is exited with an exception.
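The mechanism behind the non-zero code can be shown with the standard library alone (this is an illustration of the POSIX behavior, not the actual test code): a subprocess that is terminated before it exits on its own reports the negated signal number as its return code.

```python
import signal
import subprocess

# Start a long-running child and terminate it early, mirroring what
# happens when the consuming generator stops before the subprocess
# has finished on its own.
proc = subprocess.Popen(["sleep", "30"])
proc.terminate()          # sends SIGTERM on POSIX
returncode = proc.wait()  # reaps the child and records its exit status

# On POSIX the return code is the negated signal number, i.e.
# -SIGTERM == -15; a child that exits normally would report 0 instead.
assert returncode == -signal.SIGTERM
```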

The test is renamed to `test_returncode_available_from_generator_with_exception`.

This should make the test robust enough to prevent this test failure.
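The guarantee the renamed test relies on can be sketched with a hypothetical context manager (the helper below is illustrative, not the actual datalad-next implementation): the subprocess is always reaped on exit, so a return code is available even when the body raises.

```python
import subprocess
from contextlib import contextmanager

# Hypothetical sketch: a context manager that always terminates and
# reaps its subprocess on exit, exceptional or not.
@contextmanager
def managed_subprocess(cmd):
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stdin=subprocess.DEVNULL)
    try:
        yield proc
    finally:
        if proc.poll() is None:   # still running: terminate with SIGTERM
            proc.terminate()
        proc.wait()               # records proc.returncode in any case
        proc.stdout.close()

try:
    with managed_subprocess(["cat"]) as proc:
        raise StopIteration("consumer stopped early")
except StopIteration:
    pass

# Depending on timing the code is 0 (child already exited) or -15
# (child was terminated), but it is guaranteed to be available.
assert proc.returncode is not None
```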

codecov[bot] commented 8 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (`3d0c8a4`) 92.99% compared to head (`d1fc39f`) 92.99%. Report is 1 commit behind head on main.

Additional details and impacted files

```diff
@@           Coverage Diff           @@
##             main     #614   +/-   ##
=======================================
  Coverage   92.99%   92.99%
=======================================
  Files         160      160
  Lines       11863    11863
  Branches     1795     1795
=======================================
  Hits        11032    11032
- Misses        640      642       +2
+ Partials      191      189       -2
```

:umbrella: View full report in Codecov by Sentry.

christian-monch commented 8 months ago

> That rationale is sound. I wonder, though, shouldn't the implementation follow that rationale more closely? Now we are expecting one of two possible return values, rather than any.

> Is there a reason not to test for `is not None`?

I used this comparison before, but I think this case is already covered by the test `test_error_returncode_available_from_generator_with_exception`. The modified test would then not yield any new information, and we might as well delete it.

I think there are only two possible values to assert. On a Linux system, all exceptions that are raised before the subprocess has exited will lead to a `-15` return code (the process is terminated with `SIGTERM`). If `StopIteration` is raised, the subprocess has either already terminated, which results in a `0` return code, or it is still running and will therefore be terminated, which results in a `-15` return code. Any exception other than `StopIteration`, e.g. a `CommandError` because `echo` could not be found, would lead to an early test exit and never reach the assert statement.

In conclusion, I would only expect return codes `0` and `-15` at the assert statement, as well as uncaught exceptions that make the test fail before the assert statement is reached. Having said that, unconventional setups, e.g. where `echo` always exits with return code `1`, might not match this expectation. I see the following options:

  1. Leave the test as is.
  2. Delete the test.

WDYT?