Closed heidimhurst closed 2 months ago
I guess you are using `pytest-dependency` markers, so the test will be skipped if it cannot be ordered. If using `pytest-order` to order the tests, it would not be skipped.
So, in your example, would you expect `test_a` to fail if it would otherwise be skipped (e.g. due to a dependency marker), or also if the ordering could not be done using `order` markers? The latter will not skip any tests, but might of course result in failing tests if the order matters for test execution...
Hi @mrbean-bremen - we aren't using `pytest-dependency`, just `pytest-order`, so our tests are decorated like:
```python
import pytest

@pytest.mark.order(1)
def test_foo():
    ...
```
In the example I provided, I'd expect there to be some way to (optionally, perhaps?) fail loudly if not all tests can be executed in accordance with the `order` markers provided. This would help prevent silent failures.
Ok, so you are using relative order markers (e.g. `before` or `after` markers), right? I'm still unclear why you write that tests are skipped in this case.
I understand that you want the tests to fail, my question was if you want the test that could not be ordered to fail, or something else (like failing all tests).
Yes, we are using relative order markers @mrbean-bremen. I think tests are skipped because the relative order we've specified isn't possible.
I think probably having the tests that could not be ordered fail would make the most sense 🤔 but open to having the whole process fail during setup if that makes more sense.
Ok - tests are not skipped by pytest-order if they cannot be ordered (this is what pytest-dependency does), but I will have a look if I can make them fail as an option.
Huge, thank you so much @mrbean-bremen
@heidimhurst - I added the option `--error-on-failed-ordering` in main, please check if this is what you need.
Huge, that's exactly what I was hoping for @mrbean-bremen! Can't thank you enough for the quick turnaround!
Ok, I'll make a new release in this case.
At present, users can create logically incompatible orderings (e.g. A after B, A before C, but C before B). This results in silent skipping of the offending test, with only a warning in the log.
Unfortunately this can be quite a silent/subtle error. It would be awesome if there were a flag we could add to ensure that an error is raised (e.g. tests fail) if not all tests can be run.
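The incompatible ordering described above could be written, schematically, like this (a sketch with hypothetical test names; each marker individually looks valid, but together they form a cycle):

```python
import pytest

# "A after B": test_a must run after test_b
@pytest.mark.order(after="test_b")
def test_a():
    ...

# "A before C", expressed equivalently: test_c must run after test_a
@pytest.mark.order(after="test_a")
def test_c():
    ...

# "C before B", expressed equivalently: test_b must run after test_c
# -> cycle: test_b < test_a < test_c < test_b, which cannot be satisfied
@pytest.mark.order(after="test_c")
def test_b():
    ...
```

No individual marker is wrong here, so the conflict only surfaces when the plugin tries to compute a total order for the whole module.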