str0zzapreti / pytest-retry

A simple plugin for retrying flaky tests in CI environments
MIT License

Rerun specific failures #8

Closed GektorPrime closed 11 months ago

GektorPrime commented 1 year ago

It would be beneficial to implement logic that allows re-running all failures matching certain expressions and, vice versa, re-running all failures other than those matching certain expressions, e.g. `$ pytest --retries 2 --retry-except AssertionError`

str0zzapreti commented 1 year ago

@GektorPrime So I've been messing around with this a little bit, and the implementation is straightforward for the flaky mark. I have a question specifically about the way you suggested this as a command line option. The values from the command line will always start as strings, so to check whether an exception matches, there are two options: I can either compare the string value to the exception name and see if it matches, or I can retrieve the exception type using the name and then check for type equivalency.

Option one is always going to be more imprecise and I'd like to avoid it if possible (I mean, they're both imprecise to some degree; once you start converting from strings to types you're always risking some kind of mixup). I've seen so many real-world cases of user-defined exceptions that shadow built-in exceptions or shadow other user-defined exceptions.

With option two, for any sort of user-defined exception there needs to be some way of informing the plugin where that exception is defined so it can be retrieved. This involves an extra config step for the user with, say, some kind of definition in a conftest that the plugin can refer back to for user-defined exception types.

And really, all this has me wondering... Is there any practical need for filtering exceptions at a global level like this? I suppose one case would be retrying only the tests that don't fail on an AssertionError. I'm not sure why you'd want to do that, but perhaps you would. Otherwise, the use case for this seems to me to be mainly at the level of individual tests. So do you think implementing command line options for this in addition to the new mark arguments is really worth it? Let me know.
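
For reference, a hypothetical sketch of what per-test filtering via the new mark arguments might look like. The `only_on` argument name here is purely illustrative, not confirmed pytest-retry API:

```python
import pytest

# Hypothetical usage: retry this test up to 2 times, but only when it fails
# with a ConnectionError (the `only_on` keyword is illustrative, not a
# confirmed pytest-retry argument).
@pytest.mark.flaky(retries=2, only_on=[ConnectionError])
def test_fetch_remote_resource():
    ...
```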

P.S. If you did need a global exception filter for all of your tests, it's easy enough to implement this using the pytest_collection_modifyitems hook. Like, that's literally what the plugin would be doing anyway if I did add the command line options, lol.
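
To illustrate, here's a minimal sketch of that conftest approach, assuming the flaky mark accepts a `retries` keyword as used elsewhere in this thread (the retry count and the choice to skip already-marked tests are illustrative):

```python
# conftest.py
import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        # Leave tests alone if they already carry an explicit flaky mark;
        # otherwise apply a blanket retry policy to everything collected.
        if item.get_closest_marker("flaky") is None:
            item.add_marker(pytest.mark.flaky(retries=2))
```

Any global exception filtering would then live in whatever logic decides which items receive the mark.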

GektorPrime commented 1 year ago

At some point the flaky marks started to pile up, so in the current implementation of our framework, for consistency across all environments, we use pytest addopts in pytest.ini to rerun ANY failed test. Generally, we only want reruns for failures within setup steps or the body of the test, avoiding reruns for failed assertions (assuming the flow was not corrupted but the outcome data is miscalculated). This is paired with the conditional retries requested in Conditional retries #9. Having the ability not to perform reruns on other specified exceptions would add more flexibility for us. I'm down for option two and see no issues with an extra step of informing the plugin where an exception is defined so it can be retrieved. A dedicated file with imports of all the user-defined exceptions would probably work.
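
For context, the "rerun any failed test" setup described above might look roughly like this in pytest.ini, using the `--retries` flag shown earlier in the thread (the retry count is illustrative):

```ini
[pytest]
addopts = --retries 2
```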

P.S. The ability to manage a global exception filter is already implemented in the current lib we use, similarly to what's described in this topic, but if it's not planned for your implementation we will definitely take a look at pytest_collection_modifyitems.

str0zzapreti commented 11 months ago

Added in version 1.3.0