Closed: rtolos-bd closed this issue 1 year ago
Some feedback would be greatly appreciated, especially if your vision for this feature is different.
But having a way to disable shuffling only for the tests that require it, while still leaning on the power of this plugin elsewhere, would be a big plus for us.
Romel
pytest-order can already fix the order of tests, and it applies after pytest-randomly. There’s even a documented section on using the two together: https://pytest-order.readthedocs.io/en/latest/other_plugins.html#pytest-randomly
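For reference, a minimal sketch of what that combination looks like (test names here are illustrative): pytest-order pins the relative order of the marked tests, while pytest-randomly keeps shuffling everything else.

```python
import pytest

# These two tests depend on each other, so pin their relative order with
# pytest-order; the rest of the suite is still shuffled by pytest-randomly.
@pytest.mark.order(1)
def test_create_resource():
    ...

@pytest.mark.order(2)
def test_use_resource():
    ...

def test_unrelated():
    ...  # no marker: position is randomized as usual
```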
I’ll add a docs section here about it, rather than merge an extra feature.
Documentation added in #577.
Btw I'm not against ever adding such a marker, if pytest-order isn't suitable. But for now I would prefer to avoid expanding the surface area of pytest-randomly, especially when "fixed order tests" are exactly what this plugin is designed to help you avoid.
Thanks!
Hi Adam,
First off, thank you for taking the time to respond. I am aware of pytest-order and of how it works with pytest-randomly, as we're using it already. Sadly, it doesn't cover all possible scenarios, one of them being parametrized tests. I get that adding such a marker goes against the way pytest-randomly was designed, but the only other option would be to execute these tests in a separate run (with the plugin disabled), which is undesirable.
Can you explain your situation a bit more? It sounds like you don’t intend to fix the inter-test dependencies ever? I’ve not seen a good reason for that before.
Sure,
We're using pytest to test a system closely connected to multiple cloud services.
Due to the nature of the aforementioned system, some of the tests have costly setups and tend to be complex.
A specific example (and the reason we want this functionality in pytest-randomly): we have a test that accepts a list of ~40 entries as a parameter and executes 6 actions for each entry. Sadly, the test is hard to debug due to its complexity. Each parameter entry costs ~15 seconds to set up, and each action generally takes 1 or 2 seconds, which means the total execution time for this test is currently ~1080 seconds (40 params * (15s setup + 12s of actions)).
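For illustration, the current shape of the test is roughly this (names and the setup fixture are simplified stand-ins, not our real code):

```python
import pytest

ENTRIES = [f"entry_{i}" for i in range(40)]  # ~40 entries in practice

@pytest.mark.parametrize("entry", ENTRIES)
def test_all_actions(entry, cloud_setup):
    # cloud_setup stands in for the ~15s per-entry provisioning
    resource = cloud_setup.provision(entry)
    # Six actions of ~1-2s each, all asserted in one big test,
    # which makes failures hard to pinpoint.
    resource.action_1()
    resource.action_2()
    resource.action_3()
    resource.action_4()
    resource.action_5()
    resource.action_6()
```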
For ease of monitoring and debugging we want to split the test into 6 separate tests. We initially tried completely separating the tests, but that considerably increased our execution time (40 params, ~15s setup and ~2s per action for each of the 6 tests => ~7200s), which is too long for us. So we decided to try implementing incremental tests, as that seems to be the least taxing approach for our current use case.

Now, pytest-order orders the tests after pytest-randomly shuffles them, so if we have 6 parametrized tests with the following desired run order:

test_1[param_1] test_2[param_1] ... test_6[param_1]
test_1[param_2] test_2[param_2] ... test_6[param_2]

pytest-randomly + pytest-order would produce something along the lines of:

test_1[param_1] test_1[param_2]
test_2[param_2] test_2[param_1]
...
test_6[param_1] test_6[param_2]

pytest-order ignores parameters, so it orders tests with no regard to them, which makes it useless for our scenario.
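To make the structure concrete, here is a rough sketch of the split tests (simplified names; the order marks are all pytest-order sees, and they say nothing about the parameter):

```python
import pytest

PARAMS = ["param_1", "param_2"]  # ~40 entries in the real suite

@pytest.mark.order(1)
@pytest.mark.parametrize("entry", PARAMS)
def test_1(entry):
    ...  # first step of the incremental flow for this entry

@pytest.mark.order(2)
@pytest.mark.parametrize("entry", PARAMS)
def test_2(entry):
    ...  # assumes test_1 already ran for the same entry

# test_3 .. test_6 follow the same pattern. pytest-order only sees the
# order(N) index, not the parameter, so it cannot keep
# test_1[param_1] .. test_6[param_1] together as a contiguous group.
```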
The idea of using incremental tests is far from optimal, but I think it's the best option currently available to us, given the project's specifics.
A new marker, randomly_dont_reorganize, allows disabling shuffling for selected classes or modules.

Note: due to how the plugin works, disabling shuffling only works for entire classes or modules, not for individual functions (including parametrized tests).

Edit: Resolves #319
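Usage would look roughly like this (a sketch of the proposed marker, applied at class or module level as described above):

```python
import pytest

# Module level: keep definition order for every test in this file.
pytestmark = pytest.mark.randomly_dont_reorganize

# Or class level: only this class keeps its definition order;
# everything else is still shuffled by pytest-randomly.
@pytest.mark.randomly_dont_reorganize
class TestIncrementalFlow:
    def test_step_1(self):
        ...

    def test_step_2(self):
        ...
```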