Closed (shaunc closed this issue 1 year ago)
It seems like I would need a way to cancel & recreate the timer, which would be great! I see the debugging code can cancel the timer ... how about the restarting step?
Hum, I don't think you can do that right now. You might be able to by playing with the internals. E.g. the `pytest_timeout_set_timer` hook sets the `cancel_timeout` attribute on `item`, but that's not really enough.
You could probably provide an overriding `pytest_timeout_set_timer` hook to be able to do this. You may have to fiddle with the `trylast` subtleties of pluggy to get this to work.
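For illustration, the "cancel & recreate" step at the threading level might look like the sketch below. The class and method names are hypothetical, not part of pytest-timeout's API; an overriding hook would need to keep a handle like this around (e.g. on the item) so the timer can be cancelled and recreated later.

```python
import threading

# Hypothetical sketch (not pytest-timeout API): a timer that supports the
# "restarting step" requested above, via cancel-and-recreate.
class RestartableTimeout:
    def __init__(self, seconds, on_timeout):
        self.seconds = seconds
        self.on_timeout = on_timeout
        self._timer = None

    def start(self):
        self.cancel()  # never leave two timers running at once
        self._timer = threading.Timer(self.seconds, self.on_timeout)
        self._timer.daemon = True
        self._timer.start()

    def cancel(self):
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

    def restart(self):
        # the "cancel & recreate" feature: throw away the old timer,
        # arm a fresh one with the full budget
        self.cancel()
        self.start()
```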
My "standard" answer to this kind of issue has been to say that this is out of scope though. I tend to advocate that timeouts are a last-resort kind of thing and that you should just set them to e.g. 30 minutes if that's what it takes to compile things. The point is that CI at some point will give up and give you a stacktrace if it really did get stuck.
Thanks for the reply.... well let me take a closer look at the internals.
I am writing/maintaining an ML library. I have a `mark.slow` and a `mark.x_slow`. I like to have my test suite divided up into short tests I can have running in the background that show me the code is wired correctly, and longer tests that check statistics etc. that I can run overnight and/or using CI. I like to use timers to catch tests that shouldn't be in the "short" suite (or at least need "permission"): sort of a "test for tests", so I can write them quickly and tune them as necessary. (I do check coverage at least, and aim for full coverage using the "short" suite. But checking some things requires non-trivial computing cycles.)
But if a test randomly spends time compiling, depending on execution order, I don't want to bump its timeout. OTOH, blanket-increasing the timeout for all of them makes the timeout useless for this purpose.
On Tue 26 Sep 2023 at 20:24 -0700, Shaun Cutts wrote:
Sounds like maybe you want to do your own timings, without pytest-timeout, using existing pytest hooks. You can then create a report and flag any tests that are taking too long. Or even inject a test and a failure, I guess.
You can make this much nicer and simpler than trying to use pytest-timeout for this purpose I think. pytest plugins are not that scary and can be very simple.
So far, it seems unlikely to me that this should be a feature of pytest-timeout directly.
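To illustrate the "do your own timings" suggestion, here is a minimal sketch of the reporting logic such a plugin would need, independent of pytest itself. The suite names and budget numbers are made up; in a real plugin the durations would come from pytest's report objects and the suite from the test's marks.

```python
# Illustrative budgets per suite; None means "no budget enforced".
# These names and numbers are assumptions, not from the thread.
BUDGETS = {"short": 1.0, "slow": 60.0, "x_slow": None}

def flag_over_budget(timings):
    """Flag tests that ran longer than their suite's budget.

    timings: iterable of (test_name, suite, duration_seconds).
    Returns a list of (test_name, suite, duration, budget) offenders.
    """
    offenders = []
    for name, suite, duration in timings:
        budget = BUDGETS.get(suite)
        if budget is not None and duration > budget:
            offenders.append((name, suite, duration, budget))
    return offenders
```

A plugin could run this at session end and fail the run (or just print a report) if any "short" test blew its budget: the "test for tests" idea, without per-test timers.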
I would like to disregard numba JIT compilation time during testing. As compilation depends on which tests are executed and in what order, it is hard to ensure it is all relegated to fixtures (for example).
Using the numba event API, I can be notified of compilation start and end.
If I create a listener in conftest.py with access to `item`, is there some way I can get at the timer for the test, to pause it during compilation when I am notified of a compilation event?
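As a sketch of what "pausing" could mean here, assuming a custom timer is used rather than pytest-timeout's own internals: a deadline object whose `pause()`/`resume()` a compilation listener could call on its start/end events, so that compile time is not charged against the test's budget. Everything below is hypothetical, not an existing API.

```python
import threading
import time

# Hypothetical pausable deadline: pause() on compilation start,
# resume() on compilation end. Paused time does not count against
# the remaining budget.
class PausableDeadline:
    def __init__(self, seconds, on_timeout):
        self._remaining = seconds
        self._on_timeout = on_timeout
        self._timer = None
        self._started_at = None
        self._lock = threading.Lock()

    def start(self):
        with self._lock:
            self._started_at = time.monotonic()
            self._timer = threading.Timer(self._remaining, self._on_timeout)
            self._timer.daemon = True
            self._timer.start()

    def pause(self):
        # e.g. called from a compilation listener's on_start:
        # stop the clock and remember how much budget is left
        with self._lock:
            if self._timer is None:
                return
            self._timer.cancel()
            self._timer = None
            self._remaining -= time.monotonic() - self._started_at

    def resume(self):
        # e.g. called from the listener's on_end:
        # re-arm the timer with only the remaining budget
        self.start()

    def cancel(self):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
                self._timer = None
```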