Open basak opened 3 years ago
If you put the nursery into a fixture, then you're scoping the entire test to run inside that nursery. So your test isn't just cancelling your Foo's background worker; you're also cancelling your test itself.
I suppose if you really wanted to, you could have the fixture open a nursery + spawn a child task, then have that child task open a second nursery and put the Foo object into the second nursery. Then cancelling the second nursery wouldn't cancel the test itself. I feel like just putting the nursery inside the test might be easier though :-).
On Fri, Apr 09, 2021 at 07:44:15AM -0700, Nathaniel J. Smith wrote:
> If you put the nursery into a fixture, then you're scoping the entire test to run inside that nursery. So your test isn't just cancelling your Foo's background worker; you're also cancelling your test itself.
Thanks. I didn't realise that's what was happening.
> I suppose if you really wanted to, you could have the fixture open a nursery + spawn a child task, then have that child task open a second nursery and put the Foo object into the second nursery. Then cancelling the second nursery wouldn't cancel the test itself. I feel like just putting the nursery inside the test might be easier though :-).
Yes - I'll just put the ~~test inside the nursery~~ nursery inside the test [!] :)
Wouldn't you consider this surprising behaviour though? Ideally, a fixture would be able to create a nursery that a test could cancel without cancelling the test, no?
I wonder if it would be possible to put the logic you describe into a decorator and ship that as part of pytest-trio, or similar.
You could certainly make a fixture that implemented that nested nursery logic I mentioned. I'm not sure if it would be very popular in general, though? I've never found it too onerous to write async with open_nursery() as nursery: inside my tests when I wanted a more narrowly scoped nursery... it's only one line :-). And when I'm testing cancellation I personally like to write an explicit cancel scope inside the test, so the test body explicitly shows what's being cancelled and what isn't.
BTW, the advantage of the test being inside the nursery is that if some background task running in the nursery raises an unhandled exception, you want that to cancel the test and give you a test failure. I guess you'd still have that property even with the double-nested-nursery trick, because an unhandled exception will propagate out and cancel both nurseries.
> I've never found it too onerous to write async with open_nursery() as nursery: inside my tests when I wanted a more narrowly scoped nursery... it's only one line :-)
The reason is that I'm putting complexity into a fixture (setting up fakes, etc). For tests that don't test cancellation, it's then really convenient to use that fixture (this being, AIUI, the entire purpose of fixtures). Because the fixture sets up first, I can't both create a nursery in the test and use the fixture (with that nursery). This is why I'm having my fixture create the nursery - or in the common case, I can just use the built-in nursery fixture, as I'm never cancelling it so it is safe to use that one. But then I have the odd test to test cancellation behaviour, and then I can't use the fixture I already have. I have to duplicate what that fixture does in the test.
I suppose I could factor out most of what the fixture does into a separate function that ~creates the nursery and~ sets up the fakes, etc. Then a fixture (that creates the nursery) could call that for use in my common non-cancellation case, and my cancellation behaviour test could avoid using the fixture and use my factored out function directly as an exception, setting up its own nursery. This is probably what I'll end up doing for now.
A pattern I've been using is handing a class instance a nursery when it is constructed, so that it can then run background tasks. Things that are working well include background tasks performing cleanup when they receive trio.Cancelled. However, I'm struggling to write a test that uses the fixture, cancels the nursery given to the class instances, and asserts that the cleanup was performed correctly. The docs say:
...and this is what is biting me here. Here's a reduced example:
I notice that in newer releases, the test fails with
RuntimeError: <fixture 'foo_fixture'> cancelled the test but didn't raise an error
which is nice (in older releases the test silently passes). The behaviour is therefore as documented, but that leads me to the problem: how do I test cancellation behaviour in an object created by a fixture?
Workaround: I can copy the fixture code into a test. But this seems ugly as it requires duplication, precluding code reuse.