jessebrennan opened this issue 4 years ago
This is a good idea, but I wouldn't feel qualified to decide, since this issue is based on @hannes-ucsc's request.
Conversation during standup: the proposed solution is okay, but has the caveat that it would become harder to debug tests that time out and are killed by the test runner (and not unittest). Looking for another solution, but if I can't find one, `-b` might be fine.
@hannes-ucsc After digging into this, I've found two main options. The first is to pass the `-b` flag, which buffers all output from tests and only prints test output in the event of a failure. There are two immediate caveats with this approach:
I believe that the first option is the best option. It is, at least seemingly, less complex to implement and enforce.
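As a minimal sketch of what the first option does (the test class here is illustrative, not from the Azul code base), `buffer=True` on the runner is the programmatic equivalent of the `-b` command-line flag: output from a passing test is captured and then discarded.

```python
import io
import unittest

class Example(unittest.TestCase):
    """Illustrative test case, not from the Azul code base."""

    def test_passes(self):
        # Under buffering, this output is captured per test and replayed
        # only if the test fails or errors; on success it is discarded
        print('noise from a passing test')

# buffer=True is what the -b command-line flag sets on the runner
suite = unittest.TestLoader().loadTestsFromTestCase(Example)
stream = io.StringIO()
result = unittest.TextTestRunner(stream=stream, buffer=True).run(suite)

assert result.wasSuccessful()
assert 'noise' not in stream.getvalue()  # the print never made it out
```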
In the first option, output buffering could be made configurable, so that it could be removed in a `drop!` commit in a PR that is having problems. In that case, debugging information would only be lost for tests that fail stochastically due to timeouts. (I can think of a couple of ways to mitigate that last issue: conditionally exempting some tests from output buffering, manually setting a timeout on certain tests, etc.)
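One way that configurability could look, as a sketch only (the environment variable name `TEST_OUTPUT_BUFFERING` is made up for illustration, not from the Azul code base):

```python
import os
import unittest

def buffer_enabled(env=None) -> bool:
    # Hypothetical knob: buffering is on by default, and a drop! commit
    # (or a local shell) can set TEST_OUTPUT_BUFFERING=0 to switch it off
    env = os.environ if env is None else env
    return env.get('TEST_OUTPUT_BUFFERING', '1') != '0'

# The test entry point would then pass the flag through, e.g.:
#   unittest.main(buffer=buffer_enabled())
```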
I don't want to buffer test output. It may be easier to implement, but that doesn't outweigh the hassle of having to switch it on and off with drop commits when tests hang, or when I want to follow along on GitLab or GitHub (which I do all the time to get an idea of the timing). When child processes write to stdout or stderr, their output is not buffered and appears out of context in the case of a failure. Last but not least, `-b` doesn't work with PyCharm, which many of us use extensively to run and debug tests. I'd prefer not to see tracebacks for expected exceptions when running tests in PyCharm.
I don't quite follow the complexity argument. Complexity isn't proportional to effort. It may be more tedious to track down every expected exception, but that doesn't make the solution inherently complex or complicated. Did you attempt to implement the context manager option so we can actually compare the complexity of these approaches? If not, I'd like you to go ahead and do that.
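For comparison, a minimal sketch of what the context-manager option could look like (my illustration, not an implementation from the code base): output is captured only around the statements expected to produce it, and everything else stays visible on the console.

```python
import io
import sys
from contextlib import contextmanager

@contextmanager
def captured_output():
    # Capture stdout and stderr only for the enclosed block, so expected
    # noise (e.g. a traceback printed for an expected exception) is hidden
    # while the rest of the test's output still reaches the console
    out, err = io.StringIO(), io.StringIO()
    old_out, old_err = sys.stdout, sys.stderr
    sys.stdout, sys.stderr = out, err
    try:
        yield out, err
    finally:
        sys.stdout, sys.stderr = old_out, old_err

# Usage inside a test:
with captured_output() as (out, err):
    print('expected noise')
assert 'expected noise' in out.getvalue()
```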
Look at how `self.assertLog` works.
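Assuming this refers to unittest's built-in `assertLogs` (Azul may have its own variant), the pattern looks like this:

```python
import io
import logging
import unittest

class LogExample(unittest.TestCase):

    def test_expected_warning(self):
        # assertLogs installs a capturing handler on the named logger and
        # disables propagation for the duration of the block, so the
        # expected message never reaches the console and can be asserted
        # on via cm.output afterwards
        with self.assertLogs('example', level='WARNING') as cm:
            logging.getLogger('example').warning('transient error, retrying')
        self.assertIn('retrying', cm.output[0])

suite = unittest.TestLoader().loadTestsFromTestCase(LogExample)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
assert result.wasSuccessful()
```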
@jessebrennan Would it be sufficient to pass the `-b` flag to unittest in the Makefile, such that stdout and stderr are buffered for all tests and only printed in the event of a failure (i.e., passing tests are altogether silent)? The only consequence I can anticipate is that this approach would also suppress warnings, but given that unexpected warnings should result in a test failure (if I am reading azul_test_case.py correctly), I don't see that as a problem.
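A sketch of the behaviour attributed to azul_test_case.py above (my illustration of the general pattern, not the actual Azul code): a base test class that promotes warnings to errors, so `-b` suppressing warning output would cost nothing.

```python
import io
import unittest
import warnings

class StrictTestCase(unittest.TestCase):
    """Illustrative base class, not the actual azul_test_case.py."""

    def setUp(self):
        super().setUp()
        # Promote every warning to an exception for the duration of the
        # test; catch_warnings restores the filter state afterwards
        ctx = warnings.catch_warnings()
        ctx.__enter__()
        self.addCleanup(ctx.__exit__, None, None, None)
        warnings.simplefilter('error')

class Demo(StrictTestCase):

    def test_warning_becomes_error(self):
        # An unexpected warning would now fail the test; here we assert
        # the escalation explicitly
        with self.assertRaises(UserWarning):
            warnings.warn('unexpected')

suite = unittest.TestLoader().loadTestsFromTestCase(Demo)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
assert result.wasSuccessful()
```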