Other test frameworks and runners typically include this failure information, so that automated tools can process flaky tests in a more structured format than plain-text logs. For example:
<failure message="assert False + where False = bool(0) + where 0 = &lt;built-in method getrandbits of Random object at 0x7f93c1078610&gt;(1) + where &lt;built-in method getrandbits of Random object at 0x7f93c1078610&gt; = random.getrandbits">
def test_random():
> assert bool(random.getrandbits(1))
E assert False
E + where False = bool(0)
E + where 0 = &lt;built-in method getrandbits of Random object at 0x7f93c1078610&gt;(1)
E + where &lt;built-in method getrandbits of Random object at 0x7f93c1078610&gt; = random.getrandbits
test_sample.py:9: AssertionError</failure>
In order to avoid parsers mistaking the <failure> for a deterministic, non-flaky failure, some test runners use <flakyFailure> instead. Even better, some test frameworks go so far as to mark the <testcase> as flaky="true". I'm not sure this library offers that level of control, but if it does, all the better for devs trying to fix flaky tests.
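To make the distinction concrete, here is a sketch of how a report consumer might classify test cases using those conventions. The sample XML and the `classify` helper are hypothetical, written only to illustrate the <flakyFailure> / flaky="true" markers described above:

```python
import xml.etree.ElementTree as ET

# Hypothetical JUnit XML mixing a flaky test and a deterministic failure.
JUNIT_XML = """
<testsuite>
  <testcase classname="test_sample" name="test_random" flaky="true">
    <flakyFailure message="assert False">retried and passed</flakyFailure>
  </testcase>
  <testcase classname="test_sample" name="test_static">
    <failure message="assert 1 == 2">deterministic failure</failure>
  </testcase>
</testsuite>
"""

def classify(testcase):
    # A <flakyFailure> child or flaky="true" marks a retried-then-passed test;
    # a plain <failure> is a real, deterministic failure.
    if testcase.get("flaky") == "true" or testcase.find("flakyFailure") is not None:
        return "flaky"
    if testcase.find("failure") is not None:
        return "failed"
    return "passed"

root = ET.fromstring(JUNIT_XML)
results = {tc.get("name"): classify(tc) for tc in root.iter("testcase")}
print(results)  # {'test_random': 'flaky', 'test_static': 'failed'}
```

Without the structured markers, a parser in this position would have no choice but to count the flaky test's failed attempt as a plain failure.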
Given a flaky test such as:
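The test itself can be reconstructed from the traceback shown earlier; a minimal, self-contained version (the filename test_sample.py comes from the report) would be:

```python
import random

def test_random():
    # Passes or fails at random: getrandbits(1) returns 0 or 1.
    assert bool(random.getrandbits(1))
```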
The JUnit XML output contains no <failure> information when a test case fails and then subsequently passes. It looks like this:
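The record in that fail-then-pass case presumably looks something like the following (the attribute values here are hypothetical; the point is only that the <testcase> carries no failure detail at all):

```xml
<testsuite name="pytest" tests="1" failures="0" errors="0">
  <!-- The earlier failing attempt leaves no trace: no <failure>,
       no <flakyFailure>, no flaky="true" attribute. -->
  <testcase classname="test_sample" name="test_random" time="0.001" />
</testsuite>
```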