Open · unidual opened this issue 4 years ago
The resource waste of --repeat without stopping would only be high if the flaky tests use significantly more resources than the passing ones (especially since the passing ones would always have to repeat N times). Hopefully this isn't the case for your test suite in general.
Would the semantics of --gtest_break_on_failure be sufficient for your intended use case (stop running gtest-parallel when any test fails)? I'm a bit hesitant to add flags that don't have corresponding --gtest ones. I can see --gtest_break_on_failure being useful locally, as you can start looking into the first detected failure without having to ctrl+c to abort execution.
Context: running a handful of tests a lot of times. If one of the tests is flaky at 3%, the first failure will appear on average around the 33rd try. With 1000 iterations, that's 967 more than needed!
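As a back-of-the-envelope check of that figure, here is a minimal sketch in plain Python (nothing gtest-specific): a test that fails independently with probability p per run hits its first failure after about 1/p runs on average.

```python
# Expected repetition of the first failure for an independently flaky test
# (geometric distribution): mean = 1 / p.
p = 0.03                       # 3% flake rate, as in the example above
repeats = 1000                 # configured number of repetitions
expected_first_failure = 1 / p
print(round(expected_first_failure))            # ~33rd try on average
print(round(repeats - expected_first_failure))  # ~967 repetitions past the first failure
```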
--gtest_break_on_failure is not an ideal fit, since we still want to detect all tests failing at least once.
I agree this is not specific to gtest-parallel. Opened https://github.com/google/googletest/issues/2645
Thanks, do let me know what the outcome of that is, as I'm more inclined to consider upstream flags. I was thinking more that if you run 100 tests 100 times and only one of them is flaky, stopping after that 33rd run only lowers the total number of iterations from 10000 to 9967, at which point the savings are less than 1%.
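For the multi-test scenario, a similar rough sketch (the exact total depends on how the bookkeeping is done when only the flaky test stops early, but the conclusion that the savings stay under 1% holds either way):

```python
# 100 tests repeated 100 times; one test is flaky and fails around its 33rd run.
tests, repeats, first_failure = 100, 100, 33
baseline = tests * repeats                          # 10000 iterations with no early stop
early_stop = (tests - 1) * repeats + first_failure  # only the flaky test stops early
print(baseline, early_stop)
print(100 * (baseline - early_stop) / baseline)     # well under 1% saved
```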
Rationale: when tracking flaky tests, --repeat is useful but can be wasteful (especially in continuous integration). In most cases, it would be enough to stop at the first failure.

Could we add a flag for this behavior, e.g. --stop-repeat-after-failure?
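For illustration only, a rough sketch of the requested per-test behavior written as an external wrapper rather than an existing gtest-parallel option (the binary path and test names are hypothetical; --gtest_filter is a standard googletest flag): repeat each test up to N times, but stop repeating a given test once it has failed, so every flaky test is still caught while redundant iterations are skipped.

```python
import subprocess

BINARY = "./my_tests"                # hypothetical test binary
TESTS = ["Suite.Foo", "Suite.Bar"]   # hypothetical test names to track
REPEATS = 1000

for test in TESTS:
    for i in range(1, REPEATS + 1):
        # Run a single test in isolation; --gtest_filter is a standard googletest flag.
        result = subprocess.run([BINARY, f"--gtest_filter={test}"], capture_output=True)
        if result.returncode != 0:
            print(f"{test} failed on repetition {i}; stopping further repeats of it")
            break
    else:
        print(f"{test} passed all {REPEATS} repetitions")
```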