Closed: francoiscampbell closed this 6 months ago
@francoiscampbell Out of product-and-pricing curiosity, is the “and being billed for them” aspect the only significant downside to uploading all tests including those that passed? Or are there other reasons too?
There's a lot of value we can offer today, and will offer in the future, that relies on full awareness of the test suite (for example, flaky test detection needs to see that a test both passed and failed without code changes in between). We'd love to get more feedback on how pricing fits into that. Perhaps we can get in touch privately to hear your thoughts?
I replied to you privately, but also posting a summary here for public discourse:
We're not interested in the deeper analytics of successful tests, since improving things like slow tests doesn't move the needle on CI time as much as other improvements we can make. We also didn't find the flaky test detection too useful because of how it works internally (we're providing this feedback to other Buildkite team members as well).
Because of the size of our test suites, sending each and every passing test is cost-prohibitive. We'd be able to use Test Analytics more if we sent only failing tests, since that data would be much more actionable for us.
After discussing with the Buildkite Test Analytics team, this option isn't compatible with the roadmap for the product.
In our use-cases, we're not interested in looking at successful test cases (and being billed for them), so we'd like an option to only send failures to the API.
This PR also fixes the `fake_example` helper, which was ignoring the `status` argument and always creating failing examples. I've tested this in our org and confirmed that it indeed only sends failed tests.
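For illustration, here is a minimal sketch of the kind of bug described in that helper. The actual `fake_example` lives in the collector's test suite; everything here beyond the `fake_example` name and its `status` argument is an assumption for the sake of the example:

```ruby
# Hypothetical stand-in for the collector's example object:
# it just records a status and can report whether it failed.
FakeExample = Struct.new(:status) do
  def failed?
    status == :failed
  end
end

# Buggy version of the helper: the status keyword argument is
# accepted but ignored, so every example comes out as :failed.
def fake_example_buggy(status: :failed)
  FakeExample.new(:failed)
end

# Fixed version: the caller-supplied status is actually used,
# so tests can create passing examples too.
def fake_example(status: :failed)
  FakeExample.new(status)
end
```

With the buggy helper, `fake_example_buggy(status: :passed)` still produces a failed example, which is why a "send failures only" filter could not be exercised against passing examples until the helper was fixed.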