I've been using `cypress-split` for some time now, and I've noticed issues with how tests are divided in our main project, which runs approximately 800 tests across 17 machines in parallel. Specifically, some machines finish much more quickly than others, and for some machines there is a significant difference between the test durations estimated by `cypress-split` and the actual time the tests take.
Upon investigating these discrepancies between estimated and actual test durations, I discovered that some of our specs had all of their tests skipped. Let's refer to these as `skipped_specs`.
Here are two important observations:
- Cypress detects these `skipped_specs` and attempts to run them.
- The `results.status.passed` for these specs is 0, meaning they're not counted as passed tests, and their timings are not saved to the JSON file.
As a result, when `cypress-split` encounters a `skipped_spec`, it treats it as a new spec and assigns it the average duration. This leads to a miscalculation in how tests are distributed: because a skipped spec takes far less time than an average test run, some machines finish much earlier than others.
My proposed solution is to include the results for each `skipped_spec` in `timings.json` by detecting the specs that had all their tests skipped (pending). Having the proper timings will let the load balancer work as expected.