Open joshgamache opened 6 months ago
A POC branch was run using Nx Atomizer to split up the e2e tests. The result was that e2e was flakier than before, with errors appearing more regularly. A potential cause is the extra overhead of restarting/resetting Playwright and running so many tests in parallel. Additionally, once the tests are split and matrixed, each job needs to spin up its own development server to run the tests against.
Will explore further to see if there are any benefits we can draw out of Nx with regards to e2e.
Would this be helped by Nx Replay caching? Part of this card: https://github.com/bcgov/cas-reporting/issues/232
Iceboxing reason: When this was initially written and the POC was made, E2E tests had a tendency to be flaky. Enhancements and refinements to the E2E suites by the team have since greatly decreased flakiness, so the issues found with the splitting implementation (particularly increased flakiness) make it an overall detriment with no offsetting benefit. Time spent resolving this would be better spent elsewhere.
If we find a big increase in E2E time, this might be worth exploring again.
Description:
e2e tests currently work, but could they work **better?** This ticket aims to address that question. We should be able to use Nx Atomizer, an automatic e2e test splitter. It breaks e2e runs into smaller pieces, allowing CI to rerun only the flaky tests and to surface errors faster.
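As a rough sketch of how this could be wired up (assuming the project uses Playwright via the `@nx/playwright` plugin; the target names here are illustrative), Atomizer is enabled by giving the plugin a `ciTargetName` in `nx.json`, which generates one CI target per spec file that can then be distributed or matrixed:

```json
{
  "plugins": [
    {
      "plugin": "@nx/playwright",
      "options": {
        "targetName": "e2e",
        "ciTargetName": "e2e-ci"
      }
    }
  ]
}
```

With this in place, `nx run <project>:e2e-ci` would run the atomized targets, while the plain `e2e` target keeps the current single-run behavior.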
Dev story:
I want to use Nx Atomizer because it automatically splits E2E tests, making it easier to identify and rerun flaky tests and enabling parallelization.
Development Checklist and tasks:
Consideration and notes
Definition of Ready (Note: If any of these points are not applicable, mark N/A)
- [ ] User story is included
- [ ] User role and type are identified
- [ ] Wireframes are included (if required)

Definition of Done (Note: If any of these points are not applicable, mark N/A)