Closed ChaseBig closed 3 years ago
How parallelization works is that it runs all spec files in isolation. So if one spec file relies on a previous spec file running first, then your tests could fail in unexpected ways. Just expect each spec file to run in a random order (although it's slightly more sophisticated than that).
I'm a little confused by your description of how your setup 'broke parallelization functionality', but with your setup you'll need every spec file to be able to run independently.
If you need some piece of code to run before the entire run or before each spec, you may want to look at our experimentalRunEvents. But again, I'd need a more concrete example of what you're trying to do exactly to recommend an approach.
I think I understand what you're saying. Basically, trying to parallelize the looping test file wouldn't work because it would screw up the index when running across multiple machines. So I need to find a new approach.
Is it possible that I could use one single file to read data from the fixture file, then have a temp test file generated for each block of data in the fixture, so that tests could still be run in parallel, but only after the temp test files are created?
You may want to generate all of this outside of Cypress before calling cypress run. So you may need to just write something by hand using Node.
Closing as resolved.
If you're experiencing a bug similar to this in Cypress, please open a new issue with a fully reproducible example that we can run. There may be a specific edge case with the issue that we need more detail to fix.
@ChaseBig did you ever resolve this or find a workaround? I'm having this exact same setup, and pretty much the exact same issue with Cypress parallelization, and I'm unsure how to proceed.
Background: I have been tasked with writing an automated test suite to test the UI of an e-commerce interface. We're essentially building a tool that e-commerce sites can use to inject a UI container object onto any webpage. My task is to build an E2E test framework to verify that the UI container object gets injected and rendered correctly on over 100 different web pages. My first-draft project had an individual spec file for each individual web page, with only minor differences between each test spec. This resulted in a ton of copy-pasted spec files and ultimately became an incredibly inefficient and unmaintainable solution.
My second-draft refactor was to write a single test file that loops over data sourced from a Cypress fixture using cy.fixture(). This is working PERFECTLY and I am very happy with how well Cypress was able to handle supporting my use case. But there is one major problem: parallelization.

Issue: Using the cy.fixture() command, my test spec file reads data from a JSON data file and passes it into the test spec. Using this looping method has seemingly broken parallelization functionality.

Below you'll find a few screenshots. One is a comparison view of how tests were being executed before and after the refactor. You can see that only one machine is being used to run the looped spec file, whereas the individual test files are run in parallel.
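For reference, a hypothetical sketch of such a looping spec (fixture path and fields are invented, and this only runs inside the Cypress runner):

```javascript
// A sketch of the looping pattern. Loading the fixture with require() at
// module load time (instead of the asynchronous cy.fixture()) means every
// it() block exists when Cypress collects the tests.
const pages = require('../fixtures/pages.json'); // hypothetical fixture path

describe('UI container injection', () => {
  pages.forEach(({ url, selector }) => {
    it(`injects and renders the container on ${url}`, () => {
      cy.visit(url);
      cy.get(selector).should('be.visible');
    });
  });
});
```

Either way, because Cypress's parallel load balancer assigns work per spec file, all of the generated it() blocks still run on the one machine that picks up this file.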
I have since skipped all of the individual first-draft test specs now that the loop refactor is working locally.
I have also sanitized the example spec and fixture files below to a shortened version with dummy data.
Looped Spec File Example:
PARENT_one-time.spec.js
JSON Data Fixture File:
Here's what the GitLab cypress run output looks like when running two looped test specs and skipping the first-draft individual spec files.
How do I tweak the parallelization configuration to allow these looped cases to be run as individual test specs?