Hi @jmr0, are you running your builds in parallel on CI by any chance? Also, could you let me know which CI provider you're using? Thanks!
Hey @djones -- we're not using the Cypress parallel flag, but we do have various Cypress processes running like so: percy exec -- cypress run --spec some/subset/of/our_tests. We are running these on Codeship, so each of these scripts runs in its own Docker container and should be isolated that way. I had previously integrated with https://github.com/percy/percy-capybara successfully on this CI setup, if that helps. Thanks for the quick reply!
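Roughly, each container runs something like the following (the spec globs here are just placeholders for our real test subsets):

```sh
# Container 1 (hypothetical spec subset)
percy exec -- cypress run --spec "cypress/integration/subset_a/**/*"

# Container 2 (hypothetical spec subset)
percy exec -- cypress run --spec "cypress/integration/subset_b/**/*"
```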
ok thanks for that info. The general architecture I think you'll want is something like this:
Your CI run starts and, in your case, you've got several Docker containers running to split up your Cypress tests. This is what I've called a "Test Runner" in the diagram. Since you're using Codeship, a fully Percy-supported CI provider, you will not need to set PERCY_PARALLEL_NONCE, as we've figured that out for you. And you must already be setting PERCY_TOKEN correctly, so that bit is all good.
The one change you'll need to make is to set PERCY_PARALLEL_TOTAL to -1. This tells Percy: "hey, we're going to create a bunch of builds in Percy with the same nonce, but we don't know exactly how many will run."
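As a rough sketch, each test-runner container would export something like this before invoking Percy (how exactly you set env vars depends on your Codeship Pro config, and the spec path is just your existing command):

```sh
# -1 tells Percy the total number of parallel builds is not known up front
export PERCY_PARALLEL_TOTAL=-1

# PERCY_TOKEN is assumed to already be set in the container's environment
percy exec -- cypress run --spec some/subset/of/our_tests
```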
Finally, to tell Percy that all your builds are 100% done, you need to add a finalize-all step at the end of your workflow, once you know that all test runners have completed. You can simply install @percy/agent and call percy finalize --all for this step. Since you're using Docker, you're welcome to pull down this image, https://hub.docker.com/r/percyio/agent/, which has @percy/agent pre-installed, and run percy finalize --all from it.
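A sketch of that finalize step, assuming PERCY_TOKEN is available in the environment where it runs:

```sh
# Option 1: install the agent and finalize directly
npm install --save-dev @percy/agent
npx percy finalize --all

# Option 2: use the prebuilt image (assuming the image exposes the percy CLI)
docker run -e PERCY_TOKEN percyio/agent percy finalize --all
```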
Let me know if you have any questions and definitely report back if you get it working.
Thanks, David.
Hi @jmr0, just checking in. How's it going?
Hey @djones, thanks for getting back to me on this one! I've been out of the office for the holidays but will let you know ASAP next week if I get it to work with this setup :smiley:
Hi @djones -- sorry I'm getting to this just now. I gave this a shot and the finalize step resulted in this:
StatusCodeError 409 - {"errors":[{"status":"conflict","detail":"Can only set all_shards: true for parallel builds"}]}
while each parallel build seems to have succeeded and uploaded independently, e.g.:
2018-12-14 02:06:43 [percy] stopping percy...
2018-12-14 02:06:43 [percy] waiting for 6 snapshots to complete...
2018-12-14 02:06:43 [percy] done.
2018-12-14 02:06:43 [percy] finalized build #10912: <build url here>
I'm probably missing some obvious setting here!
@jmr0 Our SDK relies on Codeship providing a CI_NODE_TOTAL environment variable, but I don't see that in their docs right now. To help troubleshoot this, could you try providing an env var of PERCY_PARALLEL_TOTAL, set it to the number of parallelized build processes you have, and let us know the results please?
Actually, sorry -- if you could try setting PERCY_PARALLEL_TOTAL to -1, that would be a better option for what you're trying to do.
@timhaines ah I did try setting that as djones suggested above. Is there a message I should be seeing indicating it's set correctly?
Hmm, either PERCY_PARALLEL_TOTAL or Codeship's own CI_BUILD_NUMBER isn't making it into the environment where the command is being run. Could you try logging both of those where you're executing the command, to see which is missing? That would help with troubleshooting getting them into your environment.
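Something as simple as this, right before the percy exec call in each container, would show which one is missing:

```sh
echo "PERCY_PARALLEL_TOTAL=${PERCY_PARALLEL_TOTAL}"
echo "CI_BUILD_NUMBER=${CI_BUILD_NUMBER}"
```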
Actually, it looks like Codeship Pro doesn't support CI_BUILD_NUMBER. The docs for Pro don't list it, though they do show CI_BUILD_ID. That could be the reason this is breaking.
Hey @timhaines -- looks like that was it. I finally got everything to work through a combination of CI_BUILD_ID and setting PERCY_PARALLEL_TOTAL to 2, since I know how many builds to expect. Thanks a lot for looking into this one! Happy to put a PR up for adding this Codeship Pro env var to percy-js if you'd like.
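For anyone who hits this later, our working setup looks roughly like this (the spec path is a placeholder, and the total of 2 matches the number of containers we run):

```sh
# In each of the two Codeship Pro containers:
export PERCY_PARALLEL_NONCE="${CI_BUILD_ID}"  # Codeship Pro exposes CI_BUILD_ID rather than CI_BUILD_NUMBER
export PERCY_PARALLEL_TOTAL=2                 # we know exactly two parallel builds will run
percy exec -- cypress run --spec some/subset/of/our_tests
```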
Looks like everything was settled here, right? If not feel free to comment & we can reopen 👍
Hello!
We're seeing the following output on our CI build:
I wasn't sure where to begin investigating since it seems like the upload of snapshots happened successfully. Would you happen to know what this error message could be related to?