Closed: snpy closed this issue 3 years ago
Hey @snpy, thanks for opening an issue! This is a known thing for us internally. It's currently not possible to configure or change the port the server runs on. Even though agent (the underlying SDK) supports it, none of the SDKs consuming agent can change the port. This is due to how the options are passed around. You would have to pass it as a snapshot option each time you take a snapshot: `cy.percySnapshot('home', { port: 5555 })`, which is an API we don't want. We haven't fixed this yet since there isn't quite a good path forward yet (we need to take a hard look at how we pass our various options and make that consistent and configurable from the CLI and a config file).
I wanted to ask why you would need to change the port? Are you running many Percy tests for different projects at the same time? Or are you trying to run tests in parallel?
Hi @Robdel12, we're currently running Percy around Cypress each time a branch gets updated. Using `percy -- cypress` for each sometimes results in:
```
[percy] created build #N: https://percy.io/Oberst/gg-dev/builds/N
events.js:174
      throw er; // Unhandled 'error' event
      ^

Error: listen EADDRINUSE: address already in use :::5338
    at Server.setupListenHandle [as _listen2] (net.js:1280:14)
    at listenInCluster (net.js:1328:12)
    at Server.listen (net.js:1415:7)
    at Function.listen (./node_modules/express/lib/application.js:618:24)
    at AgentService.start (./node_modules/@percy/agent/dist/services/agent-service.js:36:36)
Emitted 'error' event at:
    at emitErrorNT (net.js:1307:8)
    at process._tickCallback (internal/process/next_tick.js:63:19)

script returned exit code 1
```
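For context, `EADDRINUSE` simply means a second server tried to listen on a port that's already taken. A minimal Python sketch (not Percy's code, just the underlying OS behavior) reproduces the same error class:

```python
# Two servers binding the same host:port -> the second one gets EADDRINUSE,
# which is exactly what the agent hits when another instance is running.
import errno
import socket

first = socket.socket()
first.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
port = first.getsockname()[1]
first.listen()

second = socket.socket()
try:
    second.bind(("127.0.0.1", port))  # same port, same address: fails
    raised = None
except OSError as exc:
    raised = exc.errno

print(raised == errno.EADDRINUSE)
```

This is why running two `percy exec` processes on one machine collides: both try to listen on the same fixed port.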
For now I've ported some code from `@percy/cypress` and am using `@percy/agent` directly, with a `CYPRESS_percyPort` environment variable to pass the port to Cypress and the `-p` option to pass the same port to Percy. Not a pretty solution, but it works.
I'm a little confused: does the server not get shut down after each test run? This error should only happen if there are multiple instances of the agent server running (either `percy start` or `percy exec -- [test command that should exit]`).

I see there's a message in your builds that mentions `hotfix-parallel-percy`; it seems like this error is happening because you're running multiple instances of `percy exec` on the same CI machine? You could stop wrapping the test command with `percy exec` and run the tests normally, but start Percy beforehand (`percy start`, then run the tests).
The problem with `percy start` is that I would have to have it running all the time, as I wouldn't know when to stop it. It would work, though, yes.
Ah, gotcha. You would want to stop it as soon as all of your parallel tests have exited. This is the recommended approach for tests that run in parallel in this manner. You need a single agent server up and ready to accept snapshots in this world, and it can make the config a little simpler: you no longer have to worry about `PARALLEL_TOTAL`s or setting a correct `NONCE`.
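A rough sketch of that flow as a CI script (the `percy start`/`percy stop` commands come from this thread; everything else, including the Cypress invocations, is an assumed placeholder for your actual test commands):

```shell
# Start one agent server for the whole machine (backgrounded).
npx percy start &

# Run the parallel suites however your CI does; they all snapshot
# against the single running agent.
npx cypress run --spec "cypress/integration/suite-a/**" &
npx cypress run --spec "cypress/integration/suite-b/**" &
wait

# Stop the agent only after every suite has exited.
npx percy stop
```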
For now our ugly solution works just fine. Maybe in the future we'll refine it to run Percy another way.

Thanks for the response; at least now I know we can't use the `@percy/cypress` integration out of the box. Fortunately the workaround was not that hard to come by.

Please feel free to close this issue.
> at least now I know we can't use @percy/cypress integration out of the box.

This isn't true; using `percy start` here is the correct way out of the box. :)
In any case, if you're happy with your fork feel free to keep using it! I would recommend watching this repo so you can merge the updates in (we tend to ship often). That way you can keep getting the bug fixes/new features going forward 👍
Good to know; by "out of the box" I meant the solution you've described in your docs. I have not found any documentation or use case for `percy start`, thus I assumed it was a workaround for our case.
Ran into the same issue when running two parallel Percy+Cypress processes from Jenkins. The error led me here, but I'm unsure now whether I should follow https://docs.percy.io/docs/parallel-test-suites and use `percy finalize`, or follow what's described here and run `percy start` before my test suites.

Can you please clarify in the documentation in which cases `percy start` should be used, and how? Is there any command to run after the completion of the test suites? It looks like there's a `stop` command too, but neither it nor the `start` command is documented, either in https://docs.percy.io/docs/command-line-client or via `npx percy help --all`.
I ran into this exact same issue while running two parallel Percy processes in Jenkins. I would also appreciate some clarification regarding the `percy start` command.
In case it might be useful to anybody: be warned, it's not the prettiest approach, and I'm using the Percy binary directly.

The initial problem was caused by incorrectly using `agent any` inside each stage, making the Percy instance inaccessible once it was created. After moving `agent any` to just under `pipeline`, the setup for Percy works for me like this:
```groovy
pipeline {
  agent any
  environment {
    PERCY_PORT  = sh(script: 'echo $(python -c \'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()\');', returnStdout: true).trim()
    PERCY_TOKEN = credentials('PERCY_TOKEN')
  }
  stages {
    stage('Start Percy') {
      steps {
        sh '''
          nohup node_modules/.bin/percy start -t 350 -p ${PERCY_PORT} &
          sleep 5
          node_modules/.bin/percy health-check -p ${PERCY_PORT}
        '''
      }
    }
    ...
    stage('Tests') {
      ...
    }
    ...
  }
  post {
    always {
      echo 'Stopping Percy server'
      sh '''
        curl -XPOST http://localhost:${PERCY_PORT}/percy/stop || true
      '''
    }
  }
}
```
`PERCY_PORT` is required so we can run more than one job at a time. `post.always` might be replaced with the binary's `stop` argument.
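The inline Python one-liner behind `PERCY_PORT` above is dense; expanded, it's just asking the OS for an ephemeral free port. A standalone sketch of the same trick:

```python
import socket

def free_port() -> int:
    """Ask the OS for a port that is free right now, then release it."""
    s = socket.socket()
    s.bind(("", 0))            # port 0 means "pick any free port"
    port = s.getsockname()[1]  # the port the OS actually assigned
    s.close()
    return port

print(free_port())
```

Note there's a small race window: the port is released before Percy binds it, so in theory another process could grab it in between. In practice this is rare enough for CI use.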
The actual screenshots are taken during the Cypress tests; however, because the Cypress tests don't allow a custom port for Percy, we've also modified the Percy helper and created our own based on the official package.
Hey everyone! I've updated our docs to officially recommend using `percy start` over setting `PERCY_PARALLEL_*` env vars when you're running parallel tests on the same machine: https://docs.percy.io/docs/parallel-test-suites#parallel-tests-on-the-same-machine

In this case, using `percy start` is the officially recommended way.
Hey @Robdel12, I encountered a similar issue. Let me know if I should create a separate issue or continue talking about it here.
I was locally running Percy where it seemed to get stuck:

```
[percy] stopping percy...
[percy] waiting for 1 snapshots to complete...
```

So I killed the process and tried to run the test again (via `cypress`) when I got:
```
Error: listen EADDRINUSE: address already in use :::5338
    at Server.setupListenHandle [as _listen2] (node:net:1290:16)
    at listenInCluster (node:net:1338:12)
    at Server.listen (node:net:1424:7)
    at Function.listen (/Users/mo/ts/ts-frontend/node_modules/express/lib/application.js:618:24)
    at AgentService.start (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/dist/services/agent-service.js:46:36)
    at async Exec.start (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/dist/commands/percy-command.js:36:13)
    at async Exec.run (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/dist/commands/exec.js:25:13)
    at async Exec._run (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/node_modules/@oclif/command/lib/command.js:44:20)
    at async Config.runCommand (/Users/mo/ts/ts-frontend/node_modules/@oclif/config/lib/config.js:173:24)
    at async Main.run (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/node_modules/@oclif/command/lib/main.js:21:9)
    at async Main._run (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/node_modules/@oclif/command/lib/command.js:44:20)
Emitted 'error' event on Server instance at:
    at emitErrorNT (node:net:1317:8)
    at processTicksAndRejections (node:internal/process/task_queues:80:21) {
  code: 'EADDRINUSE',
  errno: -48,
  syscall: 'listen',
  address: '::',
  port: 5338
}
```
Running `yarn percy stop` got stuck as well:

```
➜ yarn percy stop
yarn run v1.22.10
$ /Users/mo/ts/ts-frontend/node_modules/.bin/percy stop
[percy] stopping percy...
[percy] waiting for 1 snapshots to complete...
```
So in the end all I had to do was kill the node process manually. Any clue what caused this? Or, more importantly, how can I make sure this doesn't happen on CI? Thank you!
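One way to check whether a stuck agent is still holding the default port (5338, per the logs above) before re-running. A small Python sketch (an assumption about your setup, not part of Percy itself):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is currently listening on host:port."""
    with socket.socket() as s:
        # connect_ex returns 0 on successful connection, an errno otherwise
        return s.connect_ex((host, port)) == 0

# A stuck agent process would make this True for 5338.
print(port_in_use(5338))
```

If it reports the port as in use, the leftover node process is still alive and needs to be killed before Percy can start again.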
With the last agent-based SDK migrated to the CLI, you can check the migration doc to see how to upgrade: https://docs.percy.io/docs/migrating-to-percy-cli#completed

Changing the port is possible now, though it's likely not the right thing to reach for in most instances. The parallel docs have been updated to reflect this new world, too: https://docs.percy.io/docs/parallel-test-suites
### Summary

Currently, when running Percy with Cypress, we're stuck with port 5338.

### Steps to reproduce

Prerequisite: a `cypress/integrations/snapshot.spec.js` file.

To reproduce this issue one needs to run `percy exec` with a custom port, i.e.:

### Detailed description
Using this command we won't be able to take any snapshot, because we're stuck with port 5338 in the `node_modules/@percy/cypress/task.js` file (it's literally set to 5338).

Furthermore, when we update this to 55555, we'll end up with an exception:

This, on the other hand, is caused by passing the default port
### Possible quick fix
Read the Cypress environment variable inside Percy's tasks. For this, the CLI invocation might look like:

and the mentioned tasks would read that port with `Cypress.env('percyPort')`.
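The quick-fix idea could be sketched roughly like this on the task side (hypothetical code: the `CYPRESS_percyPort` variable and the 5338 default come from this thread, while the helper name and parsing logic are assumptions):

```javascript
// Hypothetical helper for a task file: prefer a port passed via the
// CYPRESS_percyPort environment variable, fall back to the hard-coded
// default mentioned above.
const DEFAULT_PERCY_PORT = 5338;

function resolvePercyPort(env = process.env) {
  const parsed = Number.parseInt(env.CYPRESS_percyPort, 10);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : DEFAULT_PERCY_PORT;
}

console.log(resolvePercyPort({}));                            // prints 5338
console.log(resolvePercyPort({ CYPRESS_percyPort: "5555" })); // prints 5555
```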
### Alternative solution
If that's not an option, you should at least raise an exception saying a custom port can't be used with Cypress; I was sure I was doing something wrong when integrating Percy with Cypress on CI for quite some time.
### Locked versions

The versions of the Percy-related packages are:

### Thanks

Thanks for the good work. The above is just a bump, but at the moment it prevents us from running multiple Cypress jobs on CI wrapped with Percy.