percy / percy-agent

[Deprecated in favor of `@percy/cli`] An agent process for integrating with Percy.
https://percy.io
MIT License
22 stars · 23 forks

Cannot configure server port #482

Closed snpy closed 3 years ago

snpy commented 4 years ago

Summary

Currently, when running Percy with Cypress, we're stuck with port 5338.

Steps to reproduce

Prerequisite:

To reproduce this issue, one needs to run percy exec with a custom port, e.g.:

PERCY_TOKEN=**** ./node_modules/.bin/percy exec -p 55555 -- ./node_modules/cypress/bin/cypress run --spec cypress/integration/snapshot.spec.js

Detailed description

Using this command, we won't be able to take any snapshot because we're stuck with port 5338: it's literally hard-coded in the node_modules/@percy/cypress/task.js file.
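For illustration, a minimal sketch of the problem (the names below are illustrative, not the actual contents of task.js): the agent port is a fixed constant, so the CLI's -p flag has no effect on where snapshots are posted.

```javascript
// Hypothetical sketch: the agent port is a fixed constant rather than
// something the CLI's `-p` flag can influence.
const PERCY_AGENT_PORT = 5338;

function snapshotUrl(port = PERCY_AGENT_PORT) {
  return `http://localhost:${port}/percy/snapshot`;
}

console.log(snapshotUrl());      // always http://localhost:5338/percy/snapshot
console.log(snapshotUrl(55555)); // what a configurable port would produce
```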

Furthermore, even if we update that value to 55555, we'll end up with an exception:

CypressError: cy.request() failed trying to load:

    http://localhost:5338/percy/snapshot

        We attempted to make an http request to this URL but the request failed without a response.

    We received this error at the network level:

    > Error: connect ECONNREFUSED 127.0.0.1:5338

-----------------------------------------------------------

    The request we sent was:

    Method: POST
URL: http://localhost:5338/percy/snapshot

    -----------------------------------------------------------

Common situations why this would fail:
- you don't have internet access
- you forgot to run / boot your web server
- your web server isn't accessible
- you have weird network configuration settings on your computer

The stack trace for this error is:

    RequestError: Error: connect ECONNREFUSED 127.0.0.1:5338
at new RequestError (/home/_/.cache/Cypress/3.8.3/Cypress/resources/app/packages/server/node_modules/request-promise-core/lib/errors.js:14:15)
at Request.plumbing.callback (/home/_/.cache/Cypress/3.8.3/Cypress/resources/app/packages/server/node_modules/request-promise-core/lib/plumbing.js:87:29)
at Request.RP$callback [as _callback] (/home/_/.cache/Cypress/3.8.3/Cypress/resources/app/packages/server/node_modules/request-promise-core/lib/plumbing.js:46:31)
at self.callback (/home/_/.cache/Cypress/3.8.3/Cypress/resources/app/packages/server/node_modules/request/request.js:185:22)
at Request.emit (events.js:203:13)
at Request.onRequestError (/home/_/.cache/Cypress/3.8.3/Cypress/resources/app/packages/server/node_modules/request/request.js:877:8)
at ClientRequest.emit (events.js:208:15)
at Socket.socketErrorListener (_http_client.js:399:9)
at Socket.emit (events.js:203:13)
at emitErrorNT (internal/streams/destroy.js:91:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
at processTicksAndRejections (internal/process/task_queues.js:77:11)

at Object.cypressErr (https://localhost/__cypress/runner/cypress_runner.js:86207:11)
at Object.throwErr (https://localhost/__cypress/runner/cypress_runner.js:86162:18)
at Object.throwErrByPath (https://localhost/__cypress/runner/cypress_runner.js:86194:17)
at https://localhost/__cypress/runner/cypress_runner.js:72524:23
    at tryCatcher (https://localhost/__cypress/runner/cypress_runner.js:120203:23)
at https://localhost/__cypress/runner/cypress_runner.js:115361:37
    at tryCatcher (https://localhost/__cypress/runner/cypress_runner.js:120203:23)
at Promise._settlePromiseFromHandler (https://localhost/__cypress/runner/cypress_runner.js:118139:31)
at Promise._settlePromise (https://localhost/__cypress/runner/cypress_runner.js:118196:18)
at Promise._settlePromise0 (https://localhost/__cypress/runner/cypress_runner.js:118241:10)
at Promise._settlePromises (https://localhost/__cypress/runner/cypress_runner.js:118316:18)
at Async../node_modules/bluebird/js/release/async.js.Async._drainQueue (https://localhost/__cypress/runner/cypress_runner.js:114928:16)
at Async../node_modules/bluebird/js/release/async.js.Async._drainQueues (https://localhost/__cypress/runner/cypress_runner.js:114938:10)
at Async.drainQueues (https://localhost/__cypress/runner/cypress_runner.js:114812:14)

This, in turn, is caused by the default port still being passed internally.

Possible quick fix

Read a Cypress environment variable inside Percy's tasks. The CLI invocation might then look like:

PERCY_TOKEN=**** CYPRESS_percyPort=55555 ./node_modules/.bin/percy exec -p 55555 -- ./node_modules/cypress/bin/cypress run --spec cypress/integration/snapshot.spec.js

and the aforementioned tasks would read that port with Cypress.env('percyPort').
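A sketch of what this fix could look like. The names are hypothetical; inside a Cypress spec Cypress.env('percyPort') would pick up the CYPRESS_percyPort process variable, which is emulated here with a plain environment lookup so the logic can run standalone:

```javascript
const DEFAULT_PERCY_PORT = 5338;

// In a Cypress spec this lookup would be Cypress.env('percyPort'); outside a
// spec the same value is visible as the CYPRESS_percyPort process variable.
function resolvePercyPort(env = process.env) {
  const override = env.CYPRESS_percyPort;
  return override ? Number(override) : DEFAULT_PERCY_PORT;
}

console.log(resolvePercyPort({}));                             // falls back to 5338
console.log(resolvePercyPort({ CYPRESS_percyPort: '55555' })); // honors the override
```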

Alternative solution

If that's not an option, you should at least raise an explicit error saying a custom port can't be used with Cypress; for quite some time I was sure I was doing something wrong when integrating Percy with Cypress on CI.

Locked versions

Versions of the Percy-related packages:

@percy/agent: 0.21.0
@percy/cypress: 2.3.0
percy-client: 3.6.1

Thanks

Thanks for the good work. The above is just a bump, but at the moment it prevents us from running multiple Percy-wrapped Cypress jobs on CI.

Robdel12 commented 4 years ago

Hey @snpy, thanks for opening an issue! This is a known issue for us internally. It's currently not possible to configure or change the port the server runs on. Even though agent (the underlying SDK) supports it, none of the SDKs consuming agent can change the port. This is due to how the options are passed around. You would have to pass it as a snapshot option each time you take a snapshot, cy.percySnapshot('home', { port: 5555 }), for it to work, which is an API we don't want. We haven't fixed this yet since there isn't a good path forward at the moment (we need to take a hard look at how we pass our various options and make that consistent and configurable from the CLI and a config file).

I wanted to ask why you would need to change the port? Are you running many Percy tests for different projects at the same time? Or are you trying to run tests in parallel?

snpy commented 4 years ago

Hi @Robdel12, we're currently running Percy around Cypress each time a branch gets updated. Using percy -- cypress for each run sometimes results in:

[percy] created build #N: https://percy.io/Oberst/gg-dev/builds/N
events.js:174
      throw er; // Unhandled 'error' event
      ^

Error: listen EADDRINUSE: address already in use :::5338
    at Server.setupListenHandle [as _listen2] (net.js:1280:14)
    at listenInCluster (net.js:1328:12)
    at Server.listen (net.js:1415:7)
    at Function.listen (./node_modules/express/lib/application.js:618:24)
    at AgentService.start (./node_modules/@percy/agent/dist/services/agent-service.js:36:36)
Emitted 'error' event at:
    at emitErrorNT (net.js:1307:8)
    at process._tickCallback (internal/process/next_tick.js:63:19)
script returned exit code 1

For now I've ported some code from @percy/cypress and am now using @percy/agent directly, with a CYPRESS_percyPort environment variable to pass the port to Cypress and the -p option to pass the same port to Percy.
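A rough sketch of the shape such a ported helper might take (hypothetical names; the real @percy/cypress helper serializes the DOM and sends it with cy.request()). The point is simply that the agent endpoint is built from a configurable port instead of a constant:

```javascript
// Hypothetical shape of a snapshot request against an agent on a custom port.
// In a real Cypress spec this object would be passed to cy.request().
function buildSnapshotRequest(name, domSnapshot, port) {
  return {
    method: 'POST',
    url: `http://localhost:${port}/percy/snapshot`,
    body: { name, domSnapshot },
    failOnStatusCode: false, // don't fail the spec if the agent is unreachable
  };
}

const req = buildSnapshotRequest('home', '<html></html>', 55555);
console.log(req.method, req.url);
```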

Not a pretty solution, but it works.

Robdel12 commented 4 years ago

I'm a little confused: does the server not get shut down after each test run? This error should only happen if there are multiple instances of the agent server running (either via percy start or percy exec -- [test command that should exit]).

Robdel12 commented 4 years ago

I see there's a message in your builds that mentions hotfix-parallel-percy -- it seems like this error is happening because you're running multiple instances of percy exec on the same CI machine? You could stop wrapping the test command with percy exec and instead start Percy beforehand (percy start, then run the tests normally).

snpy commented 4 years ago

The problem with percy start is that I would have to keep it running all the time, as I wouldn't know when to stop it.

However it would work, yes.

Robdel12 commented 4 years ago

Ah, gotcha. You would want to stop it as soon as all of your parallel tests have exited. This is the recommended approach for tests that run in parallel in this manner. You need a single agent server up and ready to accept snapshots in this world, and it can make the config a little simpler -- you no longer have to worry about PERCY_PARALLEL_TOTAL or setting a correct nonce.

snpy commented 4 years ago

For now our ugly solution works just fine. Maybe in the future we'll refine it so Percy runs another way.

Thanks for the response; at least now I know we can't use the @percy/cypress integration out of the box. Fortunately, a workaround was not that hard to come by.

Please feel free to close this issue.

Robdel12 commented 4 years ago

at least now I know we can't use @percy/cypress integration out of the box.

This isn't true, using percy start here is the correct way out of the box. :)

In any case, if you're happy with your fork feel free to keep using it! I would recommend watching this repo so you can merge the updates in (we tend to ship often). That way you can keep getting the bug fixes/new features going forward 👍

snpy commented 4 years ago

Good to know; by "out of the box" I meant the solution described in your docs. I had not found any documentation or use case for percy start, so I assumed it was a workaround for our case.

emexelem commented 4 years ago

Ran into the same issue when running two parallel percy+cypress processes from Jenkins. The error led me here, but I'm now unsure whether I should follow https://docs.percy.io/docs/parallel-test-suites and use percy finalize, or follow what's described here and run percy start before my test suites. Can you please clarify in the documentation in which cases percy start should be used, and how? Is there any command to run after the test suites complete? It looks like there's a stop command too, but neither it nor the start command is documented in https://docs.percy.io/docs/command-line-client or shown by npx percy help --all.

felipeauol commented 4 years ago

I ran into this exact same issue while running two parallel Percy processes in Jenkins. I would also appreciate some clarification regarding the percy start command.

snpy commented 4 years ago

In case it's useful to anybody: please be warned it's not the prettiest approach, and I'm using the Percy binary directly.

The initial problem was caused by incorrectly using agent any inside each stage, making the Percy instance inaccessible once it was created.

After moving agent any to just under pipeline, the setup for Percy works for me like this:

pipeline {
    agent any
    environment {
        PERCY_PORT = sh(script: 'echo $(python -c \'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()\');', returnStdout: true).trim()
        PERCY_TOKEN = credentials('PERCY_TOKEN')
    }
    stages {
        stage('Start Percy') {
            steps {
                sh '''
                    nohup node_modules/.bin/percy start -t 350 -p ${PERCY_PORT} &
                    sleep 5
                    node_modules/.bin/percy health-check -p ${PERCY_PORT}
                '''
            }
        }
        ...
        stage('Tests') {
            ...
        }
        ...
    }
    post {
        always {
            echo 'Stopping Percy server'
            sh '''
                curl -XPOST http://localhost:${PERCY_PORT}/percy/stop || true
            '''
        }
    }
}

PERCY_PORT is required so we can run more than one job at a time. The post.always step might be replaced with the binary's stop command.

The actual snapshots are taken during the Cypress tests; however, because the Cypress integration does not allow a custom port for Percy, we've also modified the Percy helper and created our own based on the official package.

Robdel12 commented 4 years ago

Hey everyone! I've updated our docs to officially recommend using percy start over setting PERCY_PARALLEL_* env vars when you're running parallel tests on the same machine: https://docs.percy.io/docs/parallel-test-suites#parallel-tests-on-the-same-machine

In this case using percy start is the officially recommended way.

mohsenny commented 3 years ago

Hey @Robdel12, I encountered a similar issue. Let me know if I should create a separate issue or continue talking about it here.

I was running percy locally when it seemed to get stuck:

[percy] stopping percy...
[percy] waiting for 1 snapshots to complete...

So I killed the process and tried to run the test again (via Cypress), at which point I got:

Error: listen EADDRINUSE: address already in use :::5338
    at Server.setupListenHandle [as _listen2] (node:net:1290:16)
    at listenInCluster (node:net:1338:12)
    at Server.listen (node:net:1424:7)
    at Function.listen (/Users/mo/ts/ts-frontend/node_modules/express/lib/application.js:618:24)
    at AgentService.start (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/dist/services/agent-service.js:46:36)
    at async Exec.start (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/dist/commands/percy-command.js:36:13)
    at async Exec.run (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/dist/commands/exec.js:25:13)
    at async Exec._run (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/node_modules/@oclif/command/lib/command.js:44:20)
    at async Config.runCommand (/Users/mo/ts/ts-frontend/node_modules/@oclif/config/lib/config.js:173:24)
    at async Main.run (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/node_modules/@oclif/command/lib/main.js:21:9)
    at async Main._run (/Users/mo/ts/ts-frontend/node_modules/@percy/agent/node_modules/@oclif/command/lib/command.js:44:20)
Emitted 'error' event on Server instance at:
    at emitErrorNT (node:net:1317:8)
    at processTicksAndRejections (node:internal/process/task_queues:80:21) {
  code: 'EADDRINUSE',
  errno: -48,
  syscall: 'listen',
  address: '::',
  port: 5338
}

Running yarn percy stop got stuck as well:

➜  yarn percy stop
yarn run v1.22.10
$ /Users/mo/ts/ts-frontend/node_modules/.bin/percy stop
[percy] stopping percy...
[percy] waiting for 1 snapshots to complete...

So in the end all I could do was kill the node process manually.

Any clue what caused this, or, more importantly, how to make sure this doesn't happen on CI? Thank you!

Robdel12 commented 3 years ago

With the last agent-based SDK migrated to the CLI, you can check the migration doc to see how to upgrade: https://docs.percy.io/docs/migrating-to-percy-cli#completed

Changing the port is possible now, though likely not the right thing to reach for (in most instances). The parallel docs have been updated to reflect this new world, too: https://docs.percy.io/docs/parallel-test-suites