cypress-io / cypress

Fast, easy and reliable testing for anything that runs in a browser.
https://cypress.io

Cypress sometimes stalls/hangs with no output when running in Jenkins with Docker #8206

Closed kaiyoma closed 1 year ago

kaiyoma commented 4 years ago

Current behavior:

I feel like this kind of issue has been referenced many times before, but even after trying all the solutions I've found, we still continue to run into this problem. We've been seeing this issue for many months now, with early versions of Cypress 4.x and even with the most recent version today.

Long-running (multi-hour) test suites often "stall": the output stops appearing, but nothing actually fails or crashes. We actually have to rely on the timeout mechanism in Jenkins to kill the task because Cypress is just stuck.

I have enabled debug logs and honestly don't see anything helpful in them. I'll upload the relevant portion next, but there are no mentions of failures, crashes, or even running out of memory. We're already following all the guidance in the Cypress docs about disabling /dev/shm usage and using the correct IPC setting.
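
For reference, the docs guidance boils down to something like this (a sketch, assuming a Cypress 4.x-style plugins file for the Chrome flag; the IPC setting is passed to the container as docker run --ipc=host):

    // cypress/plugins/index.js -- sketch of the documented Chrome-in-Docker workaround:
    // pass --disable-dev-shm-usage so Chrome does not rely on the small /dev/shm partition.
    module.exports = (on, config) => {
      on('before:browser:launch', (browser = {}, launchOptions) => {
        if (browser.family === 'chromium' && browser.name !== 'electron') {
          launchOptions.args.push('--disable-dev-shm-usage')
        }
        return launchOptions
      })
      return config
    }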

Desired behavior:

Tests should either run to completion or Cypress should fail with a clear error.

Versions

Cypress: 4.12.1
OS: Linux
Browser: Chrome 83 (headless)

kutenai commented 2 years ago

I'm seeing this also; it's really causing us huge issues. Not sure what the cause is, but it needs to just FAIL if it won't work.

bporcelli commented 2 years ago

We were experiencing this on GitHub Actions. Switching from Electron to Chrome resolved the hangs for us. Thanks @stuart-clark!

liverant commented 2 years ago

Any news? It's almost 2 years now... This issue is on the same subject but a year after this one.

mattvb91 commented 2 years ago

> We were experiencing this on GitHub Actions. Switching from Electron to Chrome resolved the hangs for us. Thanks @stuart-clark!

Thanks @bporcelli, this fixed the issue for me.

flotwig commented 2 years ago

Users typically see this when the browser launched by Cypress uses too much memory and is automatically killed by the OS. I recommend that folks impacted by this follow #6170 and the related issue #349, which will greatly improve visibility into this issue.

mattvb91 commented 2 years ago

> We were experiencing this on GitHub Actions. Switching from Electron to Chrome resolved the hangs for us. Thanks @stuart-clark!
>
> Thanks @bporcelli this fixed the issue for me.

Just tried it with 10.0.0 and it can't even get as far as loading the tests before crashing, so I will be reverting again.

> cypress run --browser chrome

The Test Runner unexpectedly exited via a exit event with signal SIGILL

Please search Cypress documentation for possible solutions:

https://on.cypress.io/

Check if there is a GitHub issue describing this crash:

https://github.com/cypress-io/cypress/issues

Consider opening a new issue.

----------

Platform: linux-x64 (Debian - 10.11)
Cypress Version: 10.0.0

Komojo7 commented 2 years ago

Was experiencing this issue and switching to Firefox was the only thing that worked.

cosmith commented 2 years ago

I found this option that I thought might help, but it made no difference:

    /**
     * The number of tests for which snapshots and command data are kept in memory. Reduce this number if you are experiencing high memory consumption in your browser during a test run.
     * @default 50
     */
    numTestsKeptInMemory: number
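
(For anyone who wants to try it anyway: it is an ordinary config option. A minimal sketch of where it goes, assuming Cypress 10+; older versions take the same key in cypress.json.)

    // cypress.config.js
    const { defineConfig } = require('cypress')

    module.exports = defineConfig({
      numTestsKeptInMemory: 5, // default is 50
      e2e: {
        // ...existing e2e settings
      },
    })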

> @stuart-clark Are you able to share your script for executing tests one at a time? We are currently doing this: https://gist.github.com/martincarstens/70dd91176e420c6c9ca3de96cafac71d - feel free to use it or adapt for your own use.

In the end this is the only thing that worked for us on CircleCI. We adapted the script to pass spec filenames split by CircleCI to parallelize runs. It's 1.5x slower but at least it finishes successfully...

martincarstens commented 2 years ago

@cosmith thank you for the feedback, glad it's helping someone.

stuart-clark commented 2 years ago

This may be relevant to some folks here:

https://github.com/cypress-io/cypress/issues/17627#issuecomment-1199285216
https://github.com/cypress-io/cypress/pull/23001

mattvb91 commented 1 year ago

Still the same on 10.10.0

The Test Runner unexpectedly exited via a exit event with signal SIGILL

Please search Cypress documentation for possible solutions:

https://on.cypress.io/

Check if there is a GitHub issue describing this crash:

https://github.com/cypress-io/cypress/issues

Consider opening a new issue.

----------

Platform: linux-x64 (Debian - 11.4)
Cypress Version: 10.10.0

cosmith commented 1 year ago

Update after a while: it turns out that for us there was no issue apart from tests running too long and being killed by CircleCI. So check that your retries param is not set too high, and fix your flaky tests!

mattvb91 commented 1 year ago

> Update after a while: it turns out that for us there was no issue apart from tests running too long and being killed by CircleCI. So check that your retries param is not set too high, and fix your flaky tests!

In my case I can't even get the tests to start; it crashes before it even gets to the first test. The only stable Cypress version so far is 9.7.0.

Neoxrus86 commented 1 year ago

Any news? I can reproduce this error in versions 10.8.0, 12.2.0, and 12.3.0. I don't know how to deal with it anymore. I'm running tests in Docker containers via gitlab-runner, and the containers freeze up intermittently.

GastonMujica commented 1 year ago

Hey! To anyone still having this issue, this might help you out! We've tried almost everything, from reducing the number of tests kept in memory, to --ipc=host, to disable-dev-shm-usage. Nothing worked. In the end we were able to fix this issue by clearing both the cache and the cookies.

export const clearCache = () => {
  cy.wrap(
    Cypress.automation('remote:debugger:protocol', {
      command: 'Network.clearBrowserCache',
    }),
  );
};

export const clearCookies = () => {
  cy.clearCookies();
};

Hope it helps!

bahunov commented 1 year ago

Is this done in beforeEach, I assume?


GastonMujica commented 1 year ago

> Is this done in beforeEach, I assume?

Yep, before anything else!
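
For anyone wondering where to call these: a minimal sketch, assuming the clearCache/clearCookies helpers above are exported from something like cypress/support/commands.js (the exact file is up to you). Note that Network.clearBrowserCache goes through the Chrome DevTools Protocol, so it applies to Chromium-family browsers.

    // cypress/support/e2e.js (or cypress/support/index.js in Cypress <10)
    import { clearCache, clearCookies } from './commands'

    beforeEach(() => {
      // run before anything else in every test
      clearCache()
      clearCookies()
    })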

Neoxrus86 commented 1 year ago

Hi all! I had a freeze inside a Docker container, and this solution helped me: I disable loading of all images and other media with cy.intercept.

    if (Cypress.env('disableMedia')) {
        cy.intercept(/\.(jpg|jpeg|png|mp4|gif|webp)+$/gi, []).as(
            'trash_requests_media'
        );
    }
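
In case it's unclear where disableMedia comes from: it's just a Cypress env value, so it can be switched on per run, for example (a sketch):

    // From the CLI:  npx cypress run --env disableMedia=true
    // or permanently in cypress.config.js:
    module.exports = {
      e2e: {
        env: { disableMedia: true },
      },
    }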

tausif29 commented 1 year ago

> Hey! To anyone still having this issue, this might help you out! We've tried almost everything, from reducing the number of tests kept in memory, to --ipc=host, to disable-dev-shm-usage. Nothing worked. In the end we were able to fix this issue by clearing both the cache and the cookies. (clearCache / clearCookies helpers quoted above.)
>
> Hope it helps!

Hello @GastonMujica, I believe this code should be placed in commands.js. Could you please share where and how to call these methods?

vallme2003 commented 1 year ago

Confirming that this still happens in Jenkins. I've tried most of the above solutions, but none of them work.

FranciscoKnebel commented 1 year ago

This should be a priority issue: just searching for Cypress and hanging/frozen returns years of responses with no actual solution. I have tried every suggestion on this thread, and none of them worked for my case. The job runs with GitHub Actions on a self-hosted runner with 8 GB of RAM, which should be more than enough.

A temporary mitigation was to split the run into multiple separate jobs using a job matrix, testing each spec in a different job. The frozen processes still happen, but at least they don't hang the entire test suite.

amakhrov commented 1 year ago

This is affecting us badly lately. It used to be rare, but it seems to have gotten worse over the last two months or so.

The debug output (via DEBUG='cypress:*') doesn't show anything suspicious. It's just that at some point the tests stop executing at all. The browser seems to keep running for some time after that (there is a periodic network request issued by the last opened page). After some time it stops too, and the only output produced is the CPU/memory profiler.

Looks like this:

2023-06-29T00:04:26.131Z cypress:server:remote-states getting remote state: { auth: null, origin: '<EDIT>', strategy: 'http', fileServer: null, domainName: '<EDIT>', props: { port: '443', protocol: 'https:', subdomain: '<EDIT>', domain: '<EDIT>', tld: 'io' } } for: <EDIT>
2023-06-29T00:04:27.758Z cypress:server:video capture codec data: { format: 'image2pipe', audio: '', video: 'mjpeg (Baseline)', duration: 'N/A', video_details: [ 'mjpeg (Baseline)', 'yuvj420p(pc', 'bt470bg/unknown/unknown)', '1280x720 [SAR 1:1 DAR 16:9]', '25 fps', '25 tbr', '25 tbn', '25 tbc' ] }
2023-06-29T00:04:28.821Z cypress:server:util:process_profiler current & mean memory and CPU usage by process group:
┌─────────┬───────────────────┬──────────────┬──────────────────────────┬────────────┬────────────────┬──────────┬──────────────┬─────────────┐
│ (index) │       group       │ processCount │           pids           │ cpuPercent │ meanCpuPercent │ memRssMb │ meanMemRssMb │ maxMemRssMb │
├─────────┼───────────────────┼──────────────┼──────────────────────────┼────────────┼────────────────┼──────────┼──────────────┼─────────────┤
│    0    │     'cypress'     │      1       │          '5842'          │    3.55    │      4.33      │  444.29  │    430.59    │   467.24    │
│    1    │    'Electron'     │      1       │          '7305'          │     2      │      3.22      │  399.79  │    413.27    │    522.8    │
│    2    │ 'electron-shared' │      4       │ '5878, 6061, 5879, 6137' │    5.24    │      2.84      │  266.69  │    248.81    │   276.31    │
│    3    │     'plugin'      │      1       │          '6127'          │     0      │      0.03      │  247.06  │    228.53    │   385.09    │
│    4    │     'ffmpeg'      │      1       │          '7319'          │    3.26    │      2.01      │  139.73  │    117.53    │   140.15    │
│    5    │      'other'      │      2       │       '7495, 7496'       │     0      │       0        │   3.86   │     3.74     │    3.86     │
│    6    │      'TOTAL'      │      10      │           '-'            │   14.05    │     12.24      │ 1501.43  │   1399.71    │   1623.33   │
└─────────┴───────────────────┴──────────────┴──────────────────────────┴────────────┴────────────────┴──────────┴──────────────┴─────────────┘
2023-06-29T00:04:29.796Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 56684 }
2023-06-29T00:04:29.807Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 42114 }
2023-06-29T00:04:29.807Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46274 }
2023-06-29T00:04:29.825Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46292 }
2023-06-29T00:04:29.946Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46320 }
2023-06-29T00:04:29.946Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46306 }
2023-06-29T00:04:29.949Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46330 }
2023-06-29T00:04:30.062Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46352 }
2023-06-29T00:04:30.194Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46254 }
2023-06-29T00:04:30.340Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46278 }
2023-06-29T00:04:30.462Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46258 }
2023-06-29T00:04:30.508Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46342 }
2023-06-29T00:04:31.138Z cypress:server:util:socket_allowed allowed socket closed, removing { localPort: 46356 }
2023-06-29T00:04:39.079Z cypress:server:util:process_profiler current & mean memory and CPU usage by process group:
┌─────────┬───────────────────┬──────────────┬──────────────────────────┬────────────┬────────────────┬──────────┬──────────────┬─────────────┐
│ (index) │       group       │ processCount │           pids           │ cpuPercent │ meanCpuPercent │ memRssMb │ meanMemRssMb │ maxMemRssMb │
├─────────┼───────────────────┼──────────────┼──────────────────────────┼────────────┼────────────────┼──────────┼──────────────┼─────────────┤
│    0    │     'cypress'     │      1       │          '5842'          │    2.18    │      4.18      │  415.82  │    429.54    │   467.24    │
│    1    │    'Electron'     │      1       │          '7305'          │    1.25    │      3.06      │  380.07  │    410.5     │    522.8    │
│    2    │ 'electron-shared' │      4       │ '5878, 6061, 5879, 6137' │    5.71    │      3.05      │  264.29  │    249.92    │   276.31    │
│    3    │     'plugin'      │      1       │          '6127'          │    0.09    │      0.03      │  171.81  │    224.16    │   385.09    │
│    4    │     'ffmpeg'      │      1       │          '7319'          │    1.32    │      1.96      │  139.98  │    119.26    │   140.15    │
│    5    │      'other'      │      2       │       '7579, 7580'       │     0      │       0        │   3.71   │     3.73     │    3.86     │
│    6    │      'TOTAL'      │      10      │           '-'            │   10.54    │     12.12      │ 1375.68  │   1397.99    │   1623.33   │
└─────────┴───────────────────┴──────────────┴──────────────────────────┴────────────┴────────────────┴──────────┴──────────────┴─────────────┘
[6061:0629/000445.710042:ERROR:gl_utils.cc(319)] [.WebGL-0x3a4800345500]GL Driver Message (OpenGL, Performance, GL_CLOSE_PATH_NV, High): GPU stall due to ReadPixels
2023-06-29T00:04:47.223Z cypress:server:server-base Got CONNECT request from collector-pxz6penin0.px-cloud.net:443
2023-06-29T00:04:47.223Z cypress:https-proxy Writing browserSocket connection headers { url: 'collector-pxz6penin0.px-cloud.net:443', headLength: 0, headers: { host: 'collector-pxz6penin0.px-cloud.net:443', 'proxy-connection': 'keep-alive', 'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) 

The last messages above are from that periodic network request (from a PerimeterX sensor). Nothing test-specific is printed after that, just PerimeterX requests interleaved with CPU profiling.

Cypress 12.11 with Electron

martincarstens commented 1 year ago

🔔 Reposting this for visibility as it might have gotten buried under the other comments. We use this script to run our Cypress tests: https://gist.github.com/martincarstens/70dd91176e420c6c9ca3de96cafac71d

It's a simple script that auto detects hangs and prevents runaway jobs. Since implementing it over a year ago, Cypress has been stable for us. We have 157 spec files running in various configurations in GitHub Actions with parallel jobs. We are using the latest version of Cypress.

To be fair, we shouldn't need to be using this script, but it is the only thing that returned sanity to our workflows.

@stuart-clark might be able to provide some updates running Cypress with the script in Circle CI.

@cosmith any updates from your side?

Rooting for the Cypress team to put this one to bed.

cosmith commented 1 year ago

Hello, on our side we migrated to Playwright.

I do remember that in the end the "no output" that we were seeing in CircleCI was actually a "very long output" when some tests were failing and retrying several times in a row. So it was not really a bug in Cypress (even though it would have been a lot easier to debug if there was some log stating that the test was restarting).

martincarstens commented 1 year ago

> Hello, on our side we migrated to Playwright.
>
> I do remember that in the end the "no output" that we were seeing in CircleCI was actually a "very long output" when some tests were failing and retrying several times in a row. So it was not really a bug in Cypress (even though it would have been a lot easier to debug if there was some log stating that the test was restarting).

We've also started migrating to Playwright, but it's taking us a bit because of all the effort invested into writing Cypress specs. Thanks for the update.

FranciscoKnebel commented 1 year ago

This bug is a showstopper for my use case, so I'm looking into a soft migration to Playwright now as well, at least for the non-working tests.

We also have a big problem caused by how Cypress handles cross-origin and iframe tests, which unfortunately is not stable enough right now, even though some test cases depend heavily on it.

This is a real bummer, since I spent the last couple of months migrating our tests from another platform to Cypress, so finding out that some crucial steps are impossible is really demotivating.

For the others watching the thread, some links I read on this migration:

Migration from Cypress to Playwright - hype or great?

On Migrating from Cypress to Playwright

Migrating from Cypress to Playwright

Rewriting tests from Cypress to Playwright using GPT3 by Gajus Kuizinas

Convert Cypress to Playwright (tool built for this migration)

Hopefully the Cypress team fixes these crucial things eventually, but reading posts over a year old (in this case, almost three years old) reporting the same problem with something as basic as just running tests in a test framework, and seeing that functionality still not working, is a deal-breaker.

SIGSTACKFAULT commented 1 year ago

Also affecting me in GitLab CI, using the cypress/browsers Docker image.

Cypress package version: 12.17.0
Cypress binary version: 12.17.0
Electron version: 21.0.0
Bundled Node version: 16.16.0

alexszilagyi commented 1 year ago

Any updates on this?

amakhrov commented 1 year ago

We ended up patching the runner (we currently use cypress-cloud) to implement timeout/retry per spec. Hacky af, but our CI pipelines are much more stable now with this change.
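
For anyone who wants to roll something similar without patching their runner, here is a rough sketch of the idea using the Cypress Module API (this is not the actual cypress-cloud patch; the timeout and attempt counts are arbitrary):

    // run-spec.js -- per-spec timeout/retry wrapper (illustrative only)
    const cypress = require('cypress')

    const SPEC_TIMEOUT_MS = 10 * 60 * 1000 // assumed 10-minute budget per spec
    const MAX_ATTEMPTS = 2

    async function runSpecWithTimeout(spec) {
      for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
        const timeout = new Promise((_, reject) =>
          setTimeout(() => reject(new Error(`Timed out: ${spec}`)), SPEC_TIMEOUT_MS)
        )
        try {
          // cypress.run() resolves with the run results once the spec finishes
          return await Promise.race([cypress.run({ spec, browser: 'chrome' }), timeout])
        } catch (err) {
          console.error(`Attempt ${attempt} failed for ${spec}: ${err.message}`)
          if (attempt === MAX_ATTEMPTS) throw err
          // Note: this only stops waiting; a production watchdog would also kill
          // the stuck child process, as the shell scripts linked earlier do.
        }
      }
    }

    // Example: runSpecWithTimeout('cypress/e2e/login.cy.js')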

GarrisonD commented 1 year ago

TLDR: Add cy.wait(1000) or more at the end of the test to wait for all requests to finish.

I have an observation that could assist the Cypress team in fixing a potential bug in Cypress (if confirmed as a bug), or individuals like myself in applying a temporary workaround to address the issue.

I came across this thread when my test suite began encountering issues during execution in the CI environment, sometimes stalling midway and necessitating a restart. While restarting occasionally resolved the problem, it wasn't a consistent solution. Despite my attempts at investigation, I was unable to pinpoint the root cause.

Subsequently, I upgraded Cypress to the latest version and encountered a similar issue with a very straightforward test involving the Sign In form. This test includes visiting localhost, inputting login credentials, submitting the form, and then verifying the redirected URL.

The sign-in.cy.ts file wouldn't complete execution in Cypress. It failed to display the statistics for the passed/failed/skipped tests; the process simply hung. Given the simplicity of the test, it became evident that the problem lay with Cypress itself.

I speculated that the issue might be related to requests spawned by the main view of the application, to which users are redirected after a successful sign-in. As a quick workaround, I appended cy.wait(1000) to the test's end, and to my delight, it resolved the problem 🎉

However, I tried to find a more elegant solution utilizing cy.intercept('/api/**').as('api') followed by cy.wait('@api'). Regrettably, this approach didn't yield the desired outcome.

Consequently, I am currently relying on the slightly inelegant but functional cy.wait(1000) workaround, with the hope that this issue will eventually be addressed within Cypress itself. 💡 It's possible that all pending requests could be automatically canceled at the end of each test...

bahunov commented 1 year ago

Thanks for this valuable input!!!

On Sun, 20 Aug 2023 at 17:08, Ihor Dotsenko @.***> wrote:

TLDR: Add cy.wait(1000) or more at the end of the test to wait for all requests to finish.

I have an observation that could assist the Cypress team in fixing a potential bug in Cypress (if confirmed as a bug), or individuals like myself in applying a temporary workaround to address the issue.

I came across this thread when my test suite began encountering issues during execution in the CI environment, sometimes stalling midway and necessitating a restart. While restarting occasionally resolved the problem, it wasn't a consistent solution. Despite my attempts at investigation, I was unable to pinpoint the root cause.

Subsequently, I upgraded Cypress to the latest version and encountered a similar issue with a very straightforward test involving the Sign In form. This test includes visiting localhost, inputting login credentials, submitting the form, and then verifying the redirected URL.

The sign-in.cy.ts file wouldn't complete execution in Cypress. It failed to display the statistics for the passed/failed/skipped tests; the process simply hung. Given the simplicity of the test, it became evident that the problem lay with Cypress itself.

I speculated that the issue might be related to requests spawned by the main view of the application, to which users are redirected after a successful sign-in. As a quick workaround, I appended cy.wait(1000) to the test's end, and to my delight, it resolved the problem 🎉

However, I tried to find a more elegant solution utilizing cy.intercept('/api/').as('api') followed by **@.***'). Regrettably, this approach didn't yield the desired outcome.

Consequently, I am currently relying on the slightly inelegant but functional cy.wait(1000) workaround, with the hope that this issue will eventually be addressed within Cypress itself. 💡 It's possible that all pending requests could be automatically canceled at the end of each test...

— Reply to this email directly, view it on GitHub https://github.com/cypress-io/cypress/issues/8206#issuecomment-1685267659, or unsubscribe https://github.com/notifications/unsubscribe-auth/AFKDNFFVS6TKELSK42WQJ5LXWH42HANCNFSM4PWYC6YA . You are receiving this because you commented.Message ID: @.***>

stuart-clark commented 1 year ago

> @stuart-clark might be able to provide some updates running Cypress with the script in Circle CI.

Hi Martin! 👋 To clarify, we've always used GitHub Actions, not CircleCI.

We don't run into this issue anymore, so we are stable. Some notes on our configuration, off the top of my head:

While I still think there are issues here, we get around them by matrixing our jobs so that each runner only runs 3-6 spec files with very few it blocks each. Our largest spec file only has 6 it blocks, and each runner only does about 5-6 tests.

To do this efficiently (💸) on GitHub Actions you need to leverage actions/cache (https://github.com/actions/cache) and other strategies so you're not spending a ton of time on setup for a large number of separate runners.

For anyone facing this, I would really just suggest breaking your tests up as much as possible and using a matrix strategy or a script similar to what @martincarstens wrote.

Phonesis commented 1 year ago

Thanks @GarrisonD

We discovered this today as part of using the new Test Replay feature with Cypress v13. There was always a 1-minute wait in our after hook after an assertion around network responses failed.

It is definitely far from ideal to add a wait like this. There is clearly a bug here, perhaps around disposing of network call promises.

bahunov commented 1 year ago

@GarrisonD You mean add it in afterEach?


GarrisonD commented 1 year ago

@bahunov I have more than one test in a file, so yeah, I use the afterEach hook.

The key point here is to wait for in-flight requests; where and how you do it is a minor detail.
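
To make the workaround concrete, the afterEach variant looks roughly like this (the 1000 ms value is the arbitrary one from the comments above):

    // cypress/support/e2e.js -- sketch of the wait-for-requests workaround
    afterEach(() => {
      // give in-flight requests from the last visited page a chance to settle
      // before Cypress tears the test down
      cy.wait(1000)
    })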

zeldigas commented 1 year ago

Just want to share one more workaround for the hang problem. In our case we run spec files individually, and the hang occurred after test completion: for some reason Cypress failed to exit properly.

I ended up implementing a parallel watchdog in bash to track this.

Gist: https://gist.github.com/zeldigas/6d2041c51b96b98b4859017f55a9bc34
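
For those not on bash, the same watchdog idea can be sketched in Node: spawn Cypress and kill it if it stops producing output (the command and timeout are illustrative, not taken from the gist):

    // watchdog.js -- kill Cypress if it goes silent for too long (illustrative)
    const { spawn } = require('child_process')

    const IDLE_LIMIT_MS = 5 * 60 * 1000 // assumed: 5 minutes of silence means it hung

    const child = spawn('npx', ['cypress', 'run', ...process.argv.slice(2)], {
      stdio: ['inherit', 'pipe', 'inherit'],
    })

    let idleTimer
    const resetIdleTimer = () => {
      clearTimeout(idleTimer)
      idleTimer = setTimeout(() => {
        console.error(`No output for ${IDLE_LIMIT_MS / 1000}s, killing Cypress`)
        child.kill('SIGKILL')
      }, IDLE_LIMIT_MS)
    }

    child.stdout.on('data', (chunk) => {
      process.stdout.write(chunk)
      resetIdleTimer()
    })

    child.on('exit', (code) => {
      clearTimeout(idleTimer)
      process.exit(code === null ? 1 : code)
    })

    resetIdleTimer()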

jennifer-shehane commented 1 year ago

@Phonesis Please make sure to update to the latest version of the Cypress App. We've been actively addressing hanging issues related to Test Replay.

jennifer-shehane commented 1 year ago

Hi everyone, stalling and hanging within Cypress can be attributed to many different causes. We do work through hang/stall issues as quickly as we can, since we know this is the worst situation your tests can be stuck in, and it is an extra expense for CI. The causes we've found for hanging have been very wide-ranging, and we need clear reproducible examples in order to investigate.

I suggest opening a new issue describing any current hanging issues with as much information as you can provide for us to investigate, including debug logs. Everyone's issue in this thread could be caused by very different, unrelated things. I'll be closing this issue since it does not encompass a single cause that we can work toward resolving and close out. We have fixed several causes of hanging issues over the years that may have addressed many of these as well.

amakhrov commented 1 year ago

> Everyone's issue in this thread could be caused by very different things and unrelated

How would one know the root cause for their specific case? The symptoms in all cases are the same: the thing just gets stuck with no errors or other indication of something being wrong.

madardour commented 11 months ago

I also have this issue. I have 40 test (feature) files with parallel processing using the cypress-parallel plugin. For a long time I ran Cypress e2e tests in Azure DevOps on version 9.x without issues. Earlier this year I upgraded to v12.4; the upgrade was necessary because of a proxy, since v9.x (or I) was not able to reach a web server behind a proxy. Since the upgrade to v12, the Cypress process hangs. The process still appears in Task Manager but does nothing, and Azure DevOps waits until I kill the process manually. I have tried several versions (12.4, 12.10, 12.13, 12.15, 13.0) and the issue still occurs.

nagash77 commented 11 months ago

@madardour please open a new issue with a reproducible example and the Cypress team will be happy to investigate.

elenasch commented 10 months ago

I have been running into a similar issue with Cypress just hanging. In my case it was on a very particular new test suite that was added. What we found is that the app UI had performance issues due to excessive DOM element counts. This perf issue didn't surface to the naked eye, but it crashed several of our QA automation systems and frameworks (Jest, Selenium, and Cypress). The Cypress side effect was a hanging runner.

Hope that helps with reproduction and fixing this scenario.

Cypress 12.7.0. It didn't matter how large or small the machine resources running these tests were. The runner hung the same way, and I could see the memory of the Chrome browser growing over time, but other processes weren't affected as much.

We have ~4500 DOM elements on initial render and that number kept growing (anything over ~1500 is known to be a possible perf bottleneck).