jestjs / jest

Delightful JavaScript Testing.
https://jestjs.io
MIT License

100% CPU usage on --bail & --coverage combination #9401

Open · dubbha opened this issue 4 years ago

dubbha commented 4 years ago

🐛 Bug Report

To Reproduce

Steps to reproduce the behavior:

If there is a single failing test, Jest starts running coverage of untested files and pretty soon eats up all the CPU, preventing any use of the system, e.g. the mouse becomes unresponsive. It never releases the CPU, to the point that I have to restart the machine, at least on Windows 10.
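
Here is a minimal sketch of the kind of setup that triggers it (the file and test names below are illustrative, not from the real project): a single failing spec combined with `--bail --coverage` in a project that has many source files without their own tests.

```ts
// sample.failing.test.ts -- any single failure is enough to push Jest into
// the "Running coverage on untested files" phase shown in the output below.
describe('repro', () => {
  it('fails immediately so --bail kicks in', () => {
    expect(true).toBe(false);
  });
});

// Run the whole suite with both flags combined:
//   npx jest --bail --coverage
```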

Expected behavior

I don't expect my machine to hang, obviously. It might also make sense to skip coverage of untested files when the run bails. By the way, is there a way to prevent --coverage from running when a test fails and we bail out? Is there some way to configure that? If not, would it make sense as a feature? I don't really care about coverage until I fix the failing test.


envinfo

PS D:\git\TRI\linsi-ui> npx envinfo --preset jest
npx: installed 1 in 3.33s

  System:
    OS: Windows 10 10.0.17763
    CPU: (4) x64 Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
  Binaries:
    Node: 12.14.1 - C:\Program Files\nodejs\node.EXE
    Yarn: 1.13.0 - C:\Program Files (x86)\Yarn\bin\yarn.CMD
    npm: 6.8.0 - C:\Program Files\nodejs\npm.CMD
  npmPackages:
    jest: ^24.8.0 => 24.8.0

It kind of happens like this:

 ...
 PASS  src/services/Library/InstrumentUpdater/CustomInstrumentUpdater.test.ts
 PASS  src/components/ProgressIndicator/ProgressIndicator.component.test.tsx
 PASS  src/services/TemplateDataConvertor/ConditionalConvertor.service.test.ts
 FAIL  src/services/Auth/Auth.service.test.ts
  ● AuthService › should remove token from storage and call login method on relogin

    expect(jest.fn())[.not].toBeCalled()

    Matcher error: received value must be a mock or spy function

    Received has type:  function
    Received has value: [Function anonymous]

      55 |     sut.relogin()
      56 |     expect(tokenServiceMock.remove).toBeCalled()
    > 57 |     expect(sut.login).toBeCalled()
         |                       ^
      58 |   })
      59 | })
      60 |

      at Object.<anonymous> (src/services/Auth/Auth.service.test.ts:57:23)

Running coverage on untested files... PASS  src/services/ScheduleConvertor/ParamsConvertors/OnceScheduleParamsConvertor.test.ts
Running coverage on untested files... PASS  src/validators/validation-functions/SameArgumentsValue.test.ts
Running coverage on untested files...

Tests pass until one test fails, then coverage on untested files follows, and that's where things get nasty very fast if you aren't quick enough to hit Ctrl+C.

SimenB commented 4 years ago

I think ditching the coverage if tests fail makes sense. @thymikee @jeysal thoughts? We can probably get that into Jest 25

thymikee commented 4 years ago

Agree with @SimenB. I don't see value in processing coverage for a failed test and its related files (if they aren't processed by other code). Skipping it would also give a slightly faster feedback loop and less clutter to scroll through when identifying the failed expectation.

jeysal commented 4 years ago

This seems to conflate very different things: to me, coverage doesn't even require test cases that can pass or fail; it can be collected on any program execution. It would also introduce an unintuitive half-coverage run where code still runs instrumented but no coverage output is generated. Overall it sounds like a lot of architectural complexity, not like "do one thing" / separation of concerns. Maybe it's better to instead fix the performance of the "Collecting coverage from untested files" step, which has always struck me as a problem, even on passing runs in many projects.

SimenB commented 4 years ago

There is no performance issue in "Collecting coverage from untested files" (or rather, there is, but it's totally orthogonal to the issue reported here). The issue here is that bail is broken: it doesn't stop the other tests before reporting the run as complete. That means we're still running tests on a bunch of cores, then spawn new workers to collect coverage on uncovered files, bombing the CPU with (numberOfCores - 1) * 2 processes.
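
To make the arithmetic concrete, here is a tiny sketch (an illustration, not Jest code) of the worker count described above, assuming Jest's default of one worker per core minus one:

```ts
// worker-math.ts -- back-of-the-envelope only: after a "bailed" run the old
// test workers are still draining while a fresh pool is spawned to collect
// coverage on untested files.
import * as os from 'os';

const cores = os.cpus().length;        // 4 on the reporter's i5-3470
const testWorkers = cores - 1;         // default pool, still running tests
const coverageWorkers = cores - 1;     // second pool, spawned for coverage
console.log(`${testWorkers + coverageWorkers} busy node processes`); // "6 busy node processes"
```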

jeysal commented 4 years ago

Okay, that's a nasty bug. The rest still stands, though: I think it would be better to fix that than to bail out of coverage collection on failure.

rodrigopsasaki commented 4 years ago

Is there any plan to resolve this bug? Or maybe a known workaround while there isn't a permanent solution?
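
One possible interim workaround (a sketch, not an official recommendation, and it does mean executing the suite twice): run a fast bail pass without coverage, and only collect coverage once the suite is green, so `--bail` and `--coverage` never run in the same invocation. The wrapper script below is hypothetical.

```ts
// run-tests.ts -- hypothetical wrapper, not part of Jest; invoke it instead
// of calling jest directly (e.g. `npx ts-node run-tests.ts`).
import { spawnSync } from 'child_process';

function runJest(args: string[]): number {
  const result = spawnSync('npx', ['jest', ...args], { stdio: 'inherit', shell: true });
  return result.status ?? 1;
}

// Pass 1: fail fast, with no coverage instrumentation and no coverage workers.
const testStatus = runJest(['--bail']);
if (testStatus !== 0) {
  process.exit(testStatus); // a failure stops CI here, before any coverage work
}

// Pass 2: the suite is green, so collect coverage without --bail.
process.exit(runJest(['--coverage']));
```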

emzet commented 2 years ago

Any progress here? It's been almost two years since the last comment...

SimenB commented 2 years ago

PR welcome 🙂

github-actions[bot] commented 1 year ago

This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

khitrenovich commented 1 year ago

> This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

Yes, it's still a problem...

ctsstc commented 1 year ago

I noticed my RAM was being consumed and my test suite seemed stalled, while everything on my system started reporting as non-responsive. There were a ton of spawned node processes, each using 500-700MB+ of RAM. After killing the test runner, those processes went away and I got my RAM back.

Part of me believes that if you're asking for coverage, it shouldn't matter whether a test fails. In our scenario, though, we would like to halt our integration process on a failed test while still collecting coverage, and we also don't want to run the suite twice (once for tests and again for coverage). This has never seemed to work as expected, and lately we've been running into major resource consumption that seems related to this issue.
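
For anyone hitting the RAM side of this, here is a config sketch that can cap the damage (assuming Jest 29+ for `workerIdleMemoryLimit`); it is a mitigation, not a fix for the underlying bail/coverage bug, and the values are illustrative.

```ts
// jest.config.ts -- illustrative values; tune for your own machine/CI.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  bail: 1,
  // Cap parallelism so a runaway test pool plus a coverage pool cannot
  // saturate every core on the machine.
  maxWorkers: 2,
  // Recycle any worker whose idle memory exceeds this limit (Jest >= 29),
  // which keeps leaked workers from holding 500-700MB each.
  workerIdleMemoryLimit: '512MB',
};

export default config;
```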

github-actions[bot] commented 4 months ago

This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

khitrenovich commented 4 months ago

> if you're asking for coverage then it shouldn't matter if a test fails

I have to disagree here: with bail, some tests won't run at all after a failure, so why bother collecting coverage at all?