simkessy opened this issue 4 years ago
We will need a repro that can be downloaded and analyzed.
Also, please make sure to clear the cache just in case, e.g. with jest --clear-cache
Oh, --clear-cache fixed it.
Thanks, that's good to know. Still weird
I spoke too soon, it seems like the issue is this helper function:
export function setItemsToBeReviewed(itemIds) {
  sessionStorage.setItem(ITEMS_TO_BE_REVIEWED_KEY, JSON.stringify(itemIds));
}
We will need a repro that can be downloaded and analyzed.
This is still the case 🙂
It also sounds like JSDOM is leaking.
Not sure if it's related, but I get a heap leak for a simple expect:
let items = tree.root.findAllByProps({ testID: 'CrewItem.Employee' })
expect(items).toHaveLength(8) // hangs and throws a heap error after 30-60 seconds
expect(items.length).toEqual(8) // works ok
Clearing the cache doesn't help.
I am facing similar issues
Same issue here as well. (using ts-jest)
I got it during a full run in which some tests failed. I spent some time debugging, taking memory snapshots, and comparing, but I couldn't find any leaks. I ran it with the inspector in watch mode, in band, took a snapshot after the first run, then ran again and took another. Is that the best way to find leaks?
I think I'm running into the same issue. Created a new app recently with Jest 26. Using Enzyme for snapshot testing. Updated a test to use mount instead of shallow, and now it gets out-of-memory errors every time I run it, even if it's the only test running. Node is out there using something like 1.5GB. This is with or without coverage, and I've tried clearing the cache as well. I can provide my repo as an example if needed.
I posted an issue to Enzyme https://github.com/enzymejs/enzyme/issues/2405#issuecomment-646957124
Below is the error I get on this test
Test suite failed to run
Call retries were exceeded
at ChildProcessWorker.initialize (node_modules/jest-runner/node_modules/jest-worker/build/workers/ChildProcessWorker.js:191:21)
<--- Last few GCs --->
[3466:0x39d1050] 32366 ms: Mark-sweep 1390.7 (1425.4) -> 1390.2 (1425.9) MB, 714.3 / 0.0 ms (average mu = 0.110, current mu = 0.013) allocation failure scavenge might not succeed
[3466:0x39d1050] 33470 ms: Mark-sweep 1391.0 (1425.9) -> 1390.5 (1426.4) MB, 1091.8 / 0.0 ms (average mu = 0.053, current mu = 0.010) allocation failure scavenge might not succeed
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0x23bdb465be1d]
1: StubFrame [pc: 0x23bdb465d1df]
Security context: 0x1e8e53f9e6c1 <JSObject>
2: printBasicValue(aka printBasicValue) [0x2c6a1c7d28e1] [<root>/node_modules/jest-snapshot/node_modules/pretty-format/build/index.js:~108] [pc=0x23bdb4dcdac1](this=0x00329d2826f1 <undefined>,val=0x3125160c22e1 <String[14]: onSubMenuClick>,printFunctionName=0x00329...
I tried removing random test suites from my tests, but Jest still leaks memory, so there is no particular test causing the leak.
I had a similar problem where I used to run into an Out of Memory error when Jest started to do coverage on "untested files". Using v8 as the coverage provider solved the issue for me. However, it's an experimental feature (as per the documentation): https://jestjs.io/blog/2020/01/21/jest-25#v8-code-coverage
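For reference, switching providers is a one-line config change; a minimal sketch (coverageProvider and collectCoverage are real Jest options, the rest of the file is assumed):
// jest.config.js -- minimal sketch; only coverageProvider matters here
module.exports = {
  collectCoverage: true,
  coverageProvider: 'v8', // the default is 'babel'
};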
After doing some research, it seems this memory leak has been an ongoing issue since 2019 (Jest 22), so I wanted to consolidate some notes here for posterity. Past issues have been related to graceful-fs, and I think some people have solved it via a hack/workaround that removes graceful-fs and then re-adds graceful-fs after running Jest. One troubleshooting thread was looking at compileFunction in the vm
package as a potential cause. It seems that jest, webpack-dev-server, babel, and create-react-app all use graceful-fs as a dependency. The memory leak was supposed to be fixed in a newer release of Jest, but there may have been a regression since it is popping up again. I can confirm everything was working fine until a substantial number of Jest tests were created in our environment; now the heap overflows on our CI machine once the heap grows larger than the allocated memory due to the leak. I've tried using 1 worker, runInBand, etc. without success.
The common causes of the issues I've seen are collecting coverage and graceful-fs. I haven't done an in-depth analysis of those issues, but seeing that they are both filesystem-related, and having solved my own issue which was related to file imports, I suspect they are some version of the same issue I was having.
Wanted to provide the solution I found so others may reap the benefits.
The cause: using namespace imports of the form import * as whatever from 'whatever'.
The solution: using the form import { whatINeed } from 'whatever' instead dramatically reduced the memory accumulation.
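A minimal sketch of the two import styles (the module path and binding names here are made up for illustration):
// Namespace import: binds the entire module object
import * as helpers from './helpers';
helpers.whatINeed();

// Named import: binds only what is actually used
import { whatINeed } from './helpers';
whatINeed();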
Oftentimes when this happens, I delete the src folder (provided it's under version control), run git checkout . and jest --clearCache, and then running the tests works as before. In my case I'm not sure it has anything to do with an upgrade, but since it has occurred a few times over the last 6 months I thought I'd share.
+1, @alexfromapex's solution did not work for me.
Jest 26.6.3 Node 14.15.4
Dump: https://pastebin.com/Mfwi2iiA
It happens after some re-runs on any CI server (my runners are Docker containers). It always works normally after a fresh boot, and after some runs it breaks again, only coming back after a new reboot. I tried with 1GB RAM and 2GB RAM machines, same result. It doesn't seem to happen on 8GB+ RAM hardware (my local machine).
Some other info I've gathered: it always happens after ~5 minutes of running, and every time the test log has the same size (it might be failing at the same spot).
I have the same issue.
I have a very similar issue.
My test:
const first = [ div_obj, p_obj, a_obj ]; // array with three DOM elements
const second = [ div_obj, p_obj, a_obj ]; // array with same DOM elements
second.push( pre_obj ); // add new obj
expect(first).toEqual(second); // compare the two arrays: one with 3 elements, the other with 4
The test should fail within 250 ms (the timeout), but it takes 40 seconds and spits out this message:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
...
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0x55e78ec781b9]
Security context: 0x086b49dc08d1 <JSObject>
1: toString [0x2cb560879521](this=0x1a95f0796eb1 <Object map = 0x2b3cdd5e4839>)
2: printComplexValue(aka printComplexValue) [0x329610605059] [/home/joe/../node_modules/pretty-format/build/index.js:~198] [pc=0x1413a8b0e5ac](this=0x059db4cc04b1 <undefined>,0x1a95f0796eb1 <Object map = 0x2b3cdd5...
Somehow I believe the stack trace points to printComplexValue. I also tried toMatchObject, but got exactly the same result.
Jest: v26.6.3
I have a similar issue with:
Node: 15.5.1 Jest: 25.5.4
It doesn't seem to happen on 8GB+ RAM hardware (my local machine).
Update: it just happened on my local machine with 8GB RAM, but this time in watch mode and outside Docker, running a single test, after consecutive file saves (without waiting for the tests to finish).
Here is the dump: https://pastebin.com/jrDkCYiH
IDK if this helps, but here is the memory status when it happened:
[klarkc@ssdarch ~]$ free -m
               total        used        free      shared  buff/cache   available
Mem:            7738        3731        2621         473        1385        3226
Swap:           8191        2133        6058
I had this same error on my GitLab CI, and I just temporarily added the --clearCache Jest option; it works well.
We see this regularly on our tests at https://github.com/renovatebot/renovate
We see this regularly on our tests -- the first one succeeds then the second fails.
Have no idea why this worked for me, but I had accidentally removed 'js' from moduleFileExtensions in my jest.config.ts, and that's when my heap issues started going wild. When I added it back, my heap issues went away. So now I have moduleFileExtensions: ['js', 'ts']. Hopefully this helps someone!
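A minimal sketch of the relevant setting (shown here in CommonJS form; a jest.config.ts would use export default instead):
// jest.config.js -- keep 'js' in the list even for a TypeScript project,
// since most packages in node_modules resolve to .js files
module.exports = {
  moduleFileExtensions: ['js', 'ts'],
};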
This also happened to me: https://github.com/icecream17/solver/runs/2562701426?check_suite_focus=true in https://github.com/icecream17/solver/actions/runs/834149179
Failing commit: https://github.com/icecream17/solver/commit/7956c084b9dbbdfb721a2c56b7bea66eb83cc555, fixed commit: https://github.com/icecream17/solver/commit/06064acf014a0adb73d918cff3ea18657b3b2feb Update: I managed to use typescript again: https://github.com/icecream17/solver/commit/1b9f406ef14ece14c14ce6f14ccacac7f7b247ad
(Sorry for the unhelpful links)
Since this is a heap issue, maybe triggering garbage collection and de-referencing unused or global variables would also help? Look out for things that leave lots of memory hanging around in the heap.
I was facing the same error on GitHub actions. I was able to pin down the problem in my case to the currently experimental ESM support #9430.
For comparison here are two profiling screenshots (I followed this article for instructions):
I can prevent this error by using fake timers:
jest.test.setup.js:
jest.useFakeTimers();
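A sketch of how that setup file can be wired in, assuming it lives at the project root:
// jest.config.js -- run jest.test.setup.js after the test framework is installed
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/jest.test.setup.js'],
};

// jest.test.setup.js
jest.useFakeTimers();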
I was running into this issue, and it appears that upgrading to Jest 27 fixed it.
Upgrading from Jest 23 to 24 is triggering this for me. Are folks who are on Jest 27 relieved from this issue?
Trying to migrate from 26.6.3 to 27.2.5 and got the same issue.
I think I managed to reproduce it by trying to add 1000000 PureSudoku to an array in global scope (not in a test block).
I had what I thought may be a similar error. Using --coverage while running inside a container like the node:alpine Docker image was a problem for me. I really didn't need to be running tests in a container, so maybe my comment/solution here https://github.com/facebook/jest/issues/5837#issuecomment-1002239562 may help someone.
I had the same problem (also while collecting coverage data, on GitHub Actions/CI) and fixed it by limiting the number of workers: maxWorkers: 2 in jest.config.js, or -w 2 as a command-line parameter.
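Both forms for reference (the value 2 is just what worked for this commenter):
// jest.config.js
module.exports = {
  maxWorkers: 2,
};
// or on the command line:
//   npx jest --maxWorkers=2   (short form: -w 2)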
I suspect that this issue might be related to the way the objects being compared are printed. Filed an issue with a minimal reproduction here: https://github.com/facebook/jest/issues/12364
In effect, Jest will stop short if it's trying to print a very deeply nested object, but doesn't account for "total nodes" so for objects that are fairly shallow but very wide (such as many React/Enzyme elements) it will try to print the whole object and will run out of memory while constructing the string.
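A hypothetical sketch of that failure mode: a shallow but very wide value that a failing matcher forces Jest to serialize in full (the size and names are made up):
// A wide, shallow array: little nesting, but a huge number of nodes.
const wide = Array.from({ length: 1_000_000 }, (_, i) => ({ id: i, label: `node-${i}` }));

test('wide value diff', () => {
  // On failure, Jest pretty-prints both sides of the comparison;
  // for values like this, building that string can exhaust the heap.
  expect(wide).toEqual([]);
});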
I was having a similar issue where random tests would time out. I added three options to my jest.config.js file and it seems to have helped. It seems to have been happening on Windows, macOS, and in our GitLab pipeline.
testTimeout: 10000,
maxConcurrency: 3,
maxWorkers: '50%',
Surprisingly the tests execute faster as well. By ~15-20%.
Hi,
I'm running tests on Ubuntu 20.04 with Jest 28.1.3, and I'm seeing Jest allocate 13GB of RAM running 75 tests. I also ran the tests on a MacBook M1 and there is no such issue; it takes only 1GB of RAM.
My guess is that each worker is allocated very inefficiently. Also, using @rstock08's maxWorkers and maxConcurrency settings helps to mitigate the issue.
I also had this issue, and I had to change my collectCoverageFrom array.
Before I had:
"collectCoverageFrom": [ "**/*.{js,jsx}", "!**/node_modules/**", ... ]
But for some reason it seemed to still be running over the node_modules folder. I changed this by removing the node_modules exclusion and changing my patterns to include just the directories I needed:
"collectCoverageFrom": [ "<rootDir>/*.js", "<rootDir>/components/**",
I don't like having to do this, because it may be easy to miss future coverage, but all attempts to exclude the node_modules folder either had other errors or reintroduced the "out of memory" issue.
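A sketch of the narrowed configuration (the directory names and globs here are hypothetical; adjust them to the actual source layout):
// jest.config.js -- enumerate source directories instead of relying on
// a "!**/node_modules/**" exclusion
module.exports = {
  collectCoverageFrom: [
    '<rootDir>/*.js',
    '<rootDir>/components/**/*.{js,jsx}',
    '<rootDir>/utils/**/*.{js,jsx}', // hypothetical additional source folder
  ],
};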
I had this issue happen even with Jest 29. This issue was caused by an unexpected infinite loop within my code.
This might be why it's hard to track down.
Issue happening with jest v29 inside docker container. Tests run fine on host machine.
We are also struggling with this issue on Jest 26. Upgrading to Jest 29 didn't work.
Specifying NODE_OPTIONS="--max-old-space-size=2048" helped us to solve the issue.
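For anyone unsure where that goes, it is an environment variable for the Node process that runs Jest, e.g.:
NODE_OPTIONS="--max-old-space-size=2048" npx jest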
Sadly, this didn't solve our issue; however, I can run our unit tests in two sessions, which solves it for now.
I had the same problem (also while collecting coverage data, on GitHub Actions/CI) and fixed it by limiting the number of workers: maxWorkers: 2 in jest.config.js, or -w 2 as a command-line parameter.
This solved my issue: https://stackoverflow.com/a/68307839/9331978
For me this happened only on CI.
Turns out it was because on CI the default is runInBand, so adding this flag locally helped me replicate the issue on my local machine.
For me it happened when an exception was thrown inside the callback given to a useLayoutEffect(). It just ran forever, consuming more and more memory.
Hope this helps someone in this thread 🙏
Updating my jest.config.ts with coverageProvider: 'v8' and maxWorkers: 2 did the trick for me!
I tried a few different solutions that didn't work for me:
--max-old-space-size
--no-compilation-cache
--runInBand
--expose-gc
What did work for me:
Limiting the idle memory per worker using the flag workerIdleMemoryLimit
I'm also limiting the number of workers so maybe it was a combination of the solutions.
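A sketch of that combination (the limit value is an assumption; tune it to the available memory):
// jest.config.js -- workerIdleMemoryLimit requires Jest 29+; a worker is
// restarted once its idle memory usage exceeds the limit
module.exports = {
  workerIdleMemoryLimit: '512MB', // assumed value
  maxWorkers: '50%',
};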
testTimeout: 10000, maxConcurrency: 3, maxWorkers: '50%',
Doesn't help in my case; it's still stuck in the test.
Jest 29.4.2 here, happening with a single, rather simple test, and it boils down to:
expect(s3.client).toHaveReceivedCommandWith(PutObjectCommand, {
  Bucket: IMAGES_BUCKET,
  Key: `${a}/${b}/info.json`,
  Body: expect.jsonMatching(infoFile),
  ContentType: "application/json",
});
The s3 client comes from import { mockClient } from "aws-sdk-client-mock":
"aws-sdk-client-mock": "2.2.0", // happens with 3.0.0 as well
"aws-sdk-client-mock-jest": "2.2.0",
"@aws-sdk/client-s3": "3.414.0",
There's nothing fancy in there; IMAGES_BUCKET is actually undefined, a and b are some constant strings, and infoFile is:
const infoFile: SomeType = {
  imageKeys: [imageObject1.Key!, imageObject2.Key!], // strings
  taskToken, // string
  location, // shallow dict
};
Commenting out parts of it does not help, but as soon as I comment out the whole expectation my test turns green. With it I constantly get:
<--- Last few GCs --->
[186230:0x758f6a0] 55581 ms: Scavenge 2044.5 (2081.3) -> 2043.6 (2085.5) MB, 6.0 / 0.0 ms (average mu = 0.244, current mu = 0.218) allocation failure;
[186230:0x758f6a0] 56400 ms: Mark-sweep (reduce) 2045.6 (2085.5) -> 2044.3 (2079.8) MB, 465.3 / 0.0 ms (+ 154.6 ms in 32 steps since start of marking, biggest step 6.9 ms, walltime since start of marking 638 ms) (average mu = 0.255, current mu = 0.268
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xb85bc0 node::Abort() [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
2: 0xa94834 [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
3: 0xd667f0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
4: 0xd66b97 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
5: 0xf442a5 [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
6: 0xf451a8 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
7: 0xf556b3 [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
8: 0xf56528 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
9: 0xf30e8e v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
10: 0xf32257 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
11: 0xf1342a v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
12: 0x12d878f v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
13: 0x17055f9 [/home/xxx/.nvm/versions/node/v18.18.1/bin/node]
I tried maxWorkers, runInBand, workerIdleMemoryLimit, and more, but to no avail.
Running on Win11 inside WSL 2 Ubuntu LTS, Node 18.18.1. Runs "fine" on colleagues' M2 Macs (except that it grills them).
What's also interesting: I use similar expectations right before and after the problematic one, and they run just fine.
Have you tried node 21.1+?
EDIT: oh, a specific assertion - that's weird. Could you put together a minimal reproduction?
🐛 Bug Report
I upgraded from 24.x to 26.0.0, but now a test that was passing no longer is. Running the test takes a long time to complete, then I get this error.
To Reproduce
My test:
code:
jest.config:
envinfo
System:
  OS: Linux 4.15 Ubuntu 18.04.4 LTS (Bionic Beaver)
  CPU: (36) x64 Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz
Binaries:
  Node: 14.1.0 - ~/.nvm/versions/node/v14.1.0/bin/node
  Yarn: 1.22.4 - /usr/bin/yarn
  npm: 6.14.4 - ~/.nvm/versions/node/v14.1.0/bin/npm
npmPackages:
  jest: ^26.0.0 => 26.0.0