jestjs / jest

Delightful JavaScript Testing.
https://jestjs.io
MIT License

Memory Leak on ridiculously simple repo #7874

Open javinor opened 5 years ago

javinor commented 5 years ago

You guys do an awesome job and we all appreciate it! 🎉

🐛 Bug Report

On a work project we discovered a memory leak choking our CI machines. Going down the rabbit hole, I was able to recreate the memory leak using Jest alone.

Running many test files causes a memory leak. I created a stupid simple repo with only Jest installed and 40 tautological test files.

jest-memory-leak

I tried a number of solutions from https://github.com/facebook/jest/issues/7311 but to no avail. I couldn't find any solutions in the other memory related issues, and this seems like the most trivial repro I could find.

Workaround :'(

We run tests with the --expose-gc flag and add this to each test file:

afterAll(() => {
  global.gc && global.gc()
})
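
(For reference, running with --expose-gc typically means invoking Jest through Node directly; a sketch of such a package.json script, where any flags beyond --expose-gc are optional:)

"scripts": {
  "test": "node --expose-gc ./node_modules/.bin/jest --runInBand --logHeapUsage"
}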

To Reproduce

Steps to reproduce the behavior:

git clone git@github.com:javinor/jest-memory-leak.git
cd jest-memory-leak
npm i
npm t

Expected behavior

Each test file should take the same amount of memory (give or take)

Link to repl or repo (highly encouraged)

https://github.com/javinor/jest-memory-leak

Run npx envinfo --preset jest

Paste the results here:

System:
    OS: macOS High Sierra 10.13.6
    CPU: (4) x64 Intel(R) Core(TM) i7-5557U CPU @ 3.10GHz
  Binaries:
    Node: 10.15.0 - ~/.nvm/versions/node/v10.15.0/bin/node
    Yarn: 1.12.3 - /usr/local/bin/yarn
    npm: 6.4.1 - ~/.nvm/versions/node/v10.15.0/bin/npm
  npmPackages:
    jest: ^24.1.0 => 24.1.0
MichalBurgunder commented 3 years ago

For those wanting to get their CI pipeline going with jest@26, I found a workaround that works for me (this issue comment helped, combined with this explanation). I increased the maximum old space size for Node, and although the leak persists, my CI pipeline seems to be doing better/passing. Here is my package.json entry:

"test-pipeline": "node --max-old-space-size=4096 ./node_modules/.bin/jest --runInBand --forceExit --logHeapUsage --bail"

What else I tried and scraped together from a few other issues:

barry800414 commented 3 years ago

Hey guys, my team also encountered this issue, and we would like to share our solution.

Firstly, we need to understand that Node.js decides on its own when to reclaim unused memory, based on its garbage collection algorithm. If we don't configure it, Node.js will do things its own way.

And we have several ways to configure / limit how garbage collection works.

Secondly, I think we have two types of memory leak issue.

For Type 1, it's easier to solve. We can use the --expose-gc flag and run global.gc() in each test file to reclaim unused memory. Or, we can add --max-old-space-size=xxx so that Node.js collects all known unused memory once the heap reaches that limit.

Before adding --max-old-space-size=1024:

 PASS  src/xx/x1.test.js (118 MB heap size)
 PASS  src/xx/x2.test.js (140 MB heap size)
 ...
 PASS  src/xx/x30.test.js (1736 MB heap size)
 PASS  src/xx/x31.test.js (1746 MB heap size)
...

After adding --max-old-space-size=1024:

 PASS  src/xx/x1.test.js (118 MB heap size)
 PASS  src/xx/x2.test.js (140 MB heap size)
 ...
 PASS  src/xx/x20.test.js (893 MB heap size)
 PASS  src/xx/x21.test.js (916 MB heap size)

// -> (every time the heap reaches 1024 MB, unused memory is collected)

 PASS  src/xx/x22.test.js (382 MB heap size)
 ...

(Note: if we specify a lower size, it will of course use less memory, but collect it more frequently)

For Type 2, we need to investigate where the memory leaks actually happen. This is more difficult.

Because in our team the main cause is a Type 1 issue, our solution is adding --max-old-space-size=1024 to Node.js while running tests.


Finally, I would like to explain why --expose-gc works in the previous comment.

In the Jest source code, if we add --logHeapUsage, Jest will call global.gc() if gc exists. In other words, if we add --logHeapUsage to Jest and --expose-gc to Node.js, the current version of Jest will force Node.js to collect all known unused memory after each test file.

However, I don't really think adding --logHeapUsage and --expose-gc is a good solution to this issue, because it's more like we "accidentally" solve it.


Note: --runInBand asks Jest to run all test files sequentially (by default, Jest runs tests in parallel across several workers); --logHeapUsage logs heap memory usage after each test file.
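
Putting those pieces together, the kind of command described here looks roughly like this (a sketch using the 1024 MB figure from the example above):

node --max-old-space-size=1024 ./node_modules/.bin/jest --runInBand --logHeapUsage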

rgoldfinger-quizlet commented 3 years ago

Something that might be helpful for those debugging a seeming memory leak in your Jest tests:

Node's default memory limit applies separately to each worker, so make sure that the total memory available > number of workers * the memory limit.

When the combined memory limit of all of the workers is greater than the available memory, Node will not realize that it needs to run GC, and memory usage will climb until it OOMs.

Setting the memory limit correctly causes Node to run GC much more often.

For us, the effect was dramatic. When we had --max-old-space-size=4096 and three workers on a 6 GB machine, memory usage increased to over 3 GB per worker and eventually OOM'd. Once we set it to 2 GB, memory usage stayed below 1 GB per worker, and the OOMs went away.
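
One way to apply such a per-worker limit is through NODE_OPTIONS, since each spawned Node process reads it from the environment (a sketch; the 2048 MB / 3 workers figures echo the comment above and should be tuned to your machine):

# the setup described above: 3 workers capped at 2 GB each on a 6 GB machine
NODE_OPTIONS=--max-old-space-size=2048 npx jest --maxWorkers=3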

tsairinius commented 3 years ago

I believe I may be experiencing the same problem as reported in this thread.

Some observations I've made when running node --inspect-brk --expose-gc ./node_modules/.bin/jest --runInBand --logHeapUsage:

My environment:

  System:
    OS: macOS 10.15.7
    CPU: (4) x64 Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz
  Binaries:
    Node: 14.15.1 - /usr/local/bin/node
    npm: 6.14.8 - /usr/local/bin/npm
  npmPackages:
    jest: 26.6.0 => 26.6.0

The command I use to monitor heap usage during tests: node --inspect-brk --expose-gc ./node_modules/.bin/jest --runInBand --logHeapUsage

Sample results:

pages/Web/Account/PersonalInfo/__tests__/PersonalInfo.test.js (6.652 s, 59 MB heap size)
pages/Web/Account/DeliverySignup/__tests__/DeliverySignupForm.test.js (85 MB heap size)
pages/Web/Account/DonationSignup/__tests__/DonationSignupForm.test.js (87 MB heap size)
pages/Web/DonorFulfillment/DonorFulfillment.test.js (96 MB heap size)
pages/Web/Account/PersonalInfo/__tests__/PasswordSection.test.js (109 MB heap size)
pages/Web/Account/OrganizationInfo/__tests__/OrgAddressSection.test.js (112 MB heap size)
pages/Web/Account/_helpers/__tests__/SettingsDropdown.test.js (121 MB heap size)
App.test.js (124 MB heap size)
pages/Web/Account/OrganizationInfo/__tests__/OrgNameSection.test.js (135 MB heap size)
components/Dropdown/Dropdown.test.js (153 MB heap size)
components/Wizard/Wizard.test.js (148 MB heap size)
pages/Web/Account/OrganizationInfo/__tests__/OrganizationInfo.test.js (155 MB heap size)
pages/Web/Account/RecipientAccountSettings/RecipientAccountSettings.test.js (160 MB heap size)
pages/Web/Account/PersonalInfo/__tests__/PersonalNameSection.test.js (167 MB heap size)
pages/Web/Account/PersonalInfo/__tests__/EmailSection.test.js (173 MB heap size)
pages/Web/Account/DonationSignup/__tests__/DonationSignup.test.js (180 MB heap size)
pages/Web/Account/_helpers/__tests__/OptIn.test.js (186 MB heap size)
components/Breadcrumbs/Breadcrumbs.test.js (184 MB heap size)
pages/Web/Account/DeliverySignup/__tests__/DeliverySignup.test.js (185 MB heap size)
components/UserAvailability/__tests__/TimeBlock.test.js (179 MB heap size)
components/Modal/Modal.test.js (185 MB heap size)
pages/Web/Account/_helpers/__tests__/SettingsTextArea.test.js (195 MB heap size)
pages/Web/Account/PersonalInfo/__tests__/PhoneSection.test.js (190 MB heap size)
pages/Web/Account/DropoffPreferences/DropoffPreferences.test.js (183 MB heap size)
pages/Web/RequestSupplies/__tests__/ConfirmDropoffPreferences.test.js (181 MB heap size)
pages/Web/Account/_helpers/__tests__/SettingsAvailability.test.js (187 MB heap size)
pages/Web/Account/_helpers/__tests__/SettingsCard.test.js (186 MB heap size)
pages/Web/RequestSupplies/__tests__/ConfirmContactInfo.test.js (181 MB heap size)
pages/Web/Account/_helpers/__tests__/SettingsChild.test.js (186 MB heap size)
pages/Web/Account/_helpers/__tests__/SettingsEditManager.test.js (186 MB heap size)
pages/Web/Account/DonorAccountSettings/DonorAccountSettings.test.js (180 MB heap size)
pages/Web/Account/DeliveryPreferences/DeliveryPreferences.test.js (185 MB heap size)
pages/Web/Account/DonationPreferences/DonationPreferences.test.js (184 MB heap size)
components/UserAvailability/__tests__/UserAvailability.test.js (181 MB heap size)
pages/Web/Account/_helpers/__tests__/AccountSettings.test.js (182 MB heap size)
pages/Web/Account/_helpers/__tests__/SettingsParent.test.js (182 MB heap size)
pages/Web/Account/_helpers/__tests__/VolunteerStatus.test.js (182 MB heap size)
pages/Web/Account/_helpers/__tests__/SettingsContainer.test.js (187 MB heap size)
components/RouteBreadcrumbs/RouteBreadcrumbs.test.js (188 MB heap size)

To help isolate the problem, I commented out all of my tests and replaced them with a dummy test case in each of my test files:

test("dummy test", () => {
    expect(1).toBe(1);
})

The results after doing so still show memory growth, but heap usage now increases at a more linear rate of ~3 MB per test file (where each file now only contains the dummy test):

PASS  src/pages/Web/RequestSupplies/__tests__/ConfirmDropoffPreferences.test.js (47 MB heap size)
 PASS  src/pages/Web/Account/DonationSignup/__tests__/DonationSignupForm.test.js (54 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsEditManager.test.js (57 MB heap size)
 PASS  src/pages/Web/Account/DeliverySignup/__tests__/DeliverySignupForm.test.js (60 MB heap size)
 PASS  src/components/UserAvailability/__tests__/UserAvailability.test.js (63 MB heap size)
 PASS  src/pages/Web/Account/PersonalInfo/__tests__/PhoneSection.test.js (66 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/OptIn.test.js (69 MB heap size)
 PASS  src/pages/Web/Account/DonationPreferences/DonationPreferences.test.js (72 MB heap size)
 PASS  src/pages/Web/Account/DonationSignup/__tests__/DonationSignup.test.js (75 MB heap size)
 PASS  src/pages/Web/Account/OrganizationInfo/__tests__/OrgNameSection.test.js (78 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsContainer.test.js (81 MB heap size)
 PASS  src/pages/Web/Account/DropoffPreferences/DropoffPreferences.test.js (84 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/AccountSettings.test.js (87 MB heap size)
 PASS  src/pages/Web/Account/RecipientAccountSettings/RecipientAccountSettings.test.js (90 MB heap size)
 PASS  src/components/RouteBreadcrumbs/RouteBreadcrumbs.test.js (93 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsDropdown.test.js (96 MB heap size)
 PASS  src/components/Modal/Modal.test.js (99 MB heap size)
 PASS  src/pages/Web/Account/PersonalInfo/__tests__/EmailSection.test.js (101 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsTextArea.test.js (105 MB heap size)
 PASS  src/components/Breadcrumbs/Breadcrumbs.test.js (107 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsAvailability.test.js (110 MB heap size)
 PASS  src/pages/Web/RequestSupplies/__tests__/RequestSupplies.test.js (113 MB heap size)
 PASS  src/pages/Web/RequestSupplies/__tests__/ConfirmContactInfo.test.js (116 MB heap size)
 PASS  src/pages/Web/Account/DeliveryPreferences/DeliveryPreferences.test.js (119 MB heap size)
 PASS  src/pages/Web/Account/PersonalInfo/__tests__/PersonalInfo.test.js (122 MB heap size)
 PASS  src/App.test.js (125 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsCard.test.js (127 MB heap size)
 PASS  src/pages/Web/Account/OrganizationInfo/__tests__/OrgAddressSection.test.js (130 MB heap size)
 PASS  src/pages/Web/Account/DeliverySignup/__tests__/DeliverySignup.test.js (130 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/VolunteerStatus.test.js (130 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsParent.test.js (130 MB heap size)
 PASS  src/pages/Web/Account/PersonalInfo/__tests__/PersonalNameSection.test.js (130 MB heap size)
 PASS  src/pages/Web/DonorFulfillment/DonorFulfillment.test.js (130 MB heap size)
 PASS  src/pages/Web/Account/PersonalInfo/__tests__/PasswordSection.test.js (130 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsChild.test.js (131 MB heap size)
 PASS  src/components/Wizard/Wizard.test.js (131 MB heap size)
 PASS  src/components/UserAvailability/__tests__/TimeBlock.test.js (131 MB heap size)
 PASS  src/pages/Web/Account/OrganizationInfo/__tests__/OrganizationInfo.test.js (131 MB heap size)
 PASS  src/pages/Web/Account/DonorAccountSettings/DonorAccountSettings.test.js (130 MB heap size)

The suggestion of using --max-old-space-size does seem to resolve the issue for me, although I haven't yet added my original tests back into my files:

node --max-old-space-size=70 --expose-gc ./node_modules/.bin/jest --runInBand --logHeapUsage

Heap usage for all of my tests now lingers at around 47 MB:

 PASS  src/pages/Web/RequestSupplies/__tests__/ConfirmDropoffPreferences.test.js (45 MB heap size)
 PASS  src/pages/Web/Account/PersonalInfo/__tests__/PersonalInfo.test.js (46 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsChild.test.js (45 MB heap size)
 PASS  src/components/Wizard/Wizard.test.js (45 MB heap size)
...
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsContainer.test.js (49 MB heap size)
 PASS  src/pages/Web/Account/OrganizationInfo/__tests__/OrgNameSection.test.js (47 MB heap size)
 PASS  src/components/Modal/Modal.test.js (47 MB heap size)
 PASS  src/pages/Web/Account/_helpers/__tests__/SettingsAvailability.test.js (47 MB heap size)
armingjazi commented 3 years ago

Experiencing the exact same issue with jest + ts-jest on a NestJS project.

Even the simplest test is reporting nearly 600 MB of heap size:

 PASS  tier5-test/one.get.spec.ts (7.304 s, 596 MB heap size)
describe('one', () => {

  it('one', async () => {
    expect(2).toEqual(2);
  });
});
SimonGodefroid commented 3 years ago

Here are my findings.

I did a study on 4 of our apps and made a benchmark with the following commands.

Case   Command
A      NODE_ENV=ci node node_modules/.bin/jest --coverage --ci --runInBand --logHeapUsage
B      NODE_ENV=ci node --expose-gc node_modules/.bin/jest --coverage --ci --runInBand --logHeapUsage
C      NODE_ENV=ci node --expose-gc ./node_modules/.bin/jest --logHeapUsage
D      NODE_ENV=ci node node_modules/.bin/jest --coverage --ci --logHeapUsage

NB: "order" is the rank of the test within the running of the command e.g. 1 means it has been ran first, it's just the order in which the console outputs the test result at the end.

EDIT: All of this is running on my local machine, trying this on the pipeline was even more instructive since only the case where there's GC and no RIB results in 100% PASS. Also GC makes it twice as fast, imagine if you had to pay for memory usage on servers.

EDIT 2: case C has no --coverage --ci option but it does not impact performance. I added a chart to measure average pipeline speed with above scenarios. The graph is the average time of tests job on pipeline, 3 execution for each case, regardless of test outcome (All Pass vs some failing tests, because at the moment of collecting data some tests were unstable).

Cross apps Max Heap image

Average Heap image

App1 Max Heap and Average Heap image

Heap Chronology image

File Based Heap (x axis is file path) image

App2 Max Heap and Average Heap image

Heap Chronology image

File Based Heap (x axis is file path) image

App3 Max Heap and Average Heap image

Heap Chronology image

File Based Heap (x axis is file path) image

App4 Max Heap and Average Heap image

Heap Chronology image

File Based Heap (x axis is file path) image

Rank of appearance of the highest heap image

App 1 Average time of execution for the test job out of 3 pipelines for App1. image

Hope this helps or re-kindles the debate.

pastelsky commented 3 years ago

Did a heap snapshot for my test suite and noticed that the majority of the memory was being used by strings that store entire source files, often the same string multiple times!

Screenshot 2021-06-23 at 1 33 30 AM

The size of these strings continues to grow as jest scans and compiles more source files. Does it appear that jest or babel holds onto the references for the source files (for caching maybe) and never clears them?

MichalBurgunder commented 3 years ago

Did a heap snapshot for my test suite and noticed that the majority of the memory was being used by strings that store entire source files, often the same string multiple times!

The size of these strings continues to grow as jest scans and compiles more source files. Does it appear that jest or babel holds onto the references for the source files (for caching maybe) and never clears them?

I have found the same thing with Jest, but I have the feeling this is normal. Still, it does beg the question of whether these strings are cleared once jest has finished using them.

However, what I discovered not too long ago is that importing data from .json files caused my "memory leak", or rather, a "memory overflow". Once I stuck the JSON data into .js files (the data of which now sits inside some variable) and exported that variable, heap usage was significantly reduced during testing, enough that our pipeline didn't crash anymore.

viceice commented 3 years ago

We at renovate solved the major issues by correctly disabling nock after each test and running jest with node --expose-gc node_modules/jest/bin/jest.js --logHeapUsage

rgoldfinger-quizlet commented 3 years ago

We also noticed that there were a ton of strings containing modules that were eating up memory. It all went away when we set memory limits correctly: https://github.com/facebook/jest/issues/7874#issuecomment-744561779

Setting memory limits correctly is effectively the same as forcing GC with --logHeapUsage, as node will run GC automatically to keep from going over the allowed memory.

pastelsky commented 3 years ago

When I run --logHeapUsage --runInBand on my test suite with about 1500+ tests, the memory keeps climbing from ~100 MB to ~600 MB. The growth seems to be in these strings, and arrays of such strings (source).

It is apparent that the number of compiled files will grow as jest moves further in the test suite — but if this doesn't get GC'ed, I don't have a way to separate real memory leaks from increases due to more modules stored in memory.

On running a reduced version of my test suite (only about ~5 tests), I was able to narrow down this behaviour:

Let's say we had — TEST A, TEST B, TEST C, TEST D

I was observing TEST B. Without doing anything —

TEST A (135 MB heap size)
TEST B (150 MB heap size)
TEST C (155 MB heap size)
TEST D (157 MB heap size)

If I reduce the number of imports TEST B is making, and replace them with stubs —

TEST A (135 MB heap size)
TEST B (130 MB heap size). <-- Memory falls!
TEST C (140 MB heap size)
TEST D (147 MB heap size)

This consistently reduced memory across runs. Also, the imports themselves did not seem to have any obvious leaks (the fall in memory corresponded with the number of imports I commented out).
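
(For anyone repeating this experiment, the stubbing in question is the ordinary jest.mock factory form; the module path and stub shape below are hypothetical:)

// replace a heavy import with a stub for the duration of this test file
jest.mock('../components/HeavyComponent', () => ({
  __esModule: true,
  default: () => null, // render nothing instead of the real component tree
}));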

Other observations:

BlackGlory commented 3 years ago

v27 seems to leak more memory. My project's tests never encountered OOM on v26, but they were killed on v27.

Npervic commented 3 years ago

We at renovate solved the major issues by correctly disable nock after each test and run jest with node --expose-gc node_modules/jest/bin/jest.js --logHeapUsage

I went through the pull requests in your repo, but can you give some more information on this? I have been having issues with large heap usage and flaky tests (in CI only), and I use nock. Any help is appreciated.

This is currently the way I manage nock in tests:

  // nock is used directly from the import; no local scope is declared
  import nock from 'nock';

  beforeEach(() => {
    nock.cleanAll();
    // More unrelated cleanup stuff
  });

  afterEach(() => {
    nock.restore();
    nock.activate();
    // More unrelated cleanup stuff
  });

  afterAll(() => {
    nock.cleanAll();
    // More unrelated cleanup stuff
  });
viceice commented 3 years ago

You need nock.restore() in afterAll so that nock removes itself from node:http; otherwise it will activate again and again:

eg: nock > nock > nock > nock > node:http

https://github.com/renovatebot/renovate/blob/394f0bb7416ff6031bf7eb14498a85f00a6305df/test/http-mock.ts#L106-L121

 // nock is used directly from the import; no local scope is declared
  import nock from 'nock';

  beforeEach(() => {
    nock.cleanAll();
    // More unrelated cleanup stuff
  });

  afterEach(() => {
    // More unrelated cleanup stuff
  });

  afterAll(() => {
    nock.cleanAll();
    nock.restore();
    // More unrelated cleanup stuff
  });
Penagwin commented 2 years ago

I'm being affected by this as well. Our application is a react app with typescript.

I have tried NODE_OPTIONS=--max_old_space_size=2048, jest.useFakeTimers();, removing all import *, --clear-cache, with and without --coverage. To no avail.

Confusingly, when I use the Chrome devtools to look at the memory heap, it seems to think the heap is only 25 MB? The process is using over 5000 MB, so I'm not sure what that is about. There are a crapload of strings with code though.

I'm wondering if it's some interaction with the version of jest, node, typescript, react, babel, or eslint?

I'm using: jest 27.2.4 node v16.4.0 typescript 4.4.3 eslint 7.32.0 react 16.4.2

EDIT: Currently using this command node --inspect-brk --expose-gc ./node_modules/.bin/jest --watch --watchAll=false --logHeapUsage --run_in_band --no-cache

EDIT 2: Downgrading to jest@24.9.0 and ts-jest@26.4.2 seems to fix the issue. The memory never went over 1gb and it completed in 60 seconds instead of 300+ seconds.

I found NODE_OPTIONS=--max_old_space_size=8000 did work with jest@27; however, it obviously doesn't fix anything: the memory usage is still 5x higher than it should be, and that's causing slowdowns with swap.

EDIT 3: I have no idea what changed, but now tests take 4 minutes again and are filled with errors like

FAIL  tests/handheld/Index.tsx
  ● Test suite failed to run

    TypeError: buffer.reduce is not a function

      at _default (node_modules/jest-cli/node_modules/@jest/console/build/getConsoleOutput.js:41:29)

and the summary makes no sense

Snapshot Summary
 › 28 snapshots failed from 14 test suites. Inspect your code changes or re-run jest with `-u` to update them.

Test Suites: 152 failed, 109 passed, 261 of 179 total
Tests:       49 failed, 417 passed, 466 total
Snapshots:   28 failed, 152 passed, 180 total
Time:        241.84s
origamih commented 2 years ago

This works for me: https://github.com/kulshekhar/ts-jest/issues/1967#issuecomment-834090822

Add this to jest.config.js

globals: {
    'ts-jest': {
      isolatedModules: true
    }
  }
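
For context, a minimal jest.config.js where this setting lives (the preset line is an assumption for a typical ts-jest setup):

module.exports = {
  preset: 'ts-jest',
  globals: {
    'ts-jest': {
      // compile each file in isolation and skip full type-checking,
      // trading type safety in tests for lower memory usage
      isolatedModules: true,
    },
  },
};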
SimonGodefroid commented 2 years ago

In the end, what really improved our pipeline was fiddling with the maxWorkers option to figure out how many would make the fastest yet most stable pipeline. The memory leak thingy did not seem to stabilize and reduce pipeline times as much as manually finding the right figure for maxWorkers actually did.


Penagwin commented 2 years ago

This works for me: kulshekhar/ts-jest#1967 (comment)

Add this to jest.config.js

globals: {
    'ts-jest': {
      isolatedModules: true
    }
  }

This totally fixed it for me thank you so much!

pbrain19 commented 2 years ago

Why doesn't jest support TS out of the box... Do they not use it @ facebook?

milesj commented 2 years ago

@pbrain19 Jest does support TS out of the box, but via Babel and not ts-jest.

elorusso commented 2 years ago

Did a heap snapshot for my test suite and noticed that the majority of the memory was being used by strings that store entire source files, often the same string multiple times!

Screenshot 2021-06-23 at 1 33 30 AM

The size of these strings continue to grow as jest scans and compiles more source files. Does it appear that jest or babel holds onto the references for the source files (for caching maybe) and never clears them?

+1, we are also running into this issue. Jest is using over 5GB per worker for us. Our heap snapshots show the same thing as above. Any updates would be greatly appreciated.

pbrain19 commented 2 years ago

How can we force jest to delete these?

fazouane-marouane commented 2 years ago

I've observed today two unexplained behaviours:

  1. There's too much memory usage even when disabling code transforms and cleaning jest's cache
  2. When using --forceExit or --detectOpenHandles (or a combination of both), the memory usage drops from 1.4GB to roughly 300MB

I don't know if this is specific to our codebase or if the memory leak issue is tied to tests that somehow don't really finish/cleanup properly (a "bug" that detectOpenHandles or forceExit somehow fix)

pbrain19 commented 2 years ago

Bingo. This seems to be the fix, along with making sure to mock any external dependency that may not be your DB. In my case I was using a stats lib and Bugsnag. When using createMockFromModule it seems to actually run the file regardless, so I ended up just mocking both, along with running: NODE_OPTIONS=--max-old-space-size=6144 NODE_ENV=test && node --expose-gc ./node_modules/.bin/jest -i --detectOpenHandles --logHeapUsage --no-cache

@fazouane-marouane thanks so much... this one comment has legit saved the day.

For the record I use ts-jest. Memory Leak is gone!

Dakuan commented 2 years ago

Unfortunately none of these solutions worked for me. The test suite bloats out to several GB in less than 30 seconds (locally), all TS-related.

image

the more it has the more it uses!

darekg11 commented 2 years ago

This is going to sound bad, but I have been struggling with the same situation as @pastelsky: memory heap dumps showing huge allocation differences in array and string between each snapshot, and memory not being released after the test run is completed.

We have been running Jest from inside Node with jest.runCLI. I tried everything suggested in this topic and in other issues on GitHub.

The only thing that reduced memory (by around 200 MB) was to switch off the default babel-jest transformer, since we did not need it at all:

testEnvironment: "node",
transform      : JSON.stringify({})
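
(For context, a rough sketch of how such options can be passed through jest.runCLI; the exact shape of the argv object here is an assumption:)

// run Jest programmatically with the default transformer disabled
const { runCLI } = require('jest');

const argv = {
  runInBand: true,
  config: JSON.stringify({
    testEnvironment: 'node',
    transform: {}, // an empty transform map disables babel-jest
  }),
};

runCLI(argv, [process.cwd()]).then(({ results }) => {
  process.exit(results.success ? 0 : 1);
});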

This has indeed reduced memory usage but still not to the level where we could accept it.

After two days of memory profiling and trying different things, I have just switched to the mocha runner. Since our tests were primarily E2E tests (no TypeScript, no Babel, Node 12) making requests to an API, the switch was fairly simple.

After deploying this, tests have been running with a stable memory usage of 70 MB and never going above it, while with Jest it was peaking at 700 MB. I am not here to advertise mocha (it's my first time using it to run tests), but it literally just worked, so if you have fairly simple test suites and want to run tests programmatically, you could try changing your runner.

radcapitalist commented 2 years ago

I'm seeing this same leak, with heap snapshots that look remarkably similar to those posted by @pastelsky above. I'm at Jest 27.0.6. Tried moving ahead to 27.4.3 and back to 26.6.3, to no avail. Tried a few things mentioned in this issue, without effect.

jaredjj3 commented 2 years ago

TLDR - The memory leak is not present in version 22.4.4, but starts appearing in the subsequent version 23.0.0-alpha.1. The following steps are for (1) the community to assert/refute this and then (2) find the offending commit(s) causing the memory leak and increased memory usage.

In https://github.com/facebook/jest/issues/7874#issuecomment-639874717, I mentioned that I created a repo jest-memory-leak-demo to make this issue easier to reproduce in local and Docker environments.

I took it a step further and decided to find the version that the memory leak started to show. I did this by listing all the versions returned from yarn info jest. Next, I manually performed a binary search to find the versions where version i does not produce a memory leak and version i + 1 does produce a memory leak.

Here is my env info (I purposely excluded the npmPackages since the version was the variable in my experiment):

npx envinfo --preset jest

  System:
    OS: macOS 12.0.1
    CPU: (10) arm64 Apple M1 Max
  Binaries:
    Node: 17.1.0 - ~/.nvm/versions/node/v17.1.0/bin/node
    Yarn: 1.22.17 - ~/.nvm/versions/node/v17.1.0/bin/yarn
    npm: 8.1.2 - ~/.nvm/versions/node/v17.1.0/bin/npm

The key finding is in jest-memory-leak-demo/versions.txt#L165-L169. You can see several iterations of the binary search throughout the file. I did one commit per iteration, so you also can examine the commit history starting at edc0567ad4710ba1be2bf2f745a7d5d87242afc4.

The following steps are for the community to validate these initial findings and ultimately use the same approach to find the offending commit causing the memory leaks. Here's a StackOverflow post that will help: "How to get the nth commit since the first commit?".

It would also be great if someone can write a script to do this in jest-memory-leak-demo. ~The most challenging part of doing this is programming memory leak detection~ edit: The script doesn't have to decide whether or not a test run yields a memory leak or not - it can take a commit range and produce test run stats at each commit. A list of versions can be found by running yarn info jest. I don't have time to do this at the moment.


NOTE: I was not very scientific about defining what versions produce a memory leak and what versions don't. First of all, I should have used the yarn docker test command to reproduce the results on other machines, but I just wanted to get an answer as fast as possible. Second, for each version, I should have run the test command >30 times and then aggregated the results. If you decide to reproduce this in the way I did it, YMMV.

NOTE: For earlier versions, I had to add the following to my package.json:

"jest": {
 "testURL": "http://localhost/"
}

If I didn't, I got the following error:

SecurityError: localStorage is not available for opaque origins

At first, I was diligent in removing this if it was not needed, but then I got lazy after iteration eight or so and just kept it. I don't know if this affected the results.

StringEpsilon commented 2 years ago

I have tested the reproduction with jest 24, jest 27 and jest 28 beta:

Version         --runInBand   min heap size   max heap size
24.9.0          true          53 MB           259 MB
24.9.0          false         47 MB           61 MB
27.5.1          true          36 MB           71 MB
27.5.1          false         26 MB           30 MB
28.0.0-alpha.6  true          38 MB           73 MB
28.0.0-alpha.6  false         27 MB           36 MB

(All tested on node.js v14.15.3)

I think in general the leak has become less of an issue, but the discrepancy between --runInBand=true and --runInBand=false suggests that there is still an issue.

See also:

#12142 (leak when using --runInBand)

#10467 (duplicate of this issue)

#7311 (leak when using --runInBand)

#6399 (leak when using --runInBand)

As for the cause, from other issues relating to leaks, I suspect that there are multiple issues playing a role. For example:

#6738 [Memory Leak] on module loading

#6814 Jest leaks memory from required modules with closures over imports

#8984 jests async wrapper leaks memory

#9697 Memory leak related to require (might be a duplicate of / has common cause with #6738?)

#10550 Module caching memory leak

#11956 [Bug]: Memory consumption issues on Node JS 16.11.0+

And #8832 could either be another --runInBand issue or a require / cache leak. Edit: It seems to be both. It leaks without --runInBand, but activating the option makes the problem much worse.

There are also leak issues with coverage, JSDOM and enzyme #9980 has some discussion about that. And #5837 is directly about the --coverage option.


Addendum: it would probably be helpful to have one meta-issue tracking the various memory leak issues and create one issue per scenario. As it currently stands, all the issues I mentioned above have some of the puzzle pieces, but nothing is tracked properly, the progress that was made isn't apparent to the end users and it's actually not easy to figure out where to add to the conversation on the general topic. And it probably further contributes to the creation of duplicate issues.

StringEpsilon commented 2 years ago

As for triaging memory leaks, there needs to be some minimum info:

I suggest closing old issues that do not provide the above mentioned info and directing the user to the to-be created meta-issue.

And then it would be nice to find out if the scenario is one of the require and cache leaks. Figuring that out is probably a little involved unless those leaks get fixed.

SimenB commented 2 years ago

Ah, thank you so much @StringEpsilon! Is there still a leak now (i.e. forever growing, never reset) in the reproduction on the OP using v27? If not, I think we should close this and encourage new issues with reproductions as you say.

Note that two changes in v27 very much impact this: we swapped the default test environment from jsdom to node, and the default test runner from jasmine to circus. So a direct comparison between versions might not be perfectly valid.

StringEpsilon commented 2 years ago

I can't really say, based on the reproduction. I do see an increase test over test on the heap, but it's completely linear and not the saw-tooth pattern I see on production repositories, so I think node just doesn't run GC.

But I did find that running a single test file with a lot of tests seems to still leak:

for (let i = 0; i < 100000; i++) {
  test(`tautology #${i}`, () => {
    expect(true).toBeTruthy()
  })
}

I had a heap size of 313 mb with that (w/ --runInBand).

Running the test with 1 million iterations yields a heap size of 2.6 GB. Beware that testing that takes a while (276 seconds).

Edit: Okay, this particular kind of leak seems to happen without --runInBand too.

SimenB commented 2 years ago

If it's a single test file, it always runs in band.

StringEpsilon commented 2 years ago

I used the above scenario to create a case where the heap increase is more noticeable:

npx jest --logHeapUsage --runInBand
 PASS  __test__/test_1.test.js (13.524 s, 221 MB heap size)
 PASS  __test__/test_2.test.js (188 MB heap size)
 PASS  __test__/test_9.test.js (235 MB heap size)
 PASS  __test__/test_8.test.js (265 MB heap size)
 PASS  __test__/test_7.test.js (306 MB heap size)
 PASS  __test__/test_6.test.js (346 MB heap size)
 PASS  __test__/test_10.test.js (13.586 s, 548 MB heap size)
 PASS  __test__/test_4.test.js (578 MB heap size)
 PASS  __test__/test_3.test.js (620 MB heap size)

and

npx jest --logHeapUsage 
 PASS  __test__/test_7.test.js (54 MB heap size)
 PASS  __test__/test_2.test.js (54 MB heap size)
 PASS  __test__/test_4.test.js (55 MB heap size)
 PASS  __test__/test_9.test.js (53 MB heap size)
 PASS  __test__/test_3.test.js (53 MB heap size)
 PASS  __test__/test_8.test.js (53 MB heap size)
 PASS  __test__/test_6.test.js (54 MB heap size)
 PASS  __test__/test_10.test.js (7.614 s, 196 MB heap size)
 PASS  __test__/test_1.test.js (7.619 s, 197 MB heap size)

(28.0.0-alpha.6)

Each test is just

for (let i = 0; i < 50000; i++) {
    describe("test", () => {
        it(`tautology #${i}`, () => {
            expect(true).toBeTruthy()
        })
    })
}

I also noticed that adding the extra describe() makes the heap grow faster:

jaredjj3 commented 2 years ago

On jest v28.0.0-alpha.6 and node v14.15.3, I observe the same behavior in jest-memory-leak-demo regardless of --runInBand:

radcapitalist commented 2 years ago

It's great that some folks think the memory leak issue is somehow not a big deal anymore, but we're at jest 27 and we have to run our builds at Node 14 even though we will ship with Node 16 so that our test suite can finish without running out of memory. Even at Node 14, as our test suite has grown, we struggle to get our test suite to run to completion.

StringEpsilon commented 2 years ago

@radcapitalist

I am sorry, my intention was only to report the progress that was made and to figure out the overall situation with the ~30 tickets. I was not trying to suggest that the various leaks are not an issue anymore. In fact, if anything, my testing suggests that --runInBand still has a large confounding effect on whatever leaks may occur.

As for the node 16 issue, there is a workaround via patch-package, see this comment https://github.com/facebook/jest/issues/11956#issuecomment-1011310131 and following.

radcapitalist commented 2 years ago

@StringEpsilon No need to apologize whatsoever! @SimenB suggested closing this after your comment and that just worried me a little :-).
We are working on splitting up our test files more to see if it helps us. And thanks very much for the patch-package tip, I will look into it!

Eric

StringEpsilon commented 2 years ago

@SimenB I have drafted a meta issue: https://gist.github.com/StringEpsilon/6c8e687b47e0096acea9345f8035455f

SimenB commented 2 years ago

It's great that some folks think the memory leak issue is somehow not a big deal anymore, but we're at jest 27 and we have to run our builds at Node 14 even though we will ship with Node 16 so that our test suite can finish without running out of memory. Even at Node 14, as our test suite has grown, we struggle to get our test suite to run to completion.

And that's exactly my point in potentially closing this - that has next to nothing to do with the reproduction provided in the OP. Your issue is #11956 (which seemingly is a bug in Node and further upstream V8).

However, it seems the OP still shows a leak somewhere, so you can rest easy knowing this issue won't be closed. 🙂


If it's an issue for you at work, any time you can spend on solving this (or at least getting more info about what's going on) would be a great help. It's not an issue for me at work, so this is not something I'm spending any time investigating - movement on this issue is likely up to the community. For example gathering traces showing what is kept in memory that could (should) be GC-ed. The new meta issue @StringEpsilon has spent time on is an example of great help - they're probably all a symptom of the same issue (or smaller set of issues), so getting the different cases listed out might help with investigation, and solving one or more might "inadvertently" solve other issues as well.

SimenB commented 2 years ago

Actually, I think I will close this. 😅 If I run the repo in the OP (using node 14 due to #11956) with --detect-leaks (which forces GC to run), I get this:

image

(this is using Jest 27 - using 28 alpha I get 22-23 instead of 24-25, but still miniscule)

I took heap snapshots in the middle of running (without --detect-leaks) and after tests completed running, and it's all source code cache that's the diff between the snapshots.

image

(compiled code is the... compiled code, string is the source strings within the compiled code, array is an internal array of the source code yet again, system - again source code and concatenated string is.... strings of source code)

And forcing GC will collect them, indicating Node itself will also collect it when it feels like it.

Some increase of memory usage is expected as we collect test results etc. as we go (vaguely related: #8242), but that's just increased memory usage, not leaking.

Note that this is the same result https://github.com/facebook/jest/issues/7874#issuecomment-639874717 shows. But unless somebody has info that indicates otherwise (e.g. manual GC is more aggressive than the automatic one), I think the cached strings are red herrings since they can be collected, and Node (or v8, who knows) is just choosing not to.

Also note that there is no API to clear this cache (either on Script instances or just the code cache in general) so for Jest to be able to do anything (except dropping all references to the Scripts and the contexts they run in (which we do) so GC can pick them up) Node needs to add some APIs.


I'll go through the issues @StringEpsilon collected, but from what I can tell at a quick glance they all seem to be a duplicate of (to some degree) #6814. Additionally #11956 is a well known upstream issue.

Please feel free to open up a new issue (if it's not one of the two I linked right above) with a reproduction if you have one. And while this issue has a lot of decent discussion, examples and workarounds, I think it's better to close it since the issue talked about in the OP is no longer an issue.

SimenB commented 2 years ago

One thing I came over when going through the list of issues was this comment: https://github.com/facebook/jest/issues/7311#issuecomment-578729020, i.e. manually running GC in Jest.

So I tried out with this quick and dirty diff locally:

diff --git i/packages/jest-leak-detector/src/index.ts w/packages/jest-leak-detector/src/index.ts
index 0ec0280104..6500ad067f 100644
--- i/packages/jest-leak-detector/src/index.ts
+++ w/packages/jest-leak-detector/src/index.ts
@@ -50,7 +50,7 @@ export default class LeakDetector {
   }

   async isLeaking(): Promise<boolean> {
-    this._runGarbageCollector();
+    runGarbageCollector();

     // wait some ticks to allow GC to run properly, see https://github.com/nodejs/node/issues/34636#issuecomment-669366235
     for (let i = 0; i < 10; i++) {
@@ -59,18 +59,18 @@ export default class LeakDetector {

     return this._isReferenceBeingHeld;
   }
+}

-  private _runGarbageCollector() {
-    // @ts-expect-error
-    const isGarbageCollectorHidden = globalThis.gc == null;
+export function runGarbageCollector(): void {
+  // @ts-expect-error
+  const isGarbageCollectorHidden = globalThis.gc == null;

-    // GC is usually hidden, so we have to expose it before running.
-    setFlagsFromString('--expose-gc');
-    runInNewContext('gc')();
+  // GC is usually hidden, so we have to expose it before running.
+  setFlagsFromString('--expose-gc');
+  runInNewContext('gc')();

-    // The GC was not initially exposed, so let's hide it again.
-    if (isGarbageCollectorHidden) {
-      setFlagsFromString('--no-expose-gc');
-    }
+  // The GC was not initially exposed, so let's hide it again.
+  if (isGarbageCollectorHidden) {
+    setFlagsFromString('--no-expose-gc');
   }
 }
diff --git i/packages/jest-runner/src/runTest.ts w/packages/jest-runner/src/runTest.ts
index dfa50645bf..5e45f06b1b 100644
--- i/packages/jest-runner/src/runTest.ts
+++ w/packages/jest-runner/src/runTest.ts
@@ -22,7 +22,7 @@ import type {TestFileEvent, TestResult} from '@jest/test-result';
 import {createScriptTransformer} from '@jest/transform';
 import type {Config} from '@jest/types';
 import * as docblock from 'jest-docblock';
-import LeakDetector from 'jest-leak-detector';
+import LeakDetector, {runGarbageCollector} from 'jest-leak-detector';
 import {formatExecError} from 'jest-message-util';
 import Resolver, {resolveTestEnvironment} from 'jest-resolve';
 import type RuntimeClass from 'jest-runtime';
@@ -382,6 +382,11 @@ export default async function runTest(
     // Resolve leak detector, outside the "runTestInternal" closure.
     result.leaks = await leakDetector.isLeaking();
   } else {
+    if (process.env.DO_IT) {
+      // Run GC even if leak detector is disabled
+      runGarbageCollector();
+    }
+
     result.leaks = false;
   }

So if running after every test file, this gives about a 10% perf degradation for jest pretty-format in this repo.

$ hyperfine 'node packages/jest/bin/jest.js pretty-format' 'node packages/jest/bin/jest.js pretty-format -i' 'DO_IT=yes node packages/jest/bin/jest.js pretty-format' 'DO_IT=yes node packages/jest/bin/jest.js pretty-format -i'
Benchmark 1: node packages/jest/bin/jest.js pretty-format
  Time (mean ± σ):      2.391 s ±  0.088 s    [User: 2.418 s, System: 0.392 s]
  Range (min … max):    2.273 s …  2.574 s    10 runs

Benchmark 2: node packages/jest/bin/jest.js pretty-format -i
  Time (mean ± σ):      2.315 s ±  0.060 s    [User: 2.381 s, System: 0.385 s]
  Range (min … max):    2.229 s …  2.416 s    10 runs

Benchmark 3: DO_IT=yes node packages/jest/bin/jest.js pretty-format
  Time (mean ± σ):      2.513 s ±  0.101 s    [User: 2.966 s, System: 0.397 s]
  Range (min … max):    2.413 s …  2.746 s    10 runs

Benchmark 4: DO_IT=yes node packages/jest/bin/jest.js pretty-format -i
  Time (mean ± σ):      2.581 s ±  0.179 s    [User: 2.981 s, System: 0.403 s]
  Range (min … max):    2.423 s …  3.032 s    10 runs

Summary
  'node packages/jest/bin/jest.js pretty-format -i' ran
    1.03 ± 0.05 times faster than 'node packages/jest/bin/jest.js pretty-format'
    1.09 ± 0.05 times faster than 'DO_IT=yes node packages/jest/bin/jest.js pretty-format'
    1.11 ± 0.08 times faster than 'DO_IT=yes node packages/jest/bin/jest.js pretty-format -i'

However, this also stabilizes memory usage in the same way --detect-leaks does.

So it might be worth playing with this (e.g. after every 5 test files instead of every single one?). Thoughts? One option is to support a CLI flag for this, but that sorta sucks as well.
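
A rough sketch of the "every Nth test file" variant, independent of any Jest internals (the counter, threshold and function names are made up; the GC trick mirrors jest-leak-detector above):

const { setFlagsFromString } = require('v8');
const { runInNewContext } = require('vm');

const GC_EVERY_N_TEST_FILES = 5;
let filesSinceLastGc = 0;

function runGarbageCollector() {
  const isGarbageCollectorHidden = globalThis.gc == null;
  // GC is usually hidden, so expose it before running.
  setFlagsFromString('--expose-gc');
  runInNewContext('gc')();
  // Hide it again if it was not exposed to begin with.
  if (isGarbageCollectorHidden) {
    setFlagsFromString('--no-expose-gc');
  }
}

// call this after each test file finishes instead of forcing GC every time
function maybeRunGarbageCollector() {
  filesSinceLastGc += 1;
  if (filesSinceLastGc >= GC_EVERY_N_TEST_FILES) {
    runGarbageCollector();
    filesSinceLastGc = 0;
  }
}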


I'll reopen (didn't take long!) since I'm closing most other issues and pointing back here 🙂 But it might be better to discuss this in an entirely new issue. 🤔

UnleashSpirit commented 2 years ago

Hi,

We are experiencing the same memory leak issue with our Angular 13 app. We tried --detect-leaks as suggested above and it seems to work, but only with Node 14 (14.19.1): npx jest-heap-graph "ng test --run-in-band --log-heap-usage --detect-leaks"

Here is the heap graph for 14.19.1:

--- n: 127 ---
     287.00 ┤                  ╭╮                  ╭╮╭╮           ╭╮
     285.00 ┤      ╭╮ ╭╮ ╭╮    ││╭╮ ╭╮  ╭╮      ╭╮ ││││      ╭╮  ╭╯│      ╭╮  ╭╮      ╭╮
     283.00 ┤      ││ │╰╮││    ││││ │╰╮╭╯│  ╭╮ ╭╯│ ││││  ╭╮  ││╭╮│ ╰╮╭╮   ││ ╭╯│ ╭╮ ╭╮││        ╭╮
     281.00 ┼╮ ╭─╮ ││ │ │││  ╭╮││││ │ ││ │  ││ │ │ │╰╯╰╮╭╯│  │││││  │││  ╭╯│╭╯ │ ││ ││││   ╭╮   ││
     279.00 ┤│ │ │ ││ │ │││  │╰╯│││╭╯ ╰╯ ╰╮ │╰╮│ │ │   ╰╯ ╰╮ │╰╯╰╯  ││╰╮ │ ││  ╰╮││ ││││ ╭╮││╭╮ ││
     277.00 ┤│ │ │ ││ │ │││╭╮│  ││││      │ │ ││ │╭╯       │ │      ╰╯ ╰─╯ ╰╯   ╰╯│╭╯││╰─╯││╰╯╰─╯╰────
     275.00 ┤│ │ │╭╯╰╮│ ││││││  ││││      ╰╮│ ││ ╰╯        │ │                    ╰╯ ╰╯   ╰╯
     273.00 ┤│╭╯ ││  ││ ╰╯││││  ││╰╯       ╰╯ ││           │ │
     271.00 ┤││  ││  ││   ││││  ││            ││           │╭╯
     269.00 ┤││  ╰╯  ╰╯   ││╰╯  ╰╯            ╰╯           ││
     267.00 ┤╰╯           ╰╯                               ╰╯

Here is the heap graph for 16.14.2:

--- n: 126 ---
    2212.00 ┤                                                                                   ╭─────
    2074.50 ┤                                                                          ╭────────╯
    1937.00 ┤                                                                ╭─────────╯
    1799.50 ┤                                                        ╭───────╯
    1662.00 ┤                                               ╭────────╯
    1524.50 ┤                                      ╭────────╯
    1387.00 ┤                             ╭────────╯
    1249.50 ┤                    ╭────────╯
    1112.00 ┤            ╭───────╯
     974.50 ┤   ╭────────╯
     837.00 ┼───╯
npx envinfo --preset jest

  System:
    OS: Windows 10 10.0.19044
    CPU: (12) x64 11th Gen Intel(R) Core(TM) i5-11500H @ 2.90GHz
  Binaries:
    Node: 16.14.2 - C:\Program Files\nodejs\node.EXE
    Yarn: 1.22.18 - ~\workspace\WEBTI\fe-webti\node_modules\.bin\yarn.CMD
    npm: 8.5.0 - C:\Program Files\nodejs\npm.CMD
  npmPackages:
    jest: ^27.3.1 => 27.5.1 

We also tried setting coverageProvider to babel as suggested here (https://github.com/facebook/jest/issues/11956#issuecomment-1112561068), with no change. We suspect our tests may not be perfectly well written, but there are still leaks.

EDIT

Downgrading to node 16.10.0 seems to work:

--- n: 127 ---
     413.00 ┤            ╭╮
     409.70 ┤            ││
     406.40 ┤            ││
     403.10 ┤            ││                                                                          ╭
     399.80 ┤            ││╭╮            ╭╮               ╭╮         ╭╮ ╭╮                           │
     396.50 ┤            ││││    ╭╮      ││       ╭──╮ ╭╮╭╯╰─╮ ╭──╮ ╭╯│ ││ ╭╮╭╮         ╭╮        ╭╮╭╯
     393.20 ┤╭╮╭╮        ││││    ││╭─╮ ╭╮││╭╮ ╭╮╭╮│  │╭╯││   │ │  ╰─╯ ╰─╯╰╮│││╰───╮╭──╮╭╯╰─╮╭─╮╭──╯╰╯
     389.90 ┤││││╭╮     ╭╯││╰╮ ╭─╯││ ╰─╯╰╯│││ │││╰╯  ││ ╰╯   │ │          ╰╯╰╯    ╰╯  ╰╯   ╰╯ ╰╯
     386.60 ┤│╰╯╰╯│  ╭╮╭╯ ╰╯ │╭╯  ╰╯      ╰╯╰╮│││    ╰╯      ╰─╯
     383.30 ┼╯    ╰╮ │││     ╰╯              ╰╯╰╯
     380.00 ┤      ╰─╯╰╯
Moumouls commented 2 years ago

I use Jest for integration testing; it can be complicated to find the source of the memory leak (maybe related to a graceful teardown problem in my test suite, not a Jest issue).

I use this workaround to avoid OOM using matrix on Github actions.

name: Backend

on:
  pull_request:
    branches:
      - master
    types: [opened, synchronize, reopened, unlabeled]

jobs:
  buildAndLint:
    timeout-minutes: 10
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [16.x]

    steps:
      - uses: actions/checkout@v2
      - run: yarn build && yarn lint
  Test:
    needs: [buildAndLint]
    timeout-minutes: 30
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [16.x]
        # Each app folder test will be run in parallel using the matrix
        folder: [adapters, auth, cloud, customSchema, miscTest, schemas, utils]

    steps:
      - uses: actions/checkout@v2
      # Improve this to use github artifact
      - run: yarn build:fast
      # Jest will only run test from the folder
      - run: yarn test ${{ matrix.folder }}
        env:
          NODE_ENV: TEST

This script could be improved to upload each LCOV result to a final job and then merge all coverage results into one using nyc merge; see: https://stackoverflow.com/questions/62560224/jest-how-to-merge-coverage-reports-from-different-jest-test-runs
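
A rough sketch of that final merge step, assuming each matrix job uploads its coverage/coverage-final.json into a local coverage-parts/ directory (paths and flags here are assumptions):

# merge the per-folder coverage files into one, then render an lcov report
npx nyc merge coverage-parts .nyc_output/coverage.json
npx nyc report --reporter=lcov --report-dir coverage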

j0k3r commented 2 years ago

As @UnleashSpirit mentioned, downgrading to node 16.10 fixed the memory issue with Jest.

michalstrzelecki commented 2 years ago

@j0k3r it works:)

mbyrne00 commented 2 years ago

Wow - this one got us too ... after scouring the internet and finding this ... reverting to 16.10 fixed our build too (in gitlab, docker image extending node:16 changed to node:16.10). Here's hoping there's a longer-term solution, but many thanks for the suggestion!

pleunv commented 2 years ago

This looks like it'll drag on forever as it doesn't appear to be getting picked up on the node/v8 side. Is there anything that can be done in order to escalate this?

mbyrne00 commented 2 years ago

And we're already starting to run into problems with libs requiring a specific min LTS version of node now (as per my linked issue above). This is getting painful to work around without dodgy --ignore-engines for CI.

The V8 issue seems to be closed as WontFix, so I've got no idea what the longer-term solution is: https://bugs.chromium.org/p/v8/issues/detail?id=12198

Link to node issue https://github.com/nodejs/node/issues/40014

mbyrne00 commented 2 years ago

It seems there is a suggested fix/workaround for Jest as per this comment: https://bugs.chromium.org/p/v8/issues/detail?id=12198#c20

Hopefully this makes more sense to someone on the Jest team ... is this something that could be pursued? It seems the first suggestion is for Node itself, but for Jest they are asking if it's possible to remove forced GCs. I gotta admit I don't know the detail.

Did Victor's suggested workaround work for Node? Updating from above, it would be to change https://source.chromium.org/chromium/chromium/src/+/main:v8/src/heap/heap.h;l=1460;drc=de8943e4326892f4e584a938b31dab0f14c39980;bpv=1;bpt=1 to remove the is_current_gc_forced_ check.

In general, it's my understanding that --expose-gc is primarily a testing feature and shouldn't be depended upon in production. Is it not possible to remove forced GCs from how Jest runs?