elastic / elastic-transport-js

Transport classes and utilities shared among Node.js Elastic client libraries
Apache License 2.0

MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter] #63

Closed ahmedsamir-dev closed 4 months ago

ahmedsamir-dev commented 1 year ago

Using the connection's emitter for undici's requests causes an event emitter memory leak: for each request, an abort handler is attached to the signal and never removed, and because the default max listeners per event is 10, the process emits a warning.

example:

MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
    at _addListener (node:events:587:17)
    at EventEmitter.addListener (node:events:605:10)
    at addSignal (/app/server-transpiled/node_modules/undici/lib/api/abort-signal.js:35:19)

There are already unresolved issues related to the same problem:

https://github.com/elastic/elasticsearch-js/issues/1741
https://github.com/elastic/elasticsearch-js/issues/1733
https://github.com/elastic/elasticsearch-js/issues/1716

Here is a suggested fix for it, but it was closed unmerged: #55

ahmedsamir-dev commented 1 year ago

Hi Tomas, we hope this gets your team's attention 🙏 @delvedor

delvedor commented 1 year ago

Heya, I'm no longer actively working on this project; @JoshMock is the new maintainer.

madispuk commented 1 year ago

Hi Josh, thanks for working on it!

arciisine commented 1 year ago

Just spent a moment digging into this, and it looks like it's tied to the EventEmitter in the Undici connection class. I verified that by introducing this fix (https://github.com/elastic/elastic-transport-js/pull/64/commits/9012ddcb213bd5238af93c40909fb9db24003b5d), the errors all disappear.

JoshMock commented 1 year ago

Thanks for taking a look @arciisine. :black_heart: I'm just onboarding as the new maintainer of this library so I'll take a look at this soon. Your solution is straightforward, so I'll play with it soon to make sure it has no undesired side effects.

ftejeria commented 1 year ago

Hey guys, I removed the warning by setting the signal field in TransportRequestOptions to a new AbortController().signal. For example: client.search({ yourQuery }, { signal: new AbortController().signal }). Is there any problem with this workaround for removing the warning?
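For clarity, a minimal sketch of that workaround (assuming an existing client instance; the index name and query are placeholders):

// Pass a fresh AbortSignal per request so abort listeners are not piled up
// on one shared emitter. `client` is assumed to be an @elastic/elasticsearch
// Client instance created elsewhere.
const result = await client.search(
  { index: 'my-index', query: { match_all: {} } },
  { signal: new AbortController().signal }, // one signal per request
);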

JoshMock commented 1 year ago

As a note while I'm working to prioritize a possible fix for this, former maintainer @delvedor mentioned on the Kibana repo that this is just a warning about a possible memory leak for behavior we know is, in fact, just a large number of concurrent requests in progress. So, it's mostly just noise, which you can configure your project to ignore based on the concurrent request limits you'd expect to see in your app.

Hello! If you are reusing the abort signal multiple times, the client is cleaning up listeners as soon as they are no longer needed, but if you are sending more than 10 concurrent requests with the same abort signal, then you will hit that "issue" again. I'm using quotes around issue because that's just a warning. Node.js has no way to know if having dozens of listeners is ok, so it's proactive and lets you know because sometimes it can mean a memory leak.

If it's expected to share the same abort signal among multiple concurrent requests, you can change the max listeners limit with:
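(a sketch using Node's events.setMaxListeners on the shared signal; the limit of 100 below is only an assumed upper bound, so adjust it to the concurrency you expect)

import { setMaxListeners } from 'node:events';

// One controller whose signal is shared across many concurrent requests.
const controller = new AbortController();

// Raise the listener limit on the shared signal so Node.js does not warn
// once more than 10 requests attach abort listeners to it.
setMaxListeners(100, controller.signal);

// Every request then reuses the same signal:
// await client.search({ ...yourQuery }, { signal: controller.signal });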

pocesar commented 1 year ago

@JoshMock we are seeing a steady increase in memory until the containers die out of memory, over the course of 8-10 hours, happens literally every day. I don't think it's just noise, our baseline goes from 64MB of memory to 8GB...

JoshMock commented 1 year ago

@JoshMock we are seeing a steady increase in memory until the containers die out of memory, over the course of 8-10 hours, happens literally every day. I don't think it's just noise, our baseline goes from 64MB of memory to 8GB...

That's good to know. And you've narrowed it down to memory being consumed by Elasticsearch client activity? Would love any traces, usage examples, versions used (Elasticsearch and client) or other relevant details.

JoshMock commented 1 year ago

I ran a simple test where I indexed 100 documents every 10 seconds for 8 hours, and then another where I ran 100 searches every 10 seconds for 8 hours, and was not able to reproduce a memory leak. The results show typical Node.js garbage collection behavior. (charts below)

I'd still like to see a usage example where a memory leak can be seen and traced back to the client.

As has been mentioned before, the Possible EventEmitter memory leak detected warning is just a warning, and increasing max event listeners to reflect how many concurrent Elasticsearch requests your application expects is an appropriate way to prevent that warning message.

[charts: memory usage during the indexing test and the search test]

JoshMock commented 1 year ago

Running another 8-hour test now that indexes 10k documents every second to push the client a bit harder. Also running another 8-hour test that is identical, but with the EventEmitter fix proposed in https://github.com/elastic/elastic-transport-js/pull/55. Will report back next week.

JoshMock commented 1 year ago

A more intense test (index 10k docs/second for 8 hours) yielded more interesting results! See charts below. Unfortunately, the proposed fix in #55 did not have any notable positive impact, other than not getting the MaxListenersExceededWarning. Will need to do a deeper analysis of memory usage.

[chart: index 10000 docs per second for 8 hours]

[chart: index 10000 docs per second for 8 hours, with transport fix]

JoshMock commented 1 year ago

Ran another long-running test with a memory allocation timeline profiler and identified a potential leak in Undici that may be fixed in a newer version than 5.5.1, which this library currently uses. Rerunning the test with Undici updated to latest (5.22.1) to see if the results differ.

JoshMock commented 1 year ago

Upgrading Undici appears to solve the issue. Working on rolling out a change for the next patch release of the Elasticsearch client.

[chart: memory usage with Undici upgraded]

phil-nelson-bt commented 1 year ago

I just tried this upgrade and there is no change to the warning message. This is with ES client 8.8.1, transport version 8.3.2, and undici version 5.22.1.

I've also been unable to find a place where setting the global defaultMaxListeners as indicated in the Node docs has any effect at all. The code doesn't fail, but it doesn't do anything either.

Did this update aim to remove the warning, or did you focus more on the memory consumption later in the thread?


(node:33355) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
    at _addListener (node:events:587:17)
    at EventEmitter.addListener (node:events:605:10)
    at addSignal (/Users/philnelson/Projects/productivitysuiteservice/node_modules/undici/lib/api/abort-signal.js:35:19)
    at new RequestHandler (/Users/philnelson/Projects/productivitysuiteservice/node_modules/undici/lib/api/api-request.js:68:5)
    at Pool.request (/Users/philnelson/Projects/productivitysuiteservice/node_modules/undici/lib/api/api-request.js:170:25)
    at /Users/xxxxxxxx/Projects/xxxxxxxxxx/node_modules/undici/lib/api/api-request.js:163:15
    at new Promise (<anonymous>)
    at Pool.request (/Users/philnelson/Projects/productivitysuiteservice/node_modules/undici/lib/api/api-request.js:162:12)
    at Connection.request (/Users/xxxxxxx/Projects/xxxxxxxxxxxnode_modules/@elastic/transport/lib/connection/UndiciConnection.js:143:41)
JoshMock commented 1 year ago

@phil-nelson-bt In my tests, upgrading Undici also had the effect of eliminating the max listeners warning. I'll reopen this and run some more tests soon to verify that. In the meantime, updating defaultMaxListeners will still work on a case-by-case basis.

phil-nelson-bt commented 1 year ago

I was able to use the EventEmitter.defaultMaxListeners = xx option after reading the Node docs again and doing some googling. The Node docs say events.defaultMaxListeners, which is kind of confusing. I'll still have to play with the value, but that should be easy.
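For anyone else going that route, a minimal sketch (the value 50 is only an assumed ceiling; pick one that reflects the concurrency you expect):

import { EventEmitter } from 'node:events';

// Raise the process-wide default. This only affects emitters created after
// this line, so set it before the Elasticsearch client is instantiated.
EventEmitter.defaultMaxListeners = 50;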

JoshMock commented 1 year ago

I did verify that the warning does still show up in some cases! I got so focused on actual memory leaks that I didn't fully validate that the logs were fixed as well. Fortunately those are just warnings, not an indication of an actual problem.

I'll evaluate the best way to get rid of those warnings soon.

Clonex commented 1 year ago

We are also still experiencing this issue after upgrading. It seems to happen a lot less than before the update, but it's still present.

(node:1662) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
[TS]     at _addListener (node:events:587:17)
[TS]     at EventEmitter.addListener (node:events:605:10)
[TS]     at addSignal (/code/node_modules/undici/lib/api/abort-signal.js:35:19)
[TS]     at new RequestHandler (/code/node_modules/undici/lib/api/api-request.js:68:5)
[TS]     at Pool.request (/code/node_modules/undici/lib/api/api-request.js:170:25)
[TS]     at /code/node_modules/undici/lib/api/api-request.js:163:15
[TS]     at new Promise (<anonymous>)
[TS]     at Pool.request (/code/node_modules/undici/lib/api/api-request.js:162:12)
[TS]     at Connection.request (/code/node_modules/@elastic/transport/src/connection/UndiciConnection.ts:160:35)
arciisine commented 1 year ago

@JoshMock, if the code is intending to use more than the default max listeners (I believe it's 10), you will need to set the max value explicitly on the event emitter (e.g. https://github.com/elastic/elastic-transport-js/commit/9012ddcb213bd5238af93c40909fb9db24003b5d). This uses 100 as a rough upper bound, but could and should be constrained to a realistic upper bound.

JoshMock commented 1 year ago

@arciisine Yep, that's the easiest solution. The catch is that a useful max value likely varies from project to project. Since the warning is mostly there to help uncover memory leaks, 100 is probably fine, but it may be useful to expose it as an option at instantiation so folks can adjust for their use case. I just haven't had time to work on that yet.

rosettaroberts-impact commented 1 year ago

I found that this works as a workaround.

import { Client, ConnectionOptions, UndiciConnection } from '@elastic/elasticsearch';
import { kEmitter } from '@elastic/transport/lib/symbols';

new Client({
  ...otherOptions,
  // Subclass the Undici connection and lift the listener cap on its
  // internal emitter so the warning is never triggered.
  Connection: class extends UndiciConnection {
    constructor(properties: ConnectionOptions) {
      super(properties);
      this[kEmitter].setMaxListeners(Number.POSITIVE_INFINITY);
    }
  },
});

dennisajacobs commented 11 months ago

Is it possible there's been a regression on this? I'm seeing unbounded memory growth until OOM, and also getting the memory leak warning. Versions: @elastic/elasticsearch@8.10.0 @elastic/transport@8.3.4 undici@5.25.3

MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
  at _addListener (node:events:587:17)
  at EventEmitter.addListener (node:events:605:10)
  at addAbortListener (/app/node_modules/.pnpm/undici@5.25.3/node_modules/undici/lib/core/util.js:448:10)
  at addSignal (/app/node_modules/.pnpm/undici@5.25.3/node_modules/undici/lib/api/abort-signal.js:33:3)
  at new RequestHandler (/app/node_modules/.pnpm/undici@5.25.3/node_modules/undici/lib/api/api-request.js:68:5)
  at Pool.request (/app/node_modules/.pnpm/undici@5.25.3/node_modules/undici/lib/api/api-request.js:169:25)
  at /app/node_modules/.pnpm/undici@5.25.3/node_modules/undici/lib/api/api-request.js:162:15
  at new Promise (<anonymous>)
  at Pool.request (/app/node_modules/.pnpm/undici@5.25.3/node_modules/undici/lib/api/api-request.js:161:12)
tomimarkus991 commented 11 months ago

Having the same issue. I also downgraded to the latest Elastic v7 versions and had only one instance of the error, with a slightly different message:

(node:11819) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 product-check listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit

v8 had:

(node:99147) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
(node:99147) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
(node:99147) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
(node:99147) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
(node:99147) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
(node:99147) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
(node:99147) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
(node:99147) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
(node:99147) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
JoshMock commented 11 months ago

Just here to acknowledge that I'm seeing this. I was sidetracked on other projects for a few months, but I'm hoping to get some time to focus on addressing this issue soon.

jineshshah36 commented 10 months ago

We can also confirm this issue on our end. Is there a known good version of v8?

MartianH commented 9 months ago

Having this issue right now; all tickets about it lead here. Using @elastic/elasticsearch 8.10.0.

cosieLq commented 8 months ago

I can confirm this issue. Using @elastic/enterprise-search@npm:8.6.1 (@elastic/transport@npm:8.4.0) with undici@npm:6.2.1.

AlanObject commented 8 months ago

I am having the same issue and I am using 8.11.0.

JoshMock commented 7 months ago

Thanks for your patience, everyone. I was out on leave and am just getting back to work.

@breno-alves put together a PR that solves for the warnings being logged. Thanks for that! I'll test and merge soon.

However, that PR does not address any underlying memory leak issues that might be related to abort listeners. I'd like to dig into that more before fully closing this. If anyone has any isolated code that can reproduce a leak, I'd love to see it.

JoshMock commented 7 months ago

It would also be helpful to know what versions of Node.js you're experiencing the issue on. According to a couple comments, this is less likely to happen on Node.js versions ^18.16.0 and ^19.8.0 (and, presumably, ^20.0.0 as well).

pChausseC commented 7 months ago

Same issue: Node v18.19.1, @elastic/elasticsearch ^8.12.2.

cipher450 commented 6 months ago

Same issue here. Node: v18.18.0, @elastic/elasticsearch: ^8.11.0. Memory keeps increasing very slightly over time, from 414 MB to 918.32 MB in 7 days.

JoshMock commented 5 months ago

Anyone here who is on Node 18+ and experiencing memory leaks: please try out @elastic/transport 8.5.0 (using an override). We upgraded Undici from 5.22.1 to 6.7.0, which includes a MaxListenersExceededWarning fix via https://github.com/nodejs/undici/pull/2823. If this is an underlying cause of memory leaks, your situation may be improved. At the very least, it looks like it will help resolve all the warning messages.
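For anyone who hasn't pinned a transport version before, a sketch of what an npm override might look like in package.json (yarn projects use a resolutions field instead; the client version shown is only illustrative):

{
  "dependencies": {
    "@elastic/elasticsearch": "^8.13.0"
  },
  "overrides": {
    "@elastic/transport": "^8.5.0"
  }
}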

yukha-dw commented 5 months ago

Upgrading to 8.5.0 or 8.5.1 seems to fix the issue. I ran a 10-minute test and don't see MaxListenersExceededWarning anymore; it usually shows up right away at 1000+ RPS.

Node: v18.16.1 @elastic/elasticsearch: 8.13.1 @elastic/transport: 8.5.1

daveyarwood commented 5 months ago

With @elastic/transport 8.5.1, I'm still seeing the warnings, and my application is having memory issues. See https://github.com/nodejs/undici/issues/3131

JoshMock commented 5 months ago

Thanks @daveyarwood. I'll keep investigating soon.

alexey-sh commented 5 months ago

Any chance to fix it within two years?

JoshMock commented 5 months ago

I've continued to address different aspects of this issue over the past year as I've had time. Unfortunately, there seem to be a few possible things going on rather than one single root cause.

Also, the clearer the code examples I get that consistently reproduce a memory leak traceable back to EventEmitter or abort listeners, the easier it will be to track down. I've not received any yet, and the test scenarios I've come up with myself have been solved by fixes that have already been merged.

sibelius commented 5 months ago

When will a new release come out?

mmcDevops commented 5 months ago

@JoshMock I've tried every possible solution, upgrading versions of elastic/transport and undici, but haven't found a fix.

mmcDevops commented 5 months ago

@JoshMock I have tried replicating this in a local environment to reproduce the error mentioned above. I wrote a simple script that uses the elastic/elasticsearch package (version 8.13.1) to create indices on Elasticsearch and fetch data from those same indices. When I increase the value above 10, I get the error. I also tried upgrading the undici package from 5.5.1 to 5.22.1, but that didn't fix it.

Script file:

const { Client } = require('@elastic/elasticsearch');

const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    username: '*****', // redacted
    password: '*****', // redacted
  },
  tls: {
    rejectUnauthorized: false,
  },
});
async function createIndex(indexName) {
   try {
       await client.indices.create({
           index: indexName
       });
       console.log(`Index '${indexName}' created successfully.`);
   } catch (error) {
       console.error(`Error creating index '${indexName}':`, error);
   }
}
async function getIndexInfo(indexName) {
   try {
       const response = await client.cat.indices({ index: indexName });
       console.log(`Index '${indexName}' info:`, response.body);
   } catch (error) {
       console.error(`Error retrieving info for index '${indexName}':`, error);
   }
}
async function main() {
   const indexTasks = [];
   for (let i = 0; i < 100; i++) {
       const indexName = `index_${i}`;
       indexTasks.push(createIndex(indexName));
   }
   await Promise.all(indexTasks);
   const retrievalTasks = [];
   for (let i = 0; i < 100; i++) {
       const indexName = `index_${i}`;
       retrievalTasks.push(getIndexInfo(indexName));
   }
   await Promise.all(retrievalTasks);
   console.log('All tasks completed.');
}
main().catch(console.error);

Output:

kunalkumargiri@Kunals-MacBook-Air mmc-server % npm run dev

> mmc-server@1.0.0 dev
> nodemon --trace-warnings --max-http-header-size=16384 --max-old-space-size=4096 server/index.js

[nodemon] 3.1.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node --trace-warnings --max-http-header-size=16384 --max-old-space-size=4096 server/index.js`
(node:74817) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
    at _addListener (node:events:587:17)
    at EventEmitter.addListener (node:events:605:10)
    at addAbortListener (/Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/core/util.js:449:10)
    at addSignal (/Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/api/abort-signal.js:33:3)
    at new RequestHandler (/Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/api/api-request.js:68:5)
    at Pool.request (/Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/api/api-request.js:169:25)
    at /Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/api/api-request.js:162:15
    at new Promise (<anonymous>)
    at Pool.request (/Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/api/api-request.js:161:12)
Index 'index_2' created successfully.
Index 'index_7' created successfully.
Index 'index_0' created successfully.
Index 'index_11' created successfully.
Index 'index_1' created successfully.
Index 'index_3' created successfully.
Index 'index_6' created successfully.
Index 'index_4' created successfully.
Index 'index_5' created successfully.
Index 'index_10' created successfully.
Index 'index_8' created successfully.
Index 'index_9' created successfully.
tomimarkus991 commented 5 months ago

After 8.5.1 even this workaround doesn't work

https://github.com/elastic/elastic-transport-js/issues/63#issuecomment-1699634799

found this for now https://stackoverflow.com/questions/76579772/node-maxlistenersexceededwarning-when-i-use-same-elasticsearch-instance-in-a-for

sibelius commented 5 months ago

We are getting a lot of request TimeoutErrors:

/usr/src/app/node_modules/@elastic/transport/lib/Transport.js in SniffingTransport.request at line 540:31

mmcDevops commented 5 months ago

@JoshMock Is there any way to suppress these warnings until a solution is available?

cipher450 commented 5 months ago

@JoshMock Is there any way to suppress these warnings until a solution is available?

https://github.com/elastic/elastic-transport-js/issues/63#issuecomment-1699634799

JoshMock commented 5 months ago

I wasn't able to reproduce a memory leak using the code snippet from @mmcDevops. When observing memory while running it, heap usage does spike pretty high as connections are opened, but everything is properly garbage-collected once Promise.all(indexTasks) is resolved:

[chart: memory usage while running the snippet]

However, I rediscovered a PR that fell by the wayside that attempted to solve this problem, addressing @ronag's primary concern. I've opened https://github.com/elastic/elastic-transport-js/pull/96 to reproduce that work, with a few minor tweaks. This does get rid of the MaxListenersExceededWarning, which is great! But it doesn't have any significant impact on memory usage:

[chart: memory usage with the change from #96 applied]

In any case, if anyone would like to pull down the repo and test on that PR's code with their use cases, that'd help a ton! I'll try to reproduce a leak a few more times, and look to merge the change closer to the release of Elasticsearch 8.14 in mid-May.

JoshMock commented 4 months ago

https://github.com/elastic/elastic-transport-js/pull/96 has been merged and deployed in v8.5.2. I'll keep this open for a couple weeks, or until someone can upgrade their transport and verify that it resolves the issue. Whichever comes first. :black_heart:

KerliK commented 4 months ago

#96 has been merged and deployed in v8.5.2. I'll keep this open for a couple weeks, or until someone can upgrade their transport and verify that it resolves the issue. Whichever comes first. 🖤

For me, the upgrade resolved the issue.

daveyarwood commented 4 months ago

I can also confirm that this fixed my issue. Thanks @JoshMock! 🙌