Can confirm. I replicated it from the snippet. Present on macOS, Node v11.8.0.
Our microservice infrastructure is based on the Micro toolkit (http://micro-toolkit.github.io/info/), which uses ZeroMQ for transport. We recently upgraded the Micro toolkit from zmq to this library, and while doing some tests I noticed that our microservices get OOMKilled on Kubernetes. We run Node.js apps in Docker Alpine containers.
I was able to replicate this locally: while the apps run, they allocate memory without stopping. In this case, with a client (dealer socket) and a broker (router socket) exchanging a simple heartbeat, I can see the memory increasing continuously (a minimal sketch of this setup follows below).
Does anyone have any idea which version of the library introduced this? I will try some older versions to check whether the behaviour persists.
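(For context, here is a minimal sketch of the kind of dealer/router heartbeat setup described above. The names, port, and echo logic are illustrative only, not the actual Micro toolkit code, and it assumes the pre-6.x callback API of zeromq.js:)

const zmq = require('zeromq');

// Broker side: a router socket that echoes each heartbeat back to its sender.
const router = zmq.socket('router');
router.bindSync('tcp://*:5555');
router.on('message', (identity, msg) => {
  router.send([identity, msg]);
});

// Client side: a dealer socket sending a heartbeat once per second.
const dealer = zmq.socket('dealer');
dealer.connect('tcp://127.0.0.1:5555');
dealer.on('message', (msg) => console.log('heartbeat echo:', msg.toString()));
setInterval(() => dealer.send('heartbeat'), 1000);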
I was able to put together an example that reproduces the memory leak:
https://github.com/pjanuario/zmq-mem-leak
Running docker-compose up starts a service with a router socket and another with a dealer socket; watching docker stats, we can see that memory doesn't stop growing.
Running the same example with zeromq@4.2.1, we don't see this behaviour. I wasn't yet able to identify which commit introduced the problem.
We had to downgrade our library dependency back to this version to avoid the memory issues:
https://github.com/micro-toolkit/event-bus-zeromq/pull/47 https://github.com/micro-toolkit/zmq-service-suite-broker-js/pull/75
I will later run some tests to identify where this problem was introduced.
Confirming: we have done our research and we are 100% sure that it's caused by zeromq (4.6.0).
Reverting to 4.2.1 didn't help...
Hey, this issue also struck me. It is still reproducible here on a current Debian Buster with zeromq.js 5.2.
I also tested the reproducer from https://github.com/pjanuario/zmq-mem-leak with 5.2.0 -> memory is still increasing.
Hmm, maybe never mind. It seems the issue with the reproducer is that it blocks the JavaScript event loop. Adding a short pause at some point makes the issue go away, e.g. modifying the reproducer to:
const zmq = require('zeromq');

const publisher = zmq.socket('pub');
publisher.bindSync("tcp://*:5556");
// publisher.bindSync("ipc://weather.ipc");

let start = Date.now();
let finish_cb;

async function main() {
  loop();
  // Never resolved here; keeps the process alive while loop() runs.
  return new Promise((resolve, reject) => {
    finish_cb = resolve;
  });
}

async function loop() {
  while (true) {
    // Get values that will fool the boss
    publisher.send('10001 12 12');
    let now = Date.now();
    if (now - start >= 2000) {
      // Every ~2 seconds: report memory usage, then yield briefly
      // to the event loop before continuing to publish.
      console.log('------------');
      const used = process.memoryUsage();
      for (let key in used) {
        console.log(`${key} ${Math.round(used[key] / 1024 / 1024 * 100) / 100} MB`);
      }
      start = now;
      console.log('------------');
      await new Promise(resolve => setTimeout(resolve, 1));
    }
  }
}

main().then(() => {
  console.log("finished.");
  process.exit();
}).catch((error) => {
  console.error(error);
  process.exit(1);
});
I'll verify by letting this run for a longer time.
Edit: even after running for two straight hours, memory does not increase here with zeromq.js 5.2.
Sorry for the hassle :)
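(For what it's worth, an equivalent way to avoid starving the event loop is to yield on every iteration instead of every two seconds. This is only a sketch, not code from the reproducer:)

const zmq = require('zeromq');

const publisher = zmq.socket('pub');
publisher.bindSync('tcp://*:5556');

// Yield to the event loop after every send so pending I/O, timers,
// and garbage collection get a chance to run between messages.
async function loop() {
  while (true) {
    publisher.send('10001 12 12');
    await new Promise(resolve => setImmediate(resolve));
  }
}

loop();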
Using the following code from the weather example, I was getting frequent JS out-of-memory exceptions from the GC.
I then checked heap memory usage, and it is growing at an alarming pace. The following code is sufficient to reproduce:
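(The snippet itself did not survive in this transcript. Judging by the modified reproducer above, it was presumably the zguide weather publisher running in a tight synchronous loop, roughly like this sketch:)

const zmq = require('zeromq');

const publisher = zmq.socket('pub');
publisher.bindSync('tcp://*:5556');

// Tight synchronous publish loop: it never yields to the event loop,
// which the comments above identify as the cause of the unbounded growth.
while (true) {
  publisher.send('10001 12 12');
}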
Is that normal?
My Node version is v10.11.0.