Closed moshie closed 2 years ago
I'm not sure how you are deploying this, but that error comes from Node.js core.
@mcollina
I've done a bit more digging and it looks like an object called dataBuf is being held onto.
This is set inside of the thread-stream lib.
I am not sure but it might make sense to clear this buffer when the worker exits.
Why do you think the problem is that SharedArrayBuffer
is not being freed up? It's needed for the workers.
I am by no means an expert at memory issues, but the blue bar didn't go down after V8 completed garbage collection, which makes me think that object is perhaps being held onto?
I could be completely wrong as well, since I am not sure this is using WebAssembly in any way :S
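For what it's worth, one way to check what is actually retaining that buffer (assuming the issue reproduces locally) is to log for a while, dump a V8 heap snapshot, and inspect the retainers in Chrome DevTools. A rough sketch, with the log loop and timing just placeholders:

```js
// heap-check.js: log for a while, then write a heap snapshot so the
// retainers of thread-stream's buffer can be inspected in DevTools.
import { writeHeapSnapshot } from 'v8';
import pino from 'pino';

// Assumes pino-pretty is installed; any transport target works here.
const logger = pino({ transport: { target: 'pino-pretty' } });

for (let i = 0; i < 10000; i++) {
  logger.info({ i }, 'warming up the transport worker');
}

// Give the worker a moment to drain, then snapshot the heap.
setTimeout(() => {
  const file = writeHeapSnapshot(); // writes a .heapsnapshot file to cwd
  console.error('heap snapshot written to', file);
}, 2000);
```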
What I mean is that this block of memory is fixed at 4 MB (or whatever other value you configure it to). It's instantiated once at the beginning, then your application runs for a while without issues, and then something happens and the worker dies from an out-of-memory error: WebAssembly.instantiate(): Out of memory: wasm memory.
Unless you are constantly creating new workers, I do not see how that could be the problem you are looking for.
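To illustrate the point: the shared block is allocated once when the stream is created and reused for the worker's whole lifetime, so the block itself cannot grow. A very simplified sketch of the allocation pattern (this is not the actual thread-stream code):

```js
import { Worker } from 'worker_threads';

// One fixed-size block, allocated once when the stream is created
// (4 MB here, matching the default discussed above). Log data is copied
// into it and read back out by the worker; the block itself never grows.
const dataBuf = new SharedArrayBuffer(4 * 1024 * 1024);

// Inline worker just to show that the same block is shared, not copied.
const worker = new Worker(
  `const { workerData, parentPort } = require('worker_threads');
   const view = new Uint8Array(workerData.dataBuf);
   parentPort.on('message', (len) => {
     parentPort.postMessage(Buffer.from(view.subarray(0, len)).toString());
   });`,
  { eval: true, workerData: { dataBuf } },
);

new Uint8Array(dataBuf).set(Buffer.from('hello from the shared block'));
worker.on('message', (line) => {
  console.log(line); // "hello from the shared block"
  worker.terminate();
});
worker.postMessage(27); // bytes written above
```

Only creating many of these streams (i.e. many workers) would keep allocating new blocks.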
We have a log that fires for all successful requests from our application; we noticed that when we remove it, the issue goes away. This log fires a lot, as you can imagine.
The memory is still rising in our application, but at a much slower rate (I suspect because of the other logs left in there), and I haven't seen the WebAssembly.instantiate(): Out of memory: wasm memory error yet.
I think there is a leak somewhere in pinoJS, and it relates to hitting WebAssembly's memory limit when running a large number of logging workers.
Have you simulated running a large number of logs before?
I also noticed, running locally, that the process in Activity Monitor rises to 100+ threads, which is about the time I get the WebAssembly.instantiate(): Out of memory: wasm memory error.
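By "a large number of logs" I mean something like the following synthetic loop against a single transport (the numbers are arbitrary, just enough to watch memory over time):

```js
import pino from 'pino';

// One logger, one transport, hit with a large number of log calls.
// Watching memory over time shows whether RSS keeps climbing or
// stabilises once the transport worker catches up.
const logger = pino({ transport: { target: 'pino-pretty' } });

let sent = 0;
const timer = setInterval(() => {
  for (let i = 0; i < 1000; i++) {
    logger.info({ requestId: sent++, path: '/api/example' }, 'request completed');
  }
  const { rss, external } = process.memoryUsage();
  console.error(`sent=${sent} rss=${(rss / 1e6).toFixed(1)}MB external=${(external / 1e6).toFixed(1)}MB`);
  if (sent >= 1000000) clearInterval(timer);
}, 100);
```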
I think there is a leak somewhere in pinoJS, and it relates to hitting WebAssembly's memory limit when running a large number of logging workers.
How many workers are you creating? You should not need more than a few.
Does pino create a worker for each log?
If so, that would be a lot.
For instance, some quick math: if we have 200 users over time and each of them makes 4 requests, that's 800 logs, and those numbers are conservative.
Does pino create a worker for each log?
Can you elaborate on this question? Having an example of your setup would be useful.
(Your application should not create a new worker for each log).
I am using transports to log events. As I understand it, the code that handles the log runs inside a worker; I am curious whether pinoJS creates a new worker every time we run a log?
Unfortunately, the application I am working on is under an NDA, so I can't share too much of the code with you.
I can attempt to re-create a low-level version of what I am doing, which may or may not reproduce the error, if you think that would be helpful?
I am using transports to log events. As I understand it, the code that handles the log runs inside a worker; I am curious whether pinoJS creates a new worker every time we run a log?
No, there is only one worker instance for every transport you create.
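In other words, the worker thread is spawned once when the transport is set up, and every log call after that writes to that same worker. Roughly, as a sketch using the documented pino.transport API:

```js
import pino from 'pino';

// The worker thread is created here, once, when the transport is built...
const transport = pino.transport({ target: 'pino-pretty' });
const logger = pino(transport);

// ...and every subsequent log call writes to that same worker.
// No new threads are created per log line.
for (let i = 0; i < 1000; i++) {
  logger.info({ i }, 'all of these go through the one worker');
}
```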
I created a project with a dumbed-down version of our app; hopefully that's helpful.
You'll find most of the code in the /helpers folder
https://github.com/moshie/memory-issue
Worth mentioning: I couldn't replicate it with ☝️
threads-sample-11-15-00.txt (a sample of the process I took from Activity Monitor)
You shouldn't create a new pino instance for every request: https://github.com/moshie/memory-issue/commit/102946eb6129f9b846bf3ad353fa33e5fa264fb6#diff-c1b593e8452344f7bae31fbb28f6489d240c99280ec645332dc7ae5e5fc5a9f6R15
There's no reason why that shouldn't be declared globally.
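To spell out the same fix with a stripped-down handler (the route here is illustrative, not the code from the linked repo):

```js
import pino from 'pino';

// Anti-pattern: creating the logger inside the handler spawns a new
// transport worker on every request, which is what eventually exhausts
// the wasm memory.
//
//   export default function handler(req, res) {
//     const logger = pino({ transport: { target: 'pino-pretty' } });
//     ...
//   }

// Fix: one instance at module scope, shared by every request.
const logger = pino({ transport: { target: 'pino-pretty' } });

export default function handler(req, res) {
  logger.info({ url: req.url }, 'request received');
  res.end('ok');
}
```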
Pretty much that!
Thanks both! I'll refactor to stop it doing that.
@marcbachmann has this worked out for you?
I started to see the exact same behavior (Next + Pino), and I'm declaring it in the global scope in a logger.js module:
```js
// logger.js
import pino from 'pino';

function getTransportObject() {
  if (['test', 'development'].includes(process.env.NODE_ENV) || process.env.CI) {
    return {
      transport: {
        target: 'pino-pretty',
      },
    };
  }
}

function getLogLevel() {
  if (process.env.LOG_LEVEL) {
    return process.env.LOG_LEVEL;
  }
  if (process.env.NODE_ENV === 'production') {
    return 'info';
  }
  return 'warn';
}

const logger = pino({
  base: {
    environment: process.env.VERCEL_ENV || 'local',
  },
  level: getLogLevel(),
  nestedKey: 'payload',
  redact: [
    'headers.cookie',
    'password',
    'email',
    'body.password',
    'body.email',
    'context.user.password',
    'context.user.email',
    'context.session.token',
  ],
  ...getTransportObject(),
});

export default logger;
```
I code using Jest "watch" mode, and after a while:
```
error - uncaughtException: RangeError: WebAssembly.instantiate(): Out of memory: wasm memory
error - unhandledRejection: Error: the worker has exited
```
"pino": "7.9.2",
"pino-pretty": "7.6.1",
"next": "12.1.4",
Node v14.18.3
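In case it's related: Jest watch mode (and Next's dev server) re-evaluate modules frequently, and each fresh evaluation of a module-level transport spawns another worker thread. One thing that might be worth trying in dev/test, assuming pino-pretty v7+, is using pino-pretty as an in-process stream instead of a worker transport. A sketch of a variant of the config above (the level handling is simplified):

```js
// logger.js variant: avoid spawning a worker thread in dev/test by using
// pino-pretty as a plain in-process stream instead of a transport target.
import pino from 'pino';
import pretty from 'pino-pretty';

const isDevLike =
  ['test', 'development'].includes(process.env.NODE_ENV) || process.env.CI;

const logger = isDevLike
  ? pino({ level: 'warn' }, pretty({ colorize: true }))
  : pino({ level: 'info' });

export default logger;
```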
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Hi,
I have a NextJS application which runs on the LTS version of Node (v16.13.2) in a Linux Docker container.
I seem to be getting an error after the app has been running for a while, where WebAssembly is erroring due to running out of memory.
The error I managed to pull out is pretty vague, but it has pino below it; perhaps it's draining WebAssembly's memory?
I have one custom transport which I have written (see below the error); this could have a memory leak in it, and I would like to get your opinion on it.
The worker exiting might be a consequence of the WebAssembly error, but I am not sure.
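The custom transport itself isn't reproduced here, but for anyone reviewing one like it, the usual shape built on pino-abstract-transport is a single async iteration over the log stream, along these lines (a generic sketch, not the transport from this report):

```js
// generic-transport.js: the typical shape of a pino v7 transport, shown
// only as a generic sketch; this is NOT the custom transport from the report.
import build from 'pino-abstract-transport';

export default async function (opts) {
  return build(async function (source) {
    // `source` is an async iterable of parsed log objects coming from the
    // worker thread; handle each one and let it go out of scope so nothing
    // accumulates between iterations.
    for await (const obj of source) {
      // console.log stands in for the real sink (HTTP, file, etc.).
      console.log(JSON.stringify(obj));
    }
  });
}
```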