Closed HUGEIT closed 2 years ago
What's your pino configuration? If you are using a transport or async logging, your description is essentially correct: when the system is under high load, the log lines are queued and flushed whenever the I/O layer allows it. This minimizes overhead.
console.log, by contrast, is mostly synchronous (and therefore blocks the event loop).
@mcollina thank you for responding
it's literally just
import P from 'pino'
P({ level: 'trace' })
so there should be no async or remote transport logging involved?
Yes this is synchronous
man I really don't know how to prove my point here. the statements are literally one after another and there is this delay.
I will check if I'm able to make a minimal example somehow
ok here is an example output
code is:
date = new Date()
console.log(date.getMilliseconds(), "pre log")
logger.trace('log')
console.log(date.getMilliseconds(), "post log")
output is:
612 pre log
612 post log
{"level":10,"time":1647465394613,"pid":21554,"hostname":"me","msg":"log"}
after a few minutes this is worse:
908 pre log
908 post log
------14 seconds pass
{"level":10,"time":1647465595908,"pid":21554,"hostname":"me","msg":"log"}
and eventually it even goes cross-over:
622 pre log
622 post log
322 pre log
322 post log
{"level":10,"time":1647465595622,"pid":21554,"hostname":"me","msg":"log"}
{"level":10,"time":1647465595322,"pid":21554,"hostname":"me","msg":"log"}
it looks like the console.log lines printed for iteration n AND for iteration n+1, which are at least 20 seconds apart, before the pino logging suddenly caught up with the console.logs, though not at all in a synchronous order with them.
needless to say, sometimes they all fire in the correct order, but most of the time they are off. CPU usage is way below 10% on that core.
any idea?
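One aside on the snippet above: it constructs a single Date and calls getMilliseconds() on it twice, so "pre log" and "post log" will always print the same value no matter how long the statements between them take. A sketch that timestamps each statement independently (the pino call from the report is stubbed out here):

```javascript
// Take a fresh timestamp for each line so the printed milliseconds
// reflect when that statement actually ran.
const pre = new Date();
console.log(pre.getMilliseconds(), 'pre log');

// logger.trace('log'); // pino call from the report; stubbed for this sketch

const post = new Date();
console.log(post.getMilliseconds(), 'post log');
```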
What OS is this code running? Could you upload a complete example to reproduce this problem: something I can run on my system.
hey this is on macOS 12.3.
I've tried to narrow down things to a minimal node package but unfortunately I'm not able to reproduce this in an isolated manner.
is there anything that could potentially delay pino's processing somehow? like a busy event emitter or stuff like that?
Hi, I'm not 100% sure, but maybe I'm currently facing the same weird problem with pino:
This is how I instantiate pino
const loggerOptions: LoggerOptions = {
  level: LOG_LEVEL,
  formatters: {
    // in production print the label instead of the numeric value of the level
    level: (label) => ({ level: label }),
  },
};
this.log = pino(loggerOptions);
Then this same logger instance is used throughout our app (we use the same instance, we don't recreate new ones). We just do usual logging like log.debug, log.info and log.error. These are the only interactions we have with the logger. Nothing special.
Our app is just a simple gRPC server, and for debugging, the only calls made to it are two health check calls every 5 seconds. For one health check call, we output 2 log lines (we use log.info()). The server responds fine to these health checks. So the expected output is 4 log lines every ~5s.
But from some point in time, pino starts to write the output after ~10 min instead of at the expected 5 s frequency, writing a big batch of log lines at once.
The version of pino is the latest version.
The process runs on node:14.15.1:alpine.
For instance, this is the log with pino: (screenshot omitted)
and this is the log without pino, using simple console.log: (screenshot omitted)
is there anything that could potentially delay pino's processing somehow? like a busy event emitter or stuff like that?
The only thing is the standard-out processing. As an example, if you are logging to a terminal, it can signal backpressure ("I am busy") and pino will wait.
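That "wait" is ordinary stream backpressure, which you can observe directly on process.stdout; a minimal sketch:

```javascript
// process.stdout.write returns false once the stream's internal buffer
// exceeds its highWaterMark; queued chunks are flushed later, and the
// 'drain' event signals that it is safe to write again.
const accepted = process.stdout.write('one log line\n');
if (!accepted) {
  process.stdout.once('drain', () => {
    process.stdout.write('next line after drain\n');
  });
}
```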
In the same context you said two statements that contradict each other:
What's the destination for the logs? Are you logging to a TTY (a terminal)? Or are you accessing them via docker logs?
We would need an application showing step-by-step what you are doing to understand what's going on.
Which version of pino are you using? Could you try with the latest one?
@mcollina you mixed posts from @HUGEIT and mine @doums
I would really like to have a way to repro this
I'm really sorry but I'm unable to make a minimal example.
we have this somewhat big project that showcases the behavior under specific circumstances, but if I try to extract the relevant bits and loop them 50k times, I cannot reproduce it and pino works as expected. it's only in the big project that this issue comes up.
what I thought to be true was that it happens once CPU usage sticks at 100% for a while, and then persists even after cooling off; however, the tests I made today showed that as long as the CPU is not busy, the logs come in as normal.
so all in all this may actually be a case of "console.log is strictly sync, pino does some more processing and is less tight to when its data can write to the output stream".
I will let you know once I find more
"console.log is strictly sync, pino does some more processing and is less tight to when its data can write to the output stream".
That's not correct. The default configuration of pino uses fs.writeSync(). Or are you using any other configuration?
What version of pino are you using?
Are all logs properly timestamped?
I also encountered this problem.
After my observation, this delay increases over time.
When I switched back to console.log, the problem no longer appeared.
Closing for now. If anybody could produce a reproduction script for this I would love to take a look.
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
I was debugging something using pino and noticed that some functions are called with a noticeable delay. whilst I'm still not quite sure where the bottleneck in my code is, I noticed that I'm seeing console.logs much earlier than the pino output. pino seems to be instantaneous at first, but as the process ages, the pino output seems to come in later and later.
I can see a few seconds' delay between a console.log and a pino.trace that are called at the same time. I tried searching here for terms like "throttle" or "debounce" but I cannot find anything regarding that - should I? why are the output lines delayed sometimes?
(that is with a { level: 'trace' } constructed object)
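For anyone trying to quantify such a delay, wall-clock milliseconds from Date are a shaky yardstick; a sketch using Node's monotonic clock instead, to measure how long a single log call blocks the current tick:

```javascript
// process.hrtime.bigint() is a monotonic clock in nanoseconds, immune to
// wall-clock adjustments; the difference measures how long the call took.
const start = process.hrtime.bigint();
console.log('probe line');
const elapsedNs = process.hrtime.bigint() - start;
// elapsedNs is how long the console.log call itself blocked this tick.
```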