dmuth closed this issue 7 years ago.
Good workaround -- unfortunately, this is the case with pretty much all logging libs that keep an open WriteStream (and frankly, a lib that used one-off writes instead would take such a performance hit and such a drain on the thread pool that it wouldn't be worth using).
However, I'm keeping this issue open because I have a decent idea for getting child processes' Bristol instances to autoconfigure from the parent process and for automating the passing of messages. I'll comment again when that's in.
I'm going to close this for now. While there is a good solution to this, it doesn't appear to be worth the effort at this point. There's a strong consensus that for most use cases, if you're running Node on a machine with multiple CPUs, you should either:
With the above, you gain durability essentially for free, along with a less complex codebase as a result.
I'll reopen if this becomes a popular request!
Let's say you have multiple processes logging to the same file:
Then run the script, and while the script is running, fire up lsof:
Two processes, two different file descriptors, same file. Under UNIX, there is no guarantee that writes to a file are atomic, so a partial line written through one file descriptor can be interrupted by a line written through the other, creating a "staircase" effect in the log.
Opening the files in the master process and then spawning the child processes won't help, either. (I tried.)
The best solution I've found so far is to have a single process do all of the file writing. This can be done with process.send() as follows:
There are probably other ways to accomplish the same thing; this particular technique has worked for me in my projects.