Closed by dgw 8 years ago
In the past, when I've seen this, it's because the file is being opened once, written to a lot, then closed way later (if ever). The solution I've found is to open and close the logfile with each write.
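That open-write-close approach can be sketched like this (a minimal example; the path and the `log_line` helper name are hypothetical, not Bucket's actual code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Open the logfile, append one line, and close it again immediately.
# Closing the handle flushes Perl's write buffer, so the line is
# visible to `tail -f` as soon as this sub returns.
sub log_line {
    my ($path, $msg) = @_;
    open my $fh, '>>', $path or die "Can't open $path: $!";
    print {$fh} "$msg\n";
    close $fh or die "Can't close $path: $!";
}

log_line('/tmp/bucketlog.test', 'hello');
```

The tradeoff is an open/close syscall pair per log line, which is usually negligible for a chat bot's logging volume.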
Alternatively (I've never tried it), you might be able to flush it out with `$| = 1;`. That might only work on STDOUT, though, so you might have to do something to tell it to work on the file instead.
Did a little research. Apparently, file I/O in Perl goes through a buffering layer, including appended writes. By default it only flushes new log lines out to disk when the write buffer fills up, and the buffer is typically several kilobytes, so it can take a fair amount of chatting to fill.
Opening and closing the log file for each write would require a bit more code refactoring than I want to do, given that currently Bucket opens the log file and just hangs on to the filehandle, checking if it needs to be reopened every so often. But I think there's another solution called "making the filehandle hot", described in that buffering FAQ page, that will solve this by making Perl flush the buffer on every write. Going to test tonight, and submit a PR if it solves the issue.
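For reference, here's a minimal sketch of the "hot filehandle" trick (the log path is hypothetical). `$|` only affects the currently selected handle, STDOUT by default, so the classic idiom is to `select` the log handle, set `$| = 1`, and restore the old selection; `IO::Handle`'s `autoflush` method does the same thing more readably:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;  # gives lexical filehandles the autoflush method

open my $log, '>>', '/tmp/hotlog.test' or die "Can't open log: $!";

# Option 1: the select/$| dance. $| applies to the currently
# selected handle, so select the log handle, set it, then restore.
my $old = select $log;
$| = 1;
select $old;

# Option 2, equivalent and clearer: IO::Handle's autoflush.
$log->autoflush(1);

# With the handle hot, each print is flushed immediately instead of
# sitting in Perl's write buffer until it fills or the handle closes.
print {$log} "this line hits the disk immediately\n";
```

Either form keeps the long-lived filehandle, so it should drop into Bucket's existing open-once structure without refactoring.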
I bet it will. The trick you suggested with `$|` seems to be related, @SpicyLemon, as part of making the filehandle hot is setting `$| = 1`.
I honestly have no idea why this is still open. The PR to fix it was merged months ago…
`tail -f /path/to/bucketlog` is basically useless, since it might take minutes or hours to update. And when it does update, it dumps out a ton of lines.

When I shut down my Bucket instance, it also throws away some log data that is never written out. (I do shut it down with `service bucket stop` on Ubuntu, so Bucket might not be given the chance to clean up before it exits.)

I'm looking into this, because I really want to be able to tail Bucket's log to find more info on another issue.