Closed: driekus77 closed this issue 5 years ago.
It sounds like you are producing logs faster than they can be consumed. For the majority of applications, dropping messages is the sensible thing to do, as it favors losing logs over leaking memory or blocking the application. No matter where or how you buffer the logs, if you produce logs asynchronously, your consumer must (on average) be able to consume messages faster than they are produced.
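To make that tradeoff concrete, here is a minimal C# sketch (not Serilog code; `BlockingCollection` is used purely to illustrate the choice a bounded queue forces when it fills up):

```csharp
using System.Collections.Concurrent;

class Producer
{
    // A bounded queue can hold at most 10,000 pending events.
    static readonly BlockingCollection<string> Queue =
        new BlockingCollection<string>(boundedCapacity: 10_000);

    static void Log(string message)
    {
        // Option 1: Queue.Add(message) would BLOCK the caller until space frees up.
        // Option 2: TryAdd with a zero timeout DROPS the event when the queue is full.
        if (!Queue.TryAdd(message, millisecondsTimeout: 0))
        {
            // Event lost: favors application throughput over log completeness.
        }
    }

    static void Main()
    {
        // With no consumer draining the queue, events beyond 10,000 are dropped.
        for (var i = 0; i < 20_000; i++)
            Log($"event {i}");
    }
}
```

Either choice is defensible; what a bounded queue cannot do is accept an unbounded backlog without consequence.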
Here are some tips: the dropping behavior lives in the sink's `public void Emit(LogEvent logEvent)`, which discards incoming events when the queue is full.

Thanks for the input!
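For context, the drop logic in a bounded-queue sink typically looks like the following (an illustrative sketch, not the actual `PeriodicBatchingSink` source; `_queue` and `_queueLimit` are assumed fields):

```csharp
using System.Collections.Concurrent;
using Serilog.Core;
using Serilog.Events;

class BoundedQueueSink : ILogEventSink
{
    readonly ConcurrentQueue<LogEvent> _queue = new ConcurrentQueue<LogEvent>();
    readonly int _queueLimit = 10_000;

    public void Emit(LogEvent logEvent)
    {
        // When the background consumer falls behind and the queue reaches
        // its limit, the incoming event is silently discarded.
        if (_queue.Count >= _queueLimit)
            return;

        _queue.Enqueue(logEvent);
    }
}
```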
@driekus77 implementing `AuditTo` on the Oracle sink would be the only 100% sure-fire way to prevent message loss, since there's more to the picture than just full batches (Serilog will suppress all exceptions in the `WriteTo` pipeline, making it useless for "real" auditing, which it isn't designed for; `AuditTo` is all about this use case).
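To illustrate the distinction, here is a sketch using Serilog's `AuditTo.Sink` and `WriteTo` configuration (the `OracleAuditSink` class is hypothetical, standing in for a real synchronous Oracle writer):

```csharp
using Serilog;
using Serilog.Core;
using Serilog.Events;

var log = new LoggerConfiguration()
    .AuditTo.Sink(new OracleAuditSink()) // AuditTo: write failures propagate to the caller
    .WriteTo.Console()                   // WriteTo: write failures are suppressed
    .CreateLogger();

log.Information("This event must reach the audit store or the call throws");

// Hypothetical audit sink: writes synchronously and lets exceptions escape,
// so a failed write surfaces at the logging call site instead of being swallowed.
class OracleAuditSink : ILogEventSink
{
    public void Emit(LogEvent logEvent)
    {
        // Write the event to Oracle here; do NOT catch exceptions.
    }
}
```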
`PeriodicBatchingSink` isn't really intended to form the basis for sinks that do auditing; for example, in the Seq sink, the default `WriteTo` configuration uses a `PeriodicBatchingSink`, but when configured with `AuditTo` a different implementation is used. Happy to help you implement this for Oracle if you're keen?
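A sketch of that dual-configuration pattern (the extension-method shape is the convention Serilog sinks follow; the sink class names here are hypothetical stubs, not the real Oracle sink implementation):

```csharp
using Serilog;
using Serilog.Configuration;
using Serilog.Core;
using Serilog.Events;

public static class OracleLoggerConfigurationExtensions
{
    // WriteTo.Oracle(...): batched and lossy under pressure; write failures
    // are suppressed by the Serilog pipeline.
    public static LoggerConfiguration Oracle(
        this LoggerSinkConfiguration writeTo, string connectionString) =>
        writeTo.Sink(new OracleBatchedSink(connectionString));

    // AuditTo.Oracle(...): synchronous and durable; write failures propagate
    // to the logging call site.
    public static LoggerConfiguration Oracle(
        this LoggerAuditSinkConfiguration auditTo, string connectionString) =>
        auditTo.Sink(new OracleAuditSink(connectionString));
}

// Hypothetical stubs standing in for real implementations.
class OracleBatchedSink : ILogEventSink
{
    public OracleBatchedSink(string connectionString) { }
    public void Emit(LogEvent logEvent) { /* buffer for a batched write */ }
}

class OracleAuditSink : ILogEventSink
{
    public OracleAuditSink(string connectionString) { }
    public void Emit(LogEvent logEvent) { /* write synchronously, let exceptions escape */ }
}
```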
I'll close this as I don't think it aligns with the goals of this particular package, but that's not to say we can't do more elsewhere to make this scenario easier (I just spotted the interesting "burst" sink work via your Oracle sink PR - will have to dig in!)
@Aenima4six2 and @nblumhardt Thanks for your explanations!
Kind regards, Henry Roeland
I'm busy adding some features to Serilog.Sinks.Oracle: https://github.com/lucascebertin/Serilog.Sinks.Oracle/pull/1
This sink implements (uses) Serilog.Sinks.PeriodicBatching.
A while ago I also tested the MSSQLServer sink, which was then also using PeriodicBatching, and discovered the same issue: by default, not all messages are written! Storing 10,000 log events in Oracle took 51 ms, but only 477 were actually stored! Note: I'm timing the `Log.CloseAndFlush()` method.
I looked through the PeriodicBatching code, and in the "OnTick"(?) function it drops messages when the queue becomes too full.
If I want to use Serilog with Oracle logging as a (kind of) audit log, it's useless if messages are being dropped!
Is there a way to tweak `PeriodicBatchingSink` so that it logs all messages and never drops anything?
I know that I can set the queue limit very high, but the point is that you don't know up front how many messages are going to be logged! Dropping messages does not seem right to me.
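For completeness, recent versions of Serilog.Sinks.PeriodicBatching expose the queue limit through an options object (property names per my understanding of the newer API and worth verifying against your package version; raising the limit only postpones dropping, it cannot eliminate it):

```csharp
using System;
using Serilog.Sinks.PeriodicBatching;

var options = new PeriodicBatchingSinkOptions
{
    BatchSizeLimit = 1000,            // events written per batch
    Period = TimeSpan.FromSeconds(2), // flush interval
    QueueLimit = 100_000              // pending events kept before dropping begins
};
// Pass `options` to the batching sink. With an unbounded producer, any
// finite QueueLimit can still overflow, which is why AuditTo exists.
```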
Kind regards, Henry