serilog-contrib / serilog-sinks-slack

A simple (yet customizable) Slack logging sink for Serilog

Does not force flush on disposal #11

Closed sburman closed 2 years ago

sburman commented 7 years ago

I have a console app that executes quite quickly, and the number of messages relayed to Slack is inconsistent (i.e., out of 6 log messages, I might sometimes see 1, 2, or 3, but never the full 6).

I understand the limitations of posting to a webhook, but I thought that as long as I am disposing correctly, the sink would be fully flushed. Maybe a synchronous option if people really want this enforced?
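
For context, the disposal pattern I have in mind is roughly the following (a minimal sketch, with a placeholder webhook URL):

using Serilog;
using Serilog.Sinks.Slack;

class Program
{
    static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Slack("https://hooks.slack.com/services/xxxxxxxxx") // placeholder webhook URL
            .CreateLogger();

        try
        {
            for (var i = 1; i <= 6; i++)
            {
                Log.Information("Message {Number} of 6", i);
            }
        }
        finally
        {
            // Expectation: this blocks until every pending event has been posted to Slack.
            Log.CloseAndFlush();
        }
    }
}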

Thoughts?

geovanimartinez commented 7 years ago

I am running into the same issue. My workaround is not efficient (it uses Thread.Sleep), but it has improved my delivery rate.

private void FlushAndExit(Exception exception, string message = "")
{
    if (string.IsNullOrEmpty(message)) { message = "Unrecoverable Runtime Error, Application Will Exit"; }

    Log.Fatal(exception, message); // NOTE: Will post Fatal events to #devops slack channel
    Log.Information("[{Name} - END...]", Assembly.GetEntryAssembly().GetName().Name);

    Log.CloseAndFlush();

    // Give ample time for the CloseAndFlush to finish...
    Thread.Sleep(new TimeSpan(0, 0, 0, 10));
    Environment.Exit(-1);
}

LaurensvandenBerg commented 7 years ago

Same problem here; my program terminates rather quickly after I log an event to this sink (I only send Fatal messages to Slack, as it runs on multiple machines). When testing with Thread.Sleep, I start to receive the messages with a delay of about 1200-1250 ms (1500 ms to be safe). This, however, is very machine- and instance-specific, and I figure there should be a different, more flexible way of checking that everything has been sent before I dispose of my program.

homolibere commented 2 years ago

This is still not fixed in v2.1.0.

TrapperHell commented 2 years ago

@homolibere I'm going to need more information to work with this. This should have been fixed since v2.0.0, but there may be situations in which the app terminates before it has finished flushing the content to the underlying sink, and thereby before logging to Slack.

homolibere commented 2 years ago

When I call Log.CloseAndFlush(), it does not wait until all messages are flushed and terminates the application right away. Using Thread.Sleep is the only workaround, but it's very dirty code.

TrapperHell commented 2 years ago

If I recall correctly, that should be one of the more common scenarios and should work correctly without the need for Thread.Sleep. I might need to take another look at it at some point next week. How are you using the logger in your application? Are you using DI at all, or working directly with the static Log? Could you share a snippet of how you're configuring the logger, in a succinct manner that reproduces the error?

homolibere commented 2 years ago

using Serilog;
using Serilog.Formatting.Json;
using Serilog.Sinks.Slack;

var configuration = new LoggerConfiguration().Enrich.FromLogContext();

configuration.WriteTo.Console(formatter: new JsonFormatter());
configuration.WriteTo.Slack("https://hooks.slack.com/services/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");

Log.Logger = configuration.CreateLogger();

Task task = Task.Run(async () =>
{
    for (int i = 0; i < 10; i++)
    {
        Log.Logger.Information(i.ToString());
        await Task.Delay(500);
    }
});

task.Wait();

Log.CloseAndFlush();

.NET 6 console example. Without the 500 ms delay it does not show any messages; with it, at least the first 3-4 come through.

TrapperHell commented 2 years ago

This is to confirm that I can reproduce your issue (without the need for Task.Run), so I will be re-opening this.
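
For the record, the simplified repro I am testing against is roughly the following (a sketch along the lines of your snippet, with a placeholder webhook URL):

using Serilog;
using Serilog.Sinks.Slack;

Log.Logger = new LoggerConfiguration()
    .WriteTo.Slack("https://hooks.slack.com/services/xxxxxxxxx") // placeholder webhook URL
    .CreateLogger();

// Log a handful of events synchronously; no Task.Run involved.
for (var i = 0; i < 10; i++)
{
    Log.Information("Message {Number}", i);
}

// Without the fix, the process can exit before the batched events reach Slack.
Log.CloseAndFlush();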

TrapperHell commented 2 years ago

I was pretty confident that this issue had already been resolved in the past - and in fact, it was. Unfortunately, a later commit undid the fix through an out-of-order disposal.
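
To illustrate what I mean by out-of-order disposal, the failure mode was roughly the following (an illustrative sketch using hypothetical type and member names, not the sink's actual code):

using System;
using System.Net.Http;

// Hypothetical wrapper standing in for the sink's disposal logic.
public sealed class CompositeSlackSink : IDisposable
{
    private readonly IDisposable _batchingSink; // buffers events and posts them in batches
    private readonly HttpClient _httpClient;    // used by the batching sink to call the webhook

    public CompositeSlackSink(IDisposable batchingSink, HttpClient httpClient)
    {
        _batchingSink = batchingSink;
        _httpClient = httpClient;
    }

    public void Dispose()
    {
        // Buggy order: tearing down the HttpClient first means the final flush
        // triggered by disposing the batching sink has nothing left to post with.
        //   _httpClient.Dispose();
        //   _batchingSink.Dispose();

        // Correct order: dispose the batching sink first so it can flush its
        // remaining events, then release the HttpClient.
        _batchingSink.Dispose();
        _httpClient.Dispose();
    }
}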

This should be resolved again in v2.2.0. Kindly confirm.

homolibere commented 2 years ago

Confirmed, it is fixed now.