serilog-contrib / serilog-sinks-splunk

A Serilog sink that writes to Splunk
https://splunk.com
Apache License 2.0

'time' property not sent to Splunk in logging event #154

Closed angusmillar closed 2 years ago

angusmillar commented 2 years ago

Hi Guys,

First up, thank you for all the work you have put into this Sink, it is much appreciated by many.

I have been working with our Splunk administrator on an issue whereby the 'time' property is not present in my log events sent to Splunk.

I am using .NET 5 with the following NuGet packages: Serilog.AspNetCore 4.1.0 and Serilog.Sinks.Splunk 3.7.0.

Here is my Program.cs:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Serilog;
using Steeltoe.Extensions.Configuration.ConfigServer;
using System;

namespace Sonic.IATA.ResultingService.Api
{
  /// <summary>
  /// The main entry point class
  /// </summary>
  public class Program
  {
    /// <summary>
    /// The name given to this service for logging
    /// </summary>
    public const string ApplicationName = "Sonic.IATA.ResultingService";

    /// <summary>
    /// The main entry point method
    /// </summary>
    /// <param name="args"></param>
    public static void Main(string[] args)
    {
      //This log provider is used until the logger configured below via 'UseSerilog' takes over
      //The latter log provider additionally logs to Splunk
      Log.Logger = new LoggerConfiguration()
        .Enrich.FromLogContext()
        .MinimumLevel.Is(Serilog.Events.LogEventLevel.Information)
        .MinimumLevel.Override("Microsoft", Serilog.Events.LogEventLevel.Information)
        .WriteTo.Console()
        .WriteTo.File(System.IO.Path.Combine("Logs", "StartupLog-.log"), rollingInterval: RollingInterval.Day)
        .CreateBootstrapLogger();
      try
      {
        Log.Information($"{ApplicationName} starting up");
        CreateHostBuilder(args).Build().Run();
      }
      catch (Exception ex)
      {
        Log.Fatal(ex, $"{ApplicationName} start-up failed");
      }
      finally
      {
        Log.CloseAndFlush();
      }
    }

    /// <summary>
    /// CreateHostBuilder
    /// </summary>
    /// <param name="args"></param>
    /// <returns></returns>
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)          
          .UseSerilog((context, config) =>
          {
            config.Enrich.FromLogContext();
            config.MinimumLevel.Is(Serilog.Events.LogEventLevel.Information);
            config.MinimumLevel.Override("Microsoft", Serilog.Events.LogEventLevel.Information);
            config.WriteTo.Console();
            config.WriteTo.File(System.IO.Path.Combine("Logs", "MainLog-.log"), rollingInterval: RollingInterval.Day);
            config.WriteTo.EventCollector(
              splunkHost: "https://AddressToOurSplunkServerRemoved:8088/services/collector/event", 
              eventCollectorToken: "36f99576-5307-4008-b7d3-8c2081795687", 
              source: "Sonic:IATA", 
              sourceType: "Sonic:IATA:ResultingService");            
          })            
          .ConfigureWebHostDefaults(webBuilder =>
          {
            webBuilder.UseStartup<Startup>();
          });
  }
}

Here is how I create a very simple logging event: I take a dependency in my class constructor on ILogger<InviteController> Logger; and then write a logging event as follows: Logger.LogError("Angus Millar Testing");

If I look at what is written to my text file log, I can see the log event occurring at 2022-01-28 10:49:40.567 +10:00 as follows:

2022-01-28 **10:49:40.567 +10:00** [ERR] **Angus Millar Testing**

However, if I look in Splunk I see a millisecond difference, and the time zone is one hour ahead. The time zone difference is understandable, as our Splunk is in a +11:00 time zone whereas my running service is in a +10:00 time zone. However, the millisecond difference indicates that Splunk is only reporting the time the event arrived in Splunk, not a 'time' written into the event data. (screenshot attached)

Furthermore, working with my Splunk administrator we do not see a 'time' property in the log event JSON data sent.

Here is an example log event produced by a different system. This event has a wrapping JSON object containing both 'time' and the inner event data in the 'event' property. This is an example of what we are hoping to achieve, or at a minimum, we want 'time' to come across.

{
    "time": "1643257471.96",
    "event": {
        "Level": "Information",
        "RenderedMessage": "End processing HTTP request after 48.4135ms - 200",
        "Properties": {
            "ElapsedMilliseconds": 48.4135,
            "StatusCode": 200,
            "EventId": {
                "Id": 101,
                "Name": "RequestPipelineEnd"
            },
            "SourceContext": "System.Net.Http.HttpClient.cache.LogicalHandler",
            "HttpMethod": "GET",
            "OtherProperties": "Other Properties have been removed for brevity"           
        }
    }
}
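As an aside, the "time" value in that envelope is a Unix epoch timestamp in seconds (with a fractional part for sub-second precision), which is the format Splunk HEC expects. A quick sanity check of the value above (a minimal sketch in Python, purely for illustration and unrelated to the sink itself):

```python
from datetime import datetime, timezone

# The "time" value from the envelope above, sent by the other system.
raw = "1643257471.96"

# Splunk HEC treats this as Unix epoch seconds; convert it to a UTC datetime.
dt = datetime.fromtimestamp(float(raw), tz=timezone.utc)
print(dt.isoformat())
```

This lands in late January 2022, consistent with the timestamps in the surrounding logs.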

Here is an example of what we are currently producing using the setup detailed above. As you can see, there is no 'time' element, and the outer wrapping JSON object is not present.

{
    "Level": "Information",
    "MessageTemplate": "{HostingRequestFinishedLog:l}",
    "RenderedMessage": "Request finished HTTP/2 GET https://localhost:5001/swagger/v1/swagger.json - - - 200 - application/json;charset=utf-8 134.6275ms",
    "Properties": {
        "ElapsedMilliseconds": 134.6275,
        "StatusCode": 200,
        "ContentType": "application/json;charset=utf-8",
        "ContentLength": null,
        "Protocol": "HTTP/2",
        "Method": "GET",
        "OtherProperties": "Other Properties have been removed for brevity"
    }
}

Looking through this repo's code, I can see 'time' being set on line 160 of the SplunkJsonFormatter class, so I'm a little stumped as to what I am doing wrong.
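For what it's worth, the envelope that formatter appears to produce can be sketched roughly like this (Python used purely for illustration; the field names follow the Splunk HEC event format, and wrap_hec_event is a hypothetical helper, not part of the sink):

```python
import json

def wrap_hec_event(event: dict, epoch_seconds: float,
                   source: str = None, sourcetype: str = None) -> str:
    """Hypothetical sketch of a Splunk HEC event envelope: the serialized
    log event is nested under "event", with "time" as epoch seconds."""
    payload = {"time": f"{epoch_seconds:.3f}", "event": event}
    if source:
        payload["source"] = source
    if sourcetype:
        payload["sourcetype"] = sourcetype
    return json.dumps(payload)

# Roughly the epoch for 2022-01-28 10:49:40.567 +10:00, the timestamp
# from the text log above.
body = wrap_hec_event(
    {"Level": "Error", "RenderedMessage": "Angus Millar Testing"},
    1643330980.567,
    source="Sonic:IATA",
    sourcetype="Sonic:IATA:ResultingService",
)
print(body)
```

If Splunk's raw event view shows only the inner "event" object, the outer "time", "source", and "sourcetype" fields may still have been consumed as indexed metadata, which could explain why the wrapper is not visible in the raw JSON.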

Any help would be much appreciated.

Thanks Angus Millar

angusmillar commented 2 years ago

My Splunk administrator has come back and informed me that the 'time' property was coming through after all, so this is not an issue at all.

Sorry for the inconvenience.