Azure / azure-functions-host

The host/runtime that powers Azure Functions
https://functions.azure.com
MIT License

Application Insights not recognising logLevel settings in host.json #5250

Closed ghost closed 4 years ago

ghost commented 5 years ago

I have configured host.json to filter out certain trace/request logs in my Azure function. For example:

{
  "logging": {
    "logLevel": {
      "Host.Triggers.DurableTask": "Warning",
      "Host.Results": "Warning",
      "Function.FunctionName": "Warning",
      "Function.FunctionName.User": "Information",
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": false,
        "EnableDependencyTracking": false
      }
    }
  },
  ...
}

These logs appear to be successfully filtered in the Azure Functions in-portal console (Function started/completed logs no longer appear), but I've noticed that these configurations are not reflected in Application Insights logging. We're still receiving all information-level logs into our instance, which produces excessive logs.


Repro steps

Provide the steps required to reproduce the problem:

  1. Create a Function App with a function called 'FunctionName'
  2. Apply host.json settings sub-sections as detailed above
  3. Monitor logs on both in-portal console and on connected AI instance (via Live Metrics stream)

Expected behavior

Logs should be filtered in AI the same way they are in the console.

Actual behavior

Logs are not being filtered before they reach AI.

Known workarounds

I could code up a custom ITelemetryProcessor, but I want to make sure that there are no simpler workarounds before going down this route.
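For reference, such a processor could look roughly like the sketch below. This is only an illustration (the class name and the Warning threshold are assumptions, not anything prescribed by the Functions runtime); it drops low-severity traces before they are sent, and it would still need to be registered with the telemetry pipeline.

// Minimal sketch of the ITelemetryProcessor workaround mentioned above.
// Class name and threshold are illustrative only.
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class SeverityFilterProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public SeverityFilterProcessor(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        // Drop trace telemetry below Warning; pass everything else through the chain.
        if (item is TraceTelemetry trace &&
            trace.SeverityLevel.HasValue &&
            trace.SeverityLevel.Value < SeverityLevel.Warning)
        {
            return;
        }

        _next.Process(item);
    }
}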

mikesitch commented 5 years ago

I'm also trying this via host.json (to remove dependency tracking and performance counters), and it's not working for me either.

mhoeger commented 4 years ago

Adding @brettsam - can you weigh in on this when you're back from vacation?

ak98 commented 4 years ago

Using the Azure Functions log parameter (the ILogger passed into the function).

This has led to very large data consumption by App Insights (huge GBs and $$$). We have now seen a 100+ fold increase in spending on App Insights, with huge amounts of irrelevant logging occurring that is not controlled by host.json.

At its most basic description: the logLevel is ignored.

host.json

{
    "version": "2.0",
    "logging": {
        "logLevel": {
          "default": "Warning",
          "Host.Results": "Warning",
          "Function": "Warning",
          "Host.Aggregator": "Warning"
        }
    }
}

Expected output from the sample C# function below:

LogLevel.Trace.Enabled = False
LogLevel.Debug.Enabled = False
LogLevel.Information.Enabled = False
LogLevel.Warning.Enabled = True
LogLevel.Error.Enabled = True
LogLevel.Critical.Enabled = True
LogLevel.None.Enabled = True

Actual output:

LogLevel.Trace.Enabled = True
LogLevel.Debug.Enabled = True
LogLevel.Information.Enabled = True
LogLevel.Warning.Enabled = True
LogLevel.Error.Enabled = True
LogLevel.Critical.Enabled = True
LogLevel.None.Enabled = True

// Sample function to report which log levels are enabled
[FunctionName("AppInsightsDetailFunction")]
public static IActionResult Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req, ILogger log)
{
    var sb = new StringBuilder();
    foreach (LogLevel item in Enum.GetValues(typeof(LogLevel)))
    {
        sb.AppendLine($"LogLevel.{item}.Enabled = {log.IsEnabled(item)}");
    }
    sb.AppendLine(log.GetType().Name);

    return new OkObjectResult(sb.ToString());
}

Currently trying to throttle/disable App Insights; will probably have to remove the keys.

ak98 commented 4 years ago

Two possible fixes to the above problem:

  1. Temporary dumb fix using a decorator (see the wrapper below)

    [FunctionName("AppInsightsDetailFunction")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req, ILogger log)
    {
        // Insert this line at the start of each function.
        log = new LoggerWrap(log);
        // Usage of log now obeys LogLevel.Warning.
        // ...function code...
    }

    public class LoggerWrap : ILogger
    {
        private readonly ILogger _logger;
        private readonly LogLevel _minLevel;

        public LoggerWrap(ILogger logger, LogLevel min = LogLevel.Warning)
        {
            _minLevel = min;
            _logger = logger;
        }

        public IDisposable BeginScope<TState>(TState state) => _logger.BeginScope(state);

        public bool IsEnabled(LogLevel logLevel) => logLevel >= _minLevel;

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
        {
            // Only forward entries at or above the minimum level.
            if (IsEnabled(logLevel))
            {
                _logger.Log(logLevel, eventId, state, exception, formatter);
            }
        }
    }

     Note: this only affects usage of log by your own code, not by the runtime.

  2. Another option is to follow the manual procedures for setting up Application Insights (https://docs.microsoft.com/en-us/azure/azure-monitor/app/azure-web-apps?tabs=netcore); see the section "Manually instrumenting the application through code by installing the Application Insights SDK".

pragnagopa commented 4 years ago

@brettsam - Should LogLevel be specified for the ApplicationInsights logging provider, as shown here: https://docs.microsoft.com/en-us/azure/azure-monitor/app/ilogger#create-filter-rules-in-configuration-with-appsettingsjson?
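For reference, the linked article describes provider-scoped filter rules along these lines for ASP.NET Core apps using appsettings.json (shown only as a sketch of that pattern; whether the Functions host honors an equivalent provider-scoped rule is exactly the open question here):

{
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    },
    "ApplicationInsights": {
      "LogLevel": {
        "Default": "Warning"
      }
    }
  }
}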

andrewleader commented 4 years ago

I'm affected by this issue too. It's super frustrating; I've been wasting lots of $$ on tracking pointless SQL dependencies in Application Insights and there's no way to disable it!

andrewleader commented 4 years ago

So as @ak98 described, setting the logLevel in host.json doesn't seem to work for my apps (tested by using a function like they provided).

However, if I create a new Azure Function app and deploy that, the host.json logLevel seems to work fine. So it's only existing Azure Functions that are ignoring the host.json...

brettsam commented 4 years ago

@jublair, I want to revisit this -- I've just tried your repro and all filtering is happening as expected for me. One thing I notice is that you have enableDependencyTracking at the wrong level; it needs to be as a property of applicationInsights, not of samplingSettings.
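For illustration, the corrected placement described above would look roughly like this (a sketch, with the other settings omitted):

{
  "logging": {
    "applicationInsights": {
      "enableDependencyTracking": false,
      "samplingSettings": {
        "isEnabled": false
      }
    }
  }
}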

It'd also be good to know what you mean by "I've noticed that these configurations are not reflected in Application Insights logging". Do you mean in Live Metrics, or in the actual log tables? If you mean Live Metrics, you should note that Live Metrics logs are not filtered in any way. These logs are free so we allow the full stream to go through -- we do filter the persisted logs, which is where the filter applies.

For anyone else that is not seeing filtering as they expect, it'd be useful to see exactly what logs you want to remove (make sure not to post any private details if you're copy/pasting logs). Here's a good sample query for the traces table that shows what category and logLevel messages are coming in as (adjust time scale as needed):

traces
| where timestamp > ago(30m)
| extend category = customDimensions.Category
| extend logLevel = customDimensions.LogLevel
| project timestamp, category, logLevel, message
| order by timestamp asc

If you can share the logs that are appearing there that you didn't expect, that will help us track this down. You can also do this for the requests, dependencies, etc. tables (you may need to tweak the query, as message only exists in the traces table).
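For example, a comparable query for the requests table might look like this (a sketch; the projected columns are just illustrative):

requests
| where timestamp > ago(30m)
| extend category = customDimensions.Category
| extend logLevel = customDimensions.LogLevel
| project timestamp, name, resultCode, duration, category, logLevel
| order by timestamp asc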

Also, if you could share your Function App name (either explicitly or privately), I can see if I find anything from the backend logs. That will show us exactly what options you're running with, which could help track down other issues here.

ghost commented 4 years ago

Hi @brettsam, thanks for the info. I've updated my host.json file as per our discussion regarding docs here: https://github.com/MicrosoftDocs/azure-docs/pull/43025

I was checking Live Metrics here, so the appearance of unfiltered logs was a red herring on my part. Are you aware of whether pre-filtered metrics count towards resource costing?

Anyway, I've since re-done our logging structure so we can filter logs across the board using sampling rather than filtering out specific logs using host.json settings, meaning this issue no longer applies to our project.
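For anyone taking the same route, sampling is configured under applicationInsights in host.json roughly like this (the rate below is illustrative only):

{
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 5
      }
    }
  }
}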

brettsam commented 4 years ago

Live Metrics is free, which is why we allow that telemetry to flow unrestricted. Early on there was a lot of confusion where someone had turned off logging (or set it to Error-only) and then seen effectively no invocations in Live Metrics. So we decided to show everything there since it was no cost.

However, we're getting feedback that maybe we show too much, and it's also confusing with respect to filtering in host.json (this case being a good example). I'm not sure exactly what we'll do yet... but you're not the only one confused by this.

avishnyakov commented 4 years ago

Hi team, has there been any progress on this issue? @brettsam @jublair

The host.json documentation for Azure Functions suggests that tracing can be configured per category. It does not seem to work. We are using Azure Functions with the JavaScript runtime and getting tons of telemetry that we cannot suppress.

    "logging": {
        "fileLoggingMode": "debugOnly",
        "logLevel": {
          "Function.MyFunction": "Warning",
          "default": "None"
        },

Doing everything as per the documentation, ending up with things not working, and then having to dig through random GitHub tickets to find the root cause or undocumented behavior is not very productive.

Can you please advise on options here?

brettsam commented 4 years ago

Can you follow up with details that I've asked for here? That would help us track down what you're seeing.

avishnyakov commented 4 years ago

@jublair, I want to revisit this -- I've just tried your repro and all filtering is happening as expected for me. One thing I notice is that you have enableDependencyTracking at the wrong level; it needs to be as a property of applicationInsights, not of samplingSettings.

I didn't get this one. Here is an example host.json for testing purposes.

{
  "logging": {
    "logLevel": {
      "Function.MyFunction": "Warning",
      "default": "Warning"
    },
    "applicationInsights": {
      "httpAutoCollectionOptions": {
        "enableHttpTriggerExtendedInfoCollection": false,
        "enableW3CDistributedTracing": false,
        "enableResponseHeaderInjection": false
      }
    }
  }
}

It'd also be good to know what you mean by "I've noticed that these configurations are not reflected in Application Insights logging".

The AppInsights application has tons of traces coming from the host itself, and therefore, on high-frequency functions, it fills the AppInsights limits super-quickly simply by pushing "trigger was run, trigger running, trigger completed" traces. Even if you log no traces of your own, only warnings and errors, the AppInsights application gets out of control. The associated cost is rather unreasonable.

For anyone else that is not seeing filtering as they expect, it'd be useful to see exactly what logs you want to remove

Anything that has to do with category "Function.MyFunction" and other categories coming from the host itself. Anything that is NOT our own traces, warnings, and exceptions.

The query returns these; that's the trace data which keeps piling up.

johannesoenema commented 4 years ago

traces
| where timestamp > ago(30m)
| extend category = customDimensions.Category
| extend logLevel = customDimensions.LogLevel
| project timestamp, category, logLevel, message
| order by timestamp asc

Same problem here. I ran this query, and since the 20th of February (we didn't release the Azure Function that day, so it wasn't a code change) category and logLevel are empty. Could this be a reason why the host.json configuration is not working?

Example: (screenshot of the query results omitted)

brettsam commented 4 years ago

@johannesoenema / @avishnyakov -- If you could share your Function App name (either explicitly or privately), I can see if I find anything from the backend logs.

avishnyakov commented 4 years ago

@brettsam thank you. I raised a support ticket with Azure instead.

Can you please advise on options to control these bits of logging?

We can see that this ticket was raised on Nov 15, 2019, and it is unclear if there is a solution. With our current understanding, the pragmatic option in some cases is to ditch Azure Functions and consider something else.

johannesoenema commented 4 years ago

@brettsam 2020-02-26T10:35:58.831 [Information] Executing *** (Reason='', Id=22f5f1fe-45a8-4055-b757-790ac4b997be) Region: West Europe

brettsam commented 4 years ago

@johannesoenema -- it looks like you are registering your own Microsoft.ApplicationInsights.TelemetryClient in your Startup class. That could be the reason that the filtering isn't working -- this may be bypassing our built-in filtering logic. Can you share what this service is set up to do? That may help point me in the right direction.

brettsam commented 4 years ago

@avishnyakov -- do you have any details on the ticket you've opened with Azure? I can take a look. Feel free to email me directly; my Microsoft email is in my github profile.

johannesoenema commented 4 years ago

@brettsam Thanks for your quick reply, I have sent you an email regarding the issue.

johannesoenema commented 4 years ago

My issue was resolved with the help of @brettsam. TL;DR: it is no longer needed (nor advised) to register your own telemetry client at startup, as it will override the Azure Function's built-in telemetry client. If you want to add your own telemetry initializer at startup, you can now do it like this:

services.AddSingleton<ITelemetryInitializer>(...);

I hope this will help some other people that have a similar issue. Read more about it here: https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection#logging-services
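As a minimal sketch of that pattern in a FunctionsStartup class (the class name and the stamped property are illustrative, not from the linked docs):

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    // Illustrative initializer: stamps a custom property on every telemetry item.
    public class MyTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            telemetry.Context.GlobalProperties["AppRegion"] = "West Europe";
        }
    }

    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Register only the initializer; do not register your own TelemetryClient,
            // or it will replace the client managed by the Functions host (and its filtering).
            builder.Services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>();
        }
    }
}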

fabiocav commented 4 years ago

@brettsam any additional work on this? Or should this be closed?

avishnyakov commented 4 years ago

Mine is also solved. Kudos to @brettsam for pointing out things we overlooked. Tops!

argium commented 4 years ago

Edit: I think I was able to resolve this by checking the box to delete all files at the destination while publishing.

@brettsam @fabiocav I'm also experiencing LogTrace and LogDebug lines ending up in App Insights despite my host.json settings.

  "version": "2.0",
  "functionTimeout": "20:00:00",
  "extensions": {
    "queues": {
      "maxPollingInterval": "00:00:10",
      "visibilityTimeout": "00:00:30",
      "batchSize": 1,
      "maxDequeueCount": 1,
      "newBatchThreshold": 0
    }
  },
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 2000
      }
    },
    "logLevel": {
      "Microsoft.SecurityAssessment.AzureCompliance": "Information",
      "System.Net.Http.HttpClient": "Warning", 
      "default": "Information"
    }
  }
}

These TRACE rows below should be filtered out and not sent to app insights by the first line in the logLevel block above.

(screenshot of the unexpected TRACE rows omitted)

I have a lot of debug and trace lines and they're not intended to be logged anywhere in production. I'm deploying a debug build, if that's relevant. Function app name is AzureSubComplianceExperimental.

Packages: (screenshot omitted)

    <TargetFramework>netstandard2.0</TargetFramework>
    <AzureFunctionsVersion>v2</AzureFunctionsVersion>

fabiocav commented 4 years ago

@brettsam closing this. Please open another issue if any follow up is required.