Azure / azure-functions-host

The host/runtime that powers Azure Functions
https://functions.azure.com
MIT License

[BUG] Extension suddenly logging unwanted information messages into log stream of Functions app #8973

Closed: jsquire closed this issue 1 year ago

jsquire commented 1 year ago

Issue Transfer

This issue has been transferred from the Azure SDK for .NET repository, #33008.

Please be aware that @TheCakeMonster is the author of the original issue and include them for any questions or replies.

Details

The last deployment we did was 27 days ago. These log entries were not present at that time; they only started appearing in the last 10-14 days.

Describe the bug

We have an Azure functions app (.NET 6, running in the original in-process deployment model on functions V4) that has been in production for about 18 months. Suddenly, in the last 10 days, we have started getting an overload of unwanted logging of messages at level Information, which appear to be from Azure.Storage.Blobs and/or Azure.Storage.Queues.

The default log level specified in host.json is Warning. host.json has not changed for about 6 weeks.
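
For reference, the relevant fragment of a host.json with that Warning default would look roughly like this (a sketch of an assumed file; the app's actual host.json is not shown in this issue):

{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Warning"
    }
  }
}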

These new log entries appear to have been triggered by a change in Azure - a change to the platform. We haven't changed anything in our app for over a month, but suddenly these logs have started appearing in the log stream. These could dramatically increase our App Insights telemetry costs - but I haven't found time to check yet. There is a lot of information in the logs, and we don't want any of it, so my hope is that we are not suddenly having to pay for it because of a change to the behaviour of Azure's runtime.

Whether it is hitting App Insights or not, the effect on the log stream is fairly dramatic - in a bad way. It is now very difficult - almost impossible - to find the useful log entries from our app (which we record at level Information) in the log stream because of all of the noise from the infrastructure.

I think there might be a bug in the way entries are being filtered, or a change in behaviour in the way the log stream works, so that it no longer uses host.json settings to filter the log stream before display.

What follows is a small example of the logs we are seeing. I have redacted a number of the log entry values, as I don't want to expose anything sensitive. My redactions are probably excessive, but better to redact too much than too little. Hopefully the format of the messages, rather than the content, will help you understand what these are and find why they have started appearing in our carefully filtered log stream in the last 10 days.

2022-12-12T15:35:23Z   [Information]   Request <redacted> HEAD https://storageaccount<redacted>.blob.core.windows.net/azure-webjobs-hosts/locks/<redacted>/host
x-ms-version:2021-08-06
Accept:application/xml
x-ms-client-request-id:<redacted>
x-ms-return-client-request-id:true
User-Agent:azsdk-net-Storage.Blobs/12.13.0,(.NET 6.0.11; Microsoft Windows 10.0.14393)
x-ms-date:Mon, 12 Dec 2022 15:35:22 GMT
Authorization:REDACTED
client assembly: Azure.Storage.Blobs
2022-12-12T15:35:23Z   [Information]   Response <redacted> 200 OK (00.0s)
Accept-Ranges:bytes
ETag:<redacted>
Server:Windows-Azure-Blob/1.0,Microsoft-HTTPAPI/2.0
x-ms-request-id:<redacted>
x-ms-client-request-id:<redacted>
x-ms-version:2021-08-06
x-ms-meta-FunctionInstance:REDACTED
x-ms-creation-time:Fri, 22 Oct 2021 13:03:11 GMT
x-ms-lease-status:locked
x-ms-lease-state:leased
x-ms-lease-duration:fixed
x-ms-blob-type:BlockBlob
x-ms-server-encrypted:true
Date:Mon, 12 Dec 2022 15:35:22 GMT
Content-Length:0
Content-Type:application/octet-stream
Content-MD5:<redacted>
Last-Modified:Mon, 12 Dec 2022 02:57:45 GMT

I think this is related to https://github.com/Azure/azure-functions-dotnet-worker/issues/796 and https://github.com/Azure/azure-sdk-for-net/issues/31967, which are about whether these entries are really informational. I don't think they are - this is debugging data from underlying infrastructure in which we are not interested, and it is not really appropriate to log it at the Information level. However, that's a separate issue. I have chosen to report this because there has been a change in behaviour in the last 10 days or so.

Expected behavior

Logging should not increase significantly because of deployments to the underlying cloud platform.

These log entries did not appear a few weeks ago, so they should not be there now - but suddenly they are.

Actual behavior

A significant increase in the number of entries in the log stream, making it very hard to find what we want in amongst the noise generated by the underlying runtime/platform.

Reproduction Steps

I am guessing this affects many Functions apps. I haven't got time to create a new one at the moment, but my guess is that these log entries will appear for any .NET functions app targeting .NET 6 using the in-proc model and using blob storage in any way. It may also affect lots of other deployment models, I'm not sure.

Environment

.NET 6.0, Azure Functions V4, older in-proc (in-process) deployment model.

Three of our functions have HTTP triggers; one has an Azure Storage Queue trigger. The functions using HTTP triggers seem to have been affected, and the one with the queue trigger might have been as well. I can't quite decipher the logs to confirm that at the moment because they are so noisy.

jorge-datactika commented 1 year ago

It just started to happen to me... Haven't made any releases in months! Please help!

It looks like these logs are only showing in the Log Stream, not in Application Insights.

RetiredDotaPlayer commented 1 year ago

Same here... I have tried adding these log settings in host.json, but it isn't working. The Information-level flood still exists, making the log stream unreadable.

      "Host.Results": "Warning",
      "Azure.Core": "Warning",
      "Azure.Storage.Blobs": "Warning",
DomPurnell commented 1 year ago

Yes - this is causing me pain too. .NET 6, Functions v4.

d4046949 commented 1 year ago

Yep - causing issues here too! It's making the logs so noisy that we can't actually work with them. Any workaround? I tried changing the log settings as @RetiredDotaPlayer mentions and again, no luck either!

mincheHuang commented 1 year ago

I encountered the same issue with the blob service when I included "Microsoft.Azure.WebJobs.Extensions.Storage" or "Microsoft.Azure.WebJobs.Extensions.Storage.Blobs" version 5.0.0 / 5.0.1, but when I downgraded "Microsoft.Azure.WebJobs.Extensions.Storage" to version 4.0.4, those annoying repeated logs stopped.

bw-adriangould commented 1 year ago

Hi - we have the same issue. Very noisy logging to the Log Stream in the Azure portal for all of our Azure Functions V4 apps. The log stream is so chatty and noisy that it's almost impossible to monitor anything useful, and yet nothing has changed on our end. Please advise.

SomaAmbati commented 1 year ago

Yes, we have the same issue too.

sdean3842 commented 1 year ago

I came across this issue in the last few days whilst trying to lock down Azure Function v4 (isolated mode) chattiness ahead of a new client project, prompted by having previously witnessed AI cost horror. While it doesn't affect us directly at present, others might appreciate a response from Microsoft Billing on any plan to compensate for excess AI telemetry ingestion.

thomasf1234 commented 1 year ago

This has significantly impacted our team's ability to diagnose issues. Is there any workaround available? So many lines are written that, after trying to use it for several weeks, we can't correlate what we're seeing in real time, which makes it too difficult to use. This feature has been priceless to us over the last year for instant feedback on functions, and we have gotten used to it being simple and available, but we are now struggling with this vital part of our monitoring.

danielvaneck commented 1 year ago

We are also facing this issue with .NET 7 and Azure Functions v4. I can see the AI logging has increased substantially, and this is contributing to a much higher bill for ingestion.

I have tried the suggested log level fixes in the host.json file; however, these are not being respected by the blob storage logging and I can still see the flood of logs. Is there any ETA on a fix for this?

joachimdalen commented 1 year ago

We experienced the same issue. In addition to logging to Application Insights, our functions are configured with Serilog to send to different sources. By setting the following configuration, we were able to filter these out:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    // Drop everything below Warning for the "Azure" category and its sub-categories
    .MinimumLevel.Override("Azure", LogEventLevel.Warning)
    .CreateLogger();

d4046949 commented 1 year ago

Does the above work with Azure Functions? I don't use Serilog and hoped for a solution that would involve configuration changes in the Host.json

joachimdalen commented 1 year ago

Does the above work with Azure Functions? I don't use Serilog and hoped for a solution that would involve configuration changes in the Host.json

@d4046949 We are using it in Azure Functions. All our attempts at changing the host.json file failed, no matter what namespace we defined. Already had the Serilog implementation in our apps, so it wasn't a big change for us. After posting the last comment, we also had to add some additional namespaces to get rid of the extra logging from Durable Functions.
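
A sketch of the kind of configuration described above, assuming Serilog is already wired in; the Durable Functions category names below are illustrative guesses and should be replaced with whatever categories actually show up in your traces:

using Serilog;
using Serilog.Events;

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    // Quiet the Azure SDK clients (covers Azure.Core, Azure.Storage.Blobs, Azure.Storage.Queues, ...)
    .MinimumLevel.Override("Azure", LogEventLevel.Warning)
    // Example Durable Functions categories (assumed names - check your own logs)
    .MinimumLevel.Override("DurableTask.Core", LogEventLevel.Warning)
    .MinimumLevel.Override("DurableTask.AzureStorage", LogEventLevel.Warning)
    .MinimumLevel.Override("Host.Triggers.DurableTask", LogEventLevel.Warning)
    .CreateLogger();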

d4046949 commented 1 year ago

I don't suppose you know how to configure this for the default logger?

I've got the IoC installed and set up, but I'm at a loss as to how to configure the default logger to do what you're doing.
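
For readers with the same question: in the in-process model, category filters can also be registered in code from a FunctionsStartup class. A minimal sketch, assuming the Microsoft.Azure.Functions.Extensions and Microsoft.Extensions.Logging packages and a hypothetical MyApp namespace; note that, per the maintainers' explanation further down, such filters were not applied to the Live Metrics / log stream view at the time:

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

[assembly: FunctionsStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddLogging(logging =>
            {
                // Suppress Information-level request/response chatter from the Azure SDK categories
                logging.AddFilter("Azure.Core", LogLevel.Warning);
                logging.AddFilter("Azure.Storage.Blobs", LogLevel.Warning);
                logging.AddFilter("Azure.Storage.Queues", LogLevel.Warning);
            });
        }
    }
}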

codylittle commented 1 year ago

Has anyone found a resolution for this? I'm also experiencing this using Python.

pjmvp commented 1 year ago

This is quite a nuisance! Surprised it hasn't been looked at.

We're getting a lot of unwanted logs like this:

Request [5c30e4c2-30ef-4c60-89e7-c8c850f1b597] PUT https://(REDACTED)
x-ms-lease-action:renew
x-ms-lease-id:d011227a-a939-9337-ebd1-9480e1eb64be
x-ms-version:2021-08-06
Accept:application/xml
x-ms-client-request-id:0000e4c2-30bf-4960-89e7-c8c850f1b111
x-ms-return-client-request-id:true
User-Agent:azsdk-net-Storage.Blobs/12.13.0,(.NET 6.0.13; Microsoft Windows 10.0.14393)
x-ms-date:Tue, 07 Mar 2023 19:56:17 GMT
Authorization:REDACTED
client assembly: Azure.Storage.Blobs

(among others...) These pollute my Application Insights traces, not to mention the additional cost of ingesting these logs.

I've tried all sorts of combinations in my host.json's "logging" section and nothing worked:

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    },
    "logLevel": {
      "default": "Information",
      "Host.Results": "Information",
      "Function": "Information",
      "Host.Aggregator": "Information",
      // HERE:
      "Azure.Core": "Warning",
      "Azure.Storage.Queues": "Warning",
      "Azure.Storage.Blobs": "Warning"
    }
  }
}

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    },
    "logLevel": {
      "default": "Information",
      "Host.Results": "Information",
      "Function": "Information",
      "Host.Aggregator": "Information",
      "Azure.Core": "Warning",
      "Azure.Storage.Queues": "Warning",
      "Azure.Storage.Blobs": "Warning"
    },
    // HERE:
    "AzureAppServicesBlob": {
      "LogLevel": {
        "Default": "Information",
        "Microsoft": "Error",
        "System": "Error"
      }
    }
  }
}

...and more.

Also seen it mentioned:

How can we turn off these logs without discarding Application Insights entirely?!

spearzy commented 1 year ago

This is also a security issue: request parameters such as bearer tokens are being written to the log stream output.

leidegre commented 1 year ago

I can only concur with what has been said already. I'm looking at the log stream right now and all I see is:

2023-04-12T06:18:58Z   [Information]   Request [133d8023-09bd-4cc0-938d-277314594baa] PUT https://...blob.core.windows.net/azure-webjobs-hosts/locks/.../host?comp=lease
x-ms-lease-action:renew
x-ms-lease-id:00000000-0000-0000-0000-000026dc43da
x-ms-version:2021-08-06
Accept:application/xml
x-ms-client-request-id:133d8023-09bd-4cc0-938d-277314594baa
x-ms-return-client-request-id:true
User-Agent:azsdk-net-Storage.Blobs/12.13.0,(.NET 6.0.15; Linux 5.10.102.2-microsoft-standard #1 SMP Mon Mar 7 17:36:34 UTC 2022)
x-ms-date:Wed, 12 Apr 2023 06:18:57 GMT
Authorization:REDACTED
client assembly: Azure.Storage.Blobs
2023-04-12T06:18:58Z   [Information]   Response [133d8023-09bd-4cc0-938d-277314594baa] 200 OK (00.0s)
ETag:"0x8DB3B1C02F3DCEF"
Server:Windows-Azure-Blob/1.0,Microsoft-HTTPAPI/2.0
x-ms-request-id:165bc704-901e-0053-1906-6d4ac9000000
x-ms-client-request-id:133d8023-09bd-4cc0-938d-277314594baa
x-ms-version:2021-08-06
x-ms-lease-id:00000000-0000-0000-0000-000026dc43da
Date:Wed, 12 Apr 2023 06:18:57 GMT
Content-Length:0
Last-Modified:Wed, 12 Apr 2023 06:06:09 GMT

Over and over and over again. That is 30 lines of noise every second or two.

The problem here is that the Azure.Core package is logging all requests with log level information.

https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/core/Azure.Core/src/Diagnostics/AzureCoreEventSource.cs#L57-L61

Log filtering doesn't apply to the log stream, so there's no point in trying to filter these out. The only thing we can do is to change the log level to warning in the log stream view.

Which team is responsible for the log stream? Can we petition them to add more filtering capabilities so that you can more easily get away from this problem? The real problem is that all the polling the Azure Functions host does ruins the log stream feature for everyone. It is more or less unusable even for development work (never mind a production setting); it's just too spammy. Couldn't the log stream work with category filters too? Not that it should pick up the host.json config, but it should at least allow you to filter out unwanted output.
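
As background on the mechanism: the request/response entries above are emitted by Azure.Core's EventSource and surface under the "Azure.Core" logger category, which is why that category is the knob to turn down. A minimal sketch, for a plain console or local debugging context rather than the Functions host, showing the Azure SDK's own listener capped at Warning:

using System.Diagnostics.Tracing;
using Azure.Core.Diagnostics;

// Forward Azure SDK EventSource events at Warning and above to the console;
// the Information-level request/response logging shown above is not forwarded.
using var listener = AzureEventSourceListener.CreateConsoleLogger(EventLevel.Warning);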

mfdanb commented 1 year ago

Any official updates on this please? This issue single-handedly killed the log streaming feature of AppInsights for us, it is just unusable at this point.

alpersilistre commented 1 year ago

Any official updates on this please? This issue single-handedly killed the log streaming feature of AppInsights for us, it is just unusable at this point.

Yes, I'm also waiting for this to be resolved. We can no longer use any App Insights feature below the Warning level on our Web Jobs and Function Apps. This has cost hundreds of euros in a couple of days, and now we need to filter with this:

.MinimumLevel.Override("Azure", LogEventLevel.Warning)

This repo is getting commits, but no one has taken any action to resolve this. An explanation from someone would be fine; at least we'd know that someone is looking at it.

to-fi commented 1 year ago

Any news and/or workarounds available? Please, can someone give a short update on this?

skotl commented 1 year ago

Same issue and plea here. Disappointing none of the maintainers have picked this up in six months.

to-fi commented 1 year ago

@soninaren any news on that ?

mikeasharp commented 1 year ago

Bump.

I'm getting noise in the log stream every 4 seconds. Makes the log stream unusable. All host.json changes are ignored.

leidegre commented 1 year ago

I understand why this gets discussed here, but I'm quite certain this is the wrong place; I don't think the people responsible for Azure Functions are also responsible for the log streaming feature. At least that would be my guess, but maybe we can route this to the App Service people, who I believe can do something about it? It is very unfortunate that the Azure.Storage.* packages are this verbose given how they are being used.

fabiocav commented 1 year ago

Investigation for this is in progress. We'll post updates once we have more information.


Nice-Major commented 1 year ago

Hello @fabiocav, good day. I hope you are having a good day - are there any updates on this issue? I look forward to hearing from you.

fabiocav commented 1 year ago

@surgupta-msft is actively working on this. We'll continue to provide updates as we make progress here.

brettsam commented 1 year ago

@surgupta-msft and I have been looking at this and I wanted to provide an update on what we're planning.

First, let me explain what's happening. A long time back when we first added App Insights support to Functions, we decided that we did not want the logging filters set in host.json to apply to the Live Metrics stream. The idea was that you could turn off all logging for an application but still jump on Live Metrics in the portal and see how things were behaving. Because Live Metrics is free, we didn't think it'd matter much. We also didn't think of the perf implications. In retrospect, this was likely a confusing decision, and it has caused some issues throughout the years.

But to be clear -- if you add a filter to host.json and see those logs in Live Metrics, it does not necessarily mean they are also being persisted in App Insights. You'd need to go to the actual Logs in App Insights to check there. If you are seeing the filters not working as expected in the persisted layer, please let us know. Everything we've tested with the filters does, indeed, filter there correctly.

With a recent release of the bindings, Functions and WebJobs updated their dependencies on the Azure SDK. This new dependency introduced a wave of new logs at the Information level, resulting in a flood of logs with the "Azure.Core" category. A lot of reports came in that these are unfilterable -- however for everything we see they are filterable, just not for the Live Metrics view.

This behavior isn't something we can change by default until a new major version of the Functions host. So instead, we're going to:

  1. Introduce a new property that you can set in host.json (something like EnableLiveMetricsFilters) which will then allow the filters to be applied at the Live Metrics level. This would make log filtering in the Live Metrics view match those that are persisted. We suspect there'll also be a perf benefit here although we've not measured it. Then, adding "Azure.Core": "Warning" to the log filters should clearly show in both Live Metrics and the persisted logs.
  2. Update our new project templates to enable this setting (along with a "Azure.Core": "Warning" log filter). That way it's clear that these exist and it's easy enough to switch them. But there's been so much confusion around these that we really want them front-and-center.

We hope this addresses the issue and provides some background. Please let us know if something here doesn't seem to address the issue that you've been seeing in this thread.
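
For reference, once the change ships, opting in would presumably look something like the following host.json fragment; the property name and placement are taken from the description above and may differ in the final release:

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "enableLiveMetricsFilters": true
    },
    "logLevel": {
      "default": "Information",
      "Azure.Core": "Warning"
    }
  }
}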

pjmvp commented 1 year ago

@brettsam, thank you for getting back to us. I haven't tested this yet, but I'm a bit confused:

it does not necessarily mean they are also being persisted in App Insights. You'd need to go to the actual Logs in App Insights to check there. If you are seeing the filters not working as expected in the persisted layer, please let us know.

It has been a while, but I'm relatively sure the logs were indeed persisted in Application Insights. After all, the way I noticed something was wrong was when my Azure bill started to skyrocket, and the logs were the culprit. Correct me if I'm wrong, but if I was billed for those logs, it means they were ingested?

I will have to test it out soon.

Regarding @surgupta-msft 's post in this issue:

Changes have been merged. Once Host is released, will release the templates and update public documentation describing the new setting - EnableLiveMetricsFilters.

Where can we find out when the host has been released?

Thank you.

TheCakeMonster commented 1 year ago

Thanks, that additional property to apply the filtering to Live Metrics sounds great.

I agree that there is confusion about the difference between Live Metrics (the log stream in particular) and App Insights. I raised this issue because of the noise in the log stream - part of Live Metrics - rather than because the filters were not working for App Insights.

I look forward to the release of the new version, I will be pressing that new property into action as soon as it's available!

I'm not sure where to look for news of releases of the Functions host. Is there a blog we can follow, or something similar? Is it visible somewhere in the Azure Portal?

pjmvp commented 1 year ago

For future reference, this is what my host.json looks like now with regard to logging:

"logging": {
    "logLevel": {
      "default": "Information",
      "Host.Results": "Information",
      "Function": "Information",
      "Host.Aggregator": "Information",
      "Azure.Core": "Warning",
      "Azure.Storage.Queues": "Warning",
      "Azure.Storage.Blobs": "Warning"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Request;Exception"
      },
      "enableDependencyTracking": true
    }
  }

So far, I haven't seen any excessive logging and my costs haven't risen 👍

smriti2901 commented 11 months ago

The verbose logging in our WebJobs stopped after adding this to the Serilog logger configuration: .MinimumLevel.Override("Azure.Core", LogEventLevel.Warning)

golfalot commented 9 months ago

This brought me happiness in my .NET 6 WebJob console app:

  builder.ConfigureLogging((context, b) =>
  {
      b.AddConsole();
      b.AddApplicationInsightsWebJobs(options => options.InstrumentationKey = instrumentationKey);
      // Default minimum level for all categories (empty string = default filter)
      b.AddFilter("", LogLevel.Information);
      // Suppress the Azure SDK request/response chatter
      b.AddFilter("Azure.Core", LogLevel.Warning);
  });

varkey98 commented 6 months ago

Can someone please confirm whether this still exists? I tried adding the attached host.json in my in-process v4 function (HTTP trigger) but didn't see any request/response information in the live stream.

"logging": {
    "logLevel": {
      "default": "Trace",
      "Azure.Core": "Trace",
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Request;Exception"
      },
      "enableDependencyTracking": true
    }
  }