
Is debug level logging meant to be like this or have I missed something? #2388

Open TehWardy opened 1 year ago

TehWardy commented 1 year ago

This is a plea to whoever is responsible for the mess that is logging in Azure Functions applications. I am at my wits' end just trying to get simple things to behave in a simple, consistent manner with Azure Functions, and my last-ditch attempt was to rely on debug-level logging calls, which led to this fiasco.

Anticipating the obvious question ("what's my MVP here to reproduce the issue?"), here we go ...

  1. Create a new console app
  2. Modify the project file to this ...

    <Project Sdk="Microsoft.NET.Sdk.Web">

      <PropertyGroup>
        <TargetFramework>net7.0</TargetFramework>
        <AzureFunctionsVersion>v4</AzureFunctionsVersion>
        <OutputType>Exe</OutputType>
        <ImplicitUsings>enable</ImplicitUsings>
      </PropertyGroup>

      <ItemGroup>
        <PackageReference Include="Microsoft.Extensions.Logging.Abstractions" Version="7.0.1" />
      </ItemGroup>

    </Project>
  3. Modify the Program.cs to this ...

    
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    using Microsoft.Extensions.Logging;

    var host = new HostBuilder()
        .ConfigureServices((builder, services) =>
        {
            services.AddLogging(loggingBuilder =>
            {
                loggingBuilder.AddSimpleConsole(options =>
                {
                    options.SingleLine = true;
                    options.TimestampFormat = "hh:mm:ss ";
                });
            });
        })
        .Build();

    // Note: the non-generic ILogger isn't registered by default,
    // so resolve a typed logger instead.
    ILogger log = host.Services.GetRequiredService<ILogger<Program>>();

    log.LogInformation("Hello World!");
    log.LogDebug("This is a debug message");

    host.Run();

 4. Run and note the debug message does not appear
 5. Modify to this ...

var host = new HostBuilder()
    .ConfigureServices((builder, services) =>
    {
        services.AddLogging(loggingBuilder =>
        {
            loggingBuilder.AddSimpleConsole(options =>
            {
                options.SingleLine = true;
                options.TimestampFormat = "hh:mm:ss ";
            });

            loggingBuilder.AddFilter(level => level >= LogLevel.Debug);
        });
    })
    .Build();

ILogger log = host.Services.GetRequiredService<ILogger<Program>>();

log.LogInformation("Hello World!");
log.LogDebug("This is a debug message");

host.Run();

 6. Marvel at the wonder of simple logging in dotnet.

...
Now let's repeat this process, but with Azure Functions ...
 1. Create a new functions app with a timer-triggered function
      - I don't think the timer is specifically required; it's just how I came across this
  2. Modify the function Run method code to this ...

[Function("Function1")] public void Run([TimerTrigger("0 /1 *")] MyInfo myTimer) { log.LogInformation("Hello World!"); log.LogDebug("This is a debug message"); }

You'll also need to inject a logger into the constructor like this ...
    public Function1(ILoggerFactory loggerFactory) =>
        log = loggerFactory.CreateLogger<Function1>();

or like this ...
    public Function1(ILogger<Function1> log) =>
        this.log = log;

but definitely not like this ...

[Function("Function1")] public void Run([TimerTrigger("0 /1 *")] MyInfo myTimer, ILogger log) { log.LogInformation("Hello World!"); log.LogDebug("This is a debug message"); }

... because apparently there's documentation out there, which even confuses ChatGPT, suggesting this works.
From what I've seen, at the very least you'd need some additional DI setup, but I have no idea what it is.
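
(One pattern I've since seen that apparently does work without extra DI setup is pulling a logger off the FunctionContext; a sketch, assuming the GetLogger extension from Microsoft.Azure.Functions.Worker:)

[Function("Function1")]
public void Run([TimerTrigger("0 */1 * * * *")] MyInfo myTimer, FunctionContext context)
{
    // GetLogger is an extension method on FunctionContext in the isolated worker model
    var log = context.GetLogger("Function1");
    log.LogInformation("Hello World!");
    log.LogDebug("This is a debug message");
}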

 3. Run the app, note the lack of any debug output when the function is called
 4. Modify the host builder in the Program.cs file to this ...

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices((builder, services) =>
    {
        services.AddLogging(loggingBuilder =>
        {
            loggingBuilder.AddSimpleConsole(options =>
            {
                options.SingleLine = true;
                options.TimestampFormat = "hh:mm:ss ";
            });

            loggingBuilder.AddFilter(level => level >= LogLevel.Debug);
        });
    })
    .Build();
 5. Note that the console app and the function app are now configured the same way
 6. Run the function app

When the function is called we can see a few weird things ...
 - Double timestamps (only the second is formatted)
 - Double logging calls (formatted differently too)

This is where it gets particularly annoying ...
As documented, the way we are supposed to handle logging in functions apps is by configuring it in the host.json file.
So, since our logging is going to hit the console anyway out of the box for functions apps, we can strip the above code back to ...

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .Build();

... then according to the documentation here ...
   https://learn.microsoft.com/en-us/azure/azure-functions/configure-monitoring?tabs=v2
... we should just be able to add this to our host.json "logging" element ...
"fileLoggingMode": "always",
"logLevel": {
  "default": "Debug",
  "Host": "Error",
  "Function": "Error",
  "Host.Aggregator": "Information"
}
... and achieve the same result ... right?
Apparently not.
From what I can tell, no amount of messing about with this section of the configuration makes any difference to the log output.
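
For reference, the complete host.json I was testing with looked roughly like this (v2 schema). If I'm reading the docs right, individual values should also be overridable in Azure via app settings of the form AzureFunctionsJobHost__logging__logLevel__default:

{
  "version": "2.0",
  "logging": {
    "fileLoggingMode": "always",
    "logLevel": {
      "default": "Debug",
      "Host": "Error",
      "Function": "Error",
      "Host.Aggregator": "Information"
    }
  }
}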

It gets even more confusing ...
After explaining that you can do the above, the same documentation goes on to say:

Azure Functions integrates with Application Insights by storing telemetry events in Application Insights tables. Setting a category log level to any value different from Information will prevent the telemetry to flow to those tables. As outcome, you won't be able to see the related data in Application Insights or Function Monitor tab.

So I can configure logging, but if I do, it won't work in the cloud, inside a framework that is designed for a serverless "cloud environment".

It gets even crazier when we take the "full example" shown here ...
  https://learn.microsoft.com/en-us/azure/azure-functions/functions-host-json#sample-hostjson-file

Which is prefixed with this ...

Other function app configuration options are managed depending on where the function app runs:

Deployed to Azure: in your application settings.
On your local computer: in the local.settings.json file.

Configurations in host.json related to bindings are applied equally to each function in the function app.

You can also override or apply settings per environment using application settings.


... A vague statement that says "oh, you can / might want to put all of this in one of three files", but given that the above behaves the way it does, I'm confused ...
  Why configure logging at all if you're forced to then put it in code anyway?

Generally speaking, I'd argue that most of us want the same thing from logging, which boils down to ...
 - formatting so we can output key information like ...
    - timestamp (formatted)
    - logging level used
    - The source type in the code
    - some message
 - The ability to configure / specify the logging levels we want to see in the logs
    - Optionally, the ability to configure different levels of logging for different "scopes"
 - Consistency
    - If grpc logging calls ignore our configuration entirely, whether set in config or in code, that's not helpful
    - If my business logic and the functions framework log in different formats, that's not helpful
    - If I make calls using two different logging levels and they behave differently, that's not helpful

Am I missing something about how to correctly configure logging in functions apps, or am I just seeing bugs that no one else seems to think are an issue?

I find that logging usually doesn't matter until you have a problem; then it becomes crucial to the entire process of solving that problem, particularly in complex production environments.

And finally ...
I'm in the wild, working on this stuff, and I hit some issue in my functions app;
I think the platform isn't playing ball ...
What do Azure Functions support staff do? They look at the App Insights data in the cloud and are inundated with a swamp of garbage that I can't control or filter. It's injected by the framework, adds no value to me, and yet it's what the support staff ultimately see at the end of the logging pipeline, and it doesn't give them consistent results.

So to summarize ...
 - Logging either logs nothing at all or double-logs "out of the box"
 - It behaves differently depending on the logging level used
 - Formatting seems to be a general issue / not configurable at all
 - Configuration of logging as documented is confusing and simply doesn't appear to work
 - The base logging behavior of a console app and a functions app should be consistent; it's all dotnet
 - App Insights seems to be this magic that's forced on us as developers; it's huge and doesn't solve any day-to-day problem I have
 - Support staff at Azure Functions support are lost and make generic statements like "oh, that's an SDK issue; all we can do is escalate"

bhagyshricompany commented 1 year ago

Thanks for reporting. We will check on our end and update you.

TehWardy commented 1 year ago

@bhagyshricompany Simple suggestion here (I know this is not how it works at the moment, but "why not?" is my thinking) ... When I spin up a web / functions app locally, I get all my logging spat out to the console. Is it unreasonable to expect that same output in the "Log Stream" part of the Azure portal? If not ... maybe there should be a place for that?

This would be a one-stop shop for "like for like", without which this stuff is just a maze of pain we have to chase through to find answers to seemingly simple questions.

What's worse is ... Explained this way, apparently no one at Azure support understands why this is not the case either.

It's not a complaint ... just a keen observation from a fan.

bhagyshricompany commented 1 year ago

Console App vs. Azure Functions Logging: The differences you observed between a console app and Azure Functions logging can be attributed to the underlying infrastructure and the way Azure Functions handles logging. While the console app has a straightforward logging setup, Azure Functions has a more complex logging infrastructure that interacts with other Azure services like Application Insights.

Configuration of Logging: Configuring logging in Azure Functions can be tricky, and sometimes the behavior might not match your expectations. Azure Functions provides various options for logging, including host.json, application settings, and code-based configuration. To ensure consistent logging behavior, it's essential to understand how these configurations interact and override each other.

Application Insights Integration: Application Insights is a powerful tool for monitoring and analyzing your application's behavior in the cloud. However, it may add complexity if you only need basic logging and don't want the additional telemetry data. If you prefer a more straightforward logging approach, you can consider disabling or reducing the verbosity of Application Insights.
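
For example, Application Insights verbosity can usually be dialed down in the host.json logging section; a sketch (the exact settings depend on your host version):

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 5
      }
    },
    "logLevel": {
      "Host.Aggregator": "Warning"
    }
  }
}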

Debugging and Support: When facing issues with Azure Functions or other Azure services, reaching out to Azure Support can sometimes be challenging, especially if they encounter issues with logging or telemetry data. It's essential to provide clear information about the problem, including relevant logs, error messages, and steps to reproduce the issue.

Community and Feedback: As a developer, your feedback on Azure Functions and related services is valuable. If you encounter specific issues with logging or any other aspects of Azure Functions, consider sharing your experiences on the Azure Feedback Forum or GitHub repository for Azure Functions. This can help the development team understand common pain points and improve the service accordingly.

@fabiocav pls comment and validate

TehWardy commented 1 year ago

Thank you for the feedback.

Right now I'm trying to standardize the behavior of the log stream output in the cloud ... the idea being that we can go to that page, set the logging level from the drop-down, and then use the output to help track down complex issues between applications.

I'd just like to get some consistent behavior across the entire Azure platform such that a web app that talks to a functions app using something like Service bus or HTTP triggers both behave the same way.

I thought something as simple as this ...

var builder = WebApplication.CreateBuilder(args);

builder.Configuration
    .AddEnvironmentVariables()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);

builder.Services.AddLogging(loggingBuilder =>
{
    loggingBuilder.AddAzureWebAppDiagnostics();

    loggingBuilder.AddSimpleConsole(options => 
    { 
        options.TimestampFormat = "yyyy-MM-ddTHH:mm:ss "; 
        options.SingleLine = true; 
    });
});

// add other stuff 
var host = builder.Build();
host.Run();

Then some simple configuration in either host.json or appsettings.json (depending on app type) ...

  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    },

Notice a couple of things here ... I added a dependency on Microsoft.Extensions.Logging.AzureAppServices, which provides AddAzureWebAppDiagnostics (Azure App Service diagnostics, rather than App Insights itself). I configured, "as documented", the logging levels I want "per namespace", whilst also specifying a default.

If I spin it up, I would expect debug-level logging from my own code and only warnings / errors from Microsoft's code, except for the overridden namespace at Information level.
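
(For comparison, my understanding is that the code-based equivalent of that configuration section would be something like this:)

builder.Services.AddLogging(loggingBuilder =>
{
    loggingBuilder.SetMinimumLevel(LogLevel.Debug);            // "Default": "Debug"
    loggingBuilder.AddFilter("Microsoft", LogLevel.Warning);   // "Microsoft": "Warning"
    loggingBuilder.AddFilter("Microsoft.Hosting.Lifetime", LogLevel.Information);
});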

This is how it seems to be documented, but this is not how it behaves.

In my web application I get ALL logging up to the level selected in the drop-down on the log streaming page UI. What I would expect is ...

All logging up to the level selected in the drop-down on the UI, but nothing beyond the base configuration defined in the app itself.

... and this disconnect causes me issues. It's essentially impossible for me to get debug-level logging from my own code without getting pages and pages of Microsoft logging in the same stream too.

The worst case, for example: I set the logging level to Debug, where in my own code I expected roughly 3 lines logged out. I got about 160 lines of Microsoft logging.

This tells me the cloud is ignoring my logging configuration. Yet in the console window output locally it behaves perfectly.

For a developer, the worst nightmare is when something behaves differently in the cloud than it does locally. It makes working with the technology a real pain.

My issue boils down to a ton of similar questions ...

... but the bottom line is ...

bhagyshricompany commented 1 year ago

Have you checked the following, for one of the concerns you raised? Viewing cloud log output: to view log output in the cloud environment, similar to what you see in the local console window, check the Azure portal for your app service. Navigate to your web app or function app and look for the "Monitoring" or "Diagnose and solve problems" section. There you should find options to access logs, including the "Log stream" and "Application logs". The log stream option provides real-time streaming of log messages, similar to what you see in the local console.

TehWardy commented 1 year ago

That's what I was talking about here ... https://github.com/Azure/Azure-Functions/issues/2388#issuecomment-1638453889

.. my premise was that "console output should logically match log stream output", ideally. For some reason, though, these things are not the same.

bhagyshricompany commented 1 year ago

Please share the function name, instance id, region, timestamp, etc.

TehWardy commented 1 year ago

You can repro this by deploying what I created in the first post. I have multiple function and web apps that are experiencing this problem.

Since raising this, I have built my own logging provider that pushes logs to a SignalR hub, and I'm getting a very good and consistent logging experience from it.

I'm not sure what isn't clear here?

bhagyshricompany commented 1 year ago

@fabiocav pls comment and validate

brettsam commented 1 year ago

There are a lot of questions here, and it's clearly confusing to more folks than just you, so let me explain a little bit how the dotnet-isolated worker (and all workers) communicates logs back to the Functions Host. I think you have some of this understanding already, but let me just make it clear for anyone else that may be coming to this issue.

The Functions Host can receive logs from the Worker in two ways: over the grpc channel between the Worker and the Host, and by capturing anything the Worker writes to stdout.

In the dotnet-isolated worker, one thing that calling ConfigureFunctionsWorkerDefaults() does is wire up a standard .NET ILoggerProvider to send logs back to the Host over grpc. So when you call LogDebug() from within your Function, this message is sent back through grpc to the Host to log wherever it needs to. When running locally via the Functions Core Tools, this log appears in the Console.

Why didn't LogDebug() work by default?

Now, to get to why LogDebug() doesn't work by default. The .NET logging infrastructure defaults the minimum LogLevel to Information, so Debug messages logged from the Worker will go nowhere by default. As you discovered, modifying host.json does nothing here; those settings are for the Host, not the Worker. You need to do whatever you'd do for any normal .NET console application to enable Debug logs to make it out of the Worker process.

For example, doing this, in addition to allowing Debug logs in host.json, would work:

...
.ConfigureFunctionsWorkerDefaults()
.ConfigureLogging(loggingBuilder => loggingBuilder.SetMinimumLevel(LogLevel.Debug))
...

There are lots of other ways (reading an appsettings.json file, using Configuration, etc.). But it's all typical .NET logging infrastructure at that point.
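
For example, a sketch of the appsettings.json route (note that the file has to be copied to the output directory, and the "Logging" section name here is just the usual convention):

...
.ConfigureFunctionsWorkerDefaults()
.ConfigureAppConfiguration(configBuilder =>
    configBuilder.AddJsonFile("appsettings.json", optional: true))
.ConfigureLogging((context, loggingBuilder) =>
    loggingBuilder.AddConfiguration(context.Configuration.GetSection("Logging")))
...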

This does confuse a lot of folks who expect host.json to work for the Worker as well. We didn't want to do this by default (but maybe we should have), as it tends to couple two things that aren't strictly meant to be coupled -- i.e. host settings are not generally applicable to the Worker. Our goal was to make the Worker as much of a typical console application as possible, but that's clearly made it confusing, especially for those coming from the C# in-proc function world.

There are ways to have host.json apply its logging settings, however. You can do something like this to apply the logging section of host.json to the logging configuration of the Worker:

...
.ConfigureFunctionsWorkerDefaults()        
.ConfigureLogging((context, loggingBuilder) =>
{
    // FYI, there's lots of other ways to do this...
    var hostJsonLoggingSection = new ConfigurationBuilder()
        .AddJsonFile("host.json")
        .Build()
        .GetSection("logging");
    loggingBuilder.AddConfiguration(hostJsonLoggingSection);
})
...

Double logging?

In the example you shared above, you've added an additional Console logger to the Worker. I mentioned above that the Host is monitoring stdout as well as the grpc pipeline. In this case, you're emitting your logs to both of these, which results in each message being logged twice.

Other questions?

I don't think I've answered all the things you mentioned ... but hopefully this is a start. Logging is one of the unfortunately complicated areas in Functions because of the process split between the Host and the Worker. We have some longer-term visions to make this clearer, but those aren't quite ready yet.

In the meantime, please let me know what other questions were in your comment that I can help explain. Hopefully we can use this issue (and others) as a starting point for some documentation improvements.

TehWardy commented 1 year ago

Hi @brettsam,

Initial Response

Thank you for that detailed response to a lot of the points that have been raised here. It's interesting that you're highlighting where the confusion is coming from and hinting that there are, of course, technical reasons for it all.

My initial knee-jerk reaction, of course, was to go into "as a consumer, why do I need to be concerned with this?" mode. Of course, the obvious answer is "well, it's just professional to know what you're dealing with".

I think that logging is essentially used in 2 key ways (feel free to correct me if you know something I don't, though) ...

To my mind these things have very different needs, but fundamentally we as developers need to ensure we are logging quality information in order to get quality information back out. This is where my question originally came from: whilst I have felt that I can basically "scatter-gun" and log "all the things", I don't feel I have the fine-grained control over it that I should, and that comes from this "sense" that there is (as you put it) "not one source of data".

Breaking it down

As you have pointed out ...

  1. There are some deliberate decisions made by the development teams that result in differences between local and cloud behaviors.
  2. I can set up my application exactly how I want to get the desired output on the console.
  3. The cloud "log stream" page would not, "by design", match, because of the multiple sources feeding a stream of log data.
  4. Configs are also, it seems, not consistently consumed (something I was not fully aware of).

My thinking (as a logically minded developer) gave me the incorrect impression that log stream = local stdout. Given that this premise is wrong, what I would like to see is that I CAN achieve it with some simple setup.

My unified recommendation for all dotnet "web and function applications"

appsettings.json ...

  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
      ...
    },

Program.cs ...

...
.ConfigureLogging((context, loggingBuilder) =>
{
    // SourceLevelsFrom / LoggingLevelSource are proposed APIs -- they don't exist today
    loggingBuilder.SourceLevelsFrom = LoggingLevelSource.Config;
    loggingBuilder.AddConsole();
    // add any other providers here (purely optional)
})
...

The result of this would be ...

Other Ideas

Is it worth adding a "Standard Output" page to the cloud environment for functions that shows console output "as it would be seen locally", if the log stream page is not the place for this?

I would also urge Microsoft to consider making this stuff behave such that there is only one way of doing all of it. The additional coding and documentation overhead must be a pain for the dev team, who are constantly being asked questions; and when people genuinely get stuck and reach out to Azure Support, the support staff seem just as confused by it all.

Final thoughts

As always, thank you for the feedback, and it's great to see development staff engaging instead of this being a standard "we have a feedback portal over here that will take you 20 minutes to find the right place to post".

What I've done so far ... I built a custom logging provider that pushes messages to a SignalR hub in my app. It couldn't have been more than 100 lines of code or so, but it literally mirrors exactly what I get on the console locally. So when I deploy my applications into the cloud now, I just load up a web page that has the SignalR client set up for that.

Being SignalR, I also get the added benefit that I can now stream "cross-application" logs (for example, for an API call that results in a workflow execution, I can log the API call details and then log the goings-on in the workflow engine to the same stream).

This was, in my case, the "easiest" way around the problems I was having, and I accept that it's not a one-size-fits-all solution. But the reason I mention it is that what the log stream page does is fairly similar ... so could we set up, in our application logic, a subscription to an application's stdout stream(s) via a cloud endpoint, for apps already in the cloud?

This might be something that already exists that I'm missing; if it does, that might be the jumping-off point for my suggestions above.
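
For reference, the provider I mentioned is roughly this shape (a trimmed sketch; the hub URL and the "Log" hub method are placeholders from my own app, and the connection is assumed to be started during app startup):

using System;
using Microsoft.AspNetCore.SignalR.Client;
using Microsoft.Extensions.Logging;

public sealed class SignalRLoggerProvider : ILoggerProvider
{
    private readonly HubConnection _connection;

    public SignalRLoggerProvider(HubConnection connection) => _connection = connection;

    public ILogger CreateLogger(string categoryName) =>
        new SignalRLogger(_connection, categoryName);

    public void Dispose() { }

    private sealed class SignalRLogger : ILogger
    {
        private readonly HubConnection _connection;
        private readonly string _category;

        public SignalRLogger(HubConnection connection, string category) =>
            (_connection, _category) = (connection, category);

        public IDisposable? BeginScope<TState>(TState state) where TState : notnull => null;

        public bool IsEnabled(LogLevel logLevel) => logLevel != LogLevel.None;

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
            Exception? exception, Func<TState, Exception?, string> formatter)
        {
            if (!IsEnabled(logLevel)) return;

            // Mirror the local console format, then fire-and-forget to the hub.
            var line = $"{DateTime.UtcNow:yyyy-MM-ddTHH:mm:ss} {logLevel} {_category} {formatter(state, exception)}";
            _ = _connection.SendAsync("Log", line);
        }
    }
}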

Again, many thanks for the ongoing support here; it's refreshing to be able to engage on this.

springcomp commented 3 days ago

FYI

To complete the post from @brettsam: in #2531, I illustrate how to reliably take full control over your logging experience. This also aligns with the latest practices and the latest versions of the NuGet packages.


For instance, latest versions of the Microsoft.Azure.Functions.Worker.ApplicationInsights package adjusts the logging behaviour so that logs are no longer relayed to the host but instead are emitted directly from your worker process to Application Insights.