TehWardy opened 1 year ago
Thanks for reporting. We'll check at our end and update you.
@bhagyshricompany Simple suggestion here (I know this is not how it works at the moment, but "why not?" is my thinking) ... When I spin up a web / functions app locally, I get all my logging spat out to the console. Is it unreasonable to expect that same output in the "Log Stream" part of the Azure portal? If not ... maybe there should be a place for that?
This would be a one-stop shop for a "like for like" comparison, without which this stuff is just a maze of pain we have to chase through to find answers to seemingly simple questions.
What's worse: explained this way, apparently no one at Azure support understands why this is not the case either.
It's not a complaint ... just a keen observation from a fan.
Console App vs. Azure Functions Logging: The differences you observed between a console app and Azure Functions logging can be attributed to the underlying infrastructure and the way Azure Functions handles logging. While the console app has a straightforward logging setup, Azure Functions have a more complex logging infrastructure that interacts with other Azure services like Application Insights.
Configuration of Logging: Configuring logging in Azure Functions can be tricky, and sometimes the behavior might not match your expectations. Azure Functions provides various options for logging, including host.json, application settings, and code-based configuration. To ensure consistent logging behavior, it's essential to understand how these configurations interact and override each other.
Application Insights Integration: Application Insights is a powerful tool for monitoring and analyzing your application's behavior in the cloud. However, it may add complexity if you only need basic logging and don't want the additional telemetry data. If you prefer a more straightforward logging approach, you can consider disabling or reducing the verbosity of Application Insights.
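To make the "disabling or reducing the verbosity" point concrete, a host.json fragment along these lines (values are illustrative; the keys follow the documented host.json logging schema) trims Application Insights telemetry while keeping basic logging:

```json
{
  "logging": {
    "logLevel": {
      "default": "Information"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 5
      }
    }
  }
}
```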
Debugging and Support: When facing issues with Azure Functions or other Azure services, reaching out to Azure Support can sometimes be challenging, especially if they encounter issues with logging or telemetry data. It's essential to provide clear information about the problem, including relevant logs, error messages, and steps to reproduce the issue.
Community and Feedback: As a developer, your feedback on Azure Functions and related services is valuable. If you encounter specific issues with logging or any other aspects of Azure Functions, consider sharing your experiences on the Azure Feedback Forum or GitHub repository for Azure Functions. This can help the development team understand common pain points and improve the service accordingly.
@fabiocav please comment and validate.
Thank you for the feedback.
Right now I'm trying to standardize the behavior of the log stream output in the cloud ... the idea being that we can go to that page, set the logging level from the drop-down, and then use the output to help track down complex issues between applications.
I'd just like to get some consistent behavior across the entire Azure platform such that a web app that talks to a functions app using something like Service bus or HTTP triggers both behave the same way.
I thought something as simple as this ...
var builder = WebApplication.CreateBuilder(args);
builder.Configuration
    .AddEnvironmentVariables()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);

builder.Services.AddLogging(loggingBuilder =>
{
    loggingBuilder.AddAzureWebAppDiagnostics();
    loggingBuilder.AddSimpleConsole(options =>
    {
        options.TimestampFormat = "yyyy-MM-ddTHH:mm:ss ";
        options.SingleLine = true;
    });
});
// add other stuff
var host = builder.Build();
host.Run();
Then some simple configuration in either host.json or appsettings.json (depending on app type) ...
"Logging": {
    "LogLevel": {
        "Default": "Debug",
        "Microsoft": "Warning",
        "Microsoft.Hosting.Lifetime": "Information"
    }
}
Notice a couple of things here ... I added a dependency on Microsoft.Extensions.Logging.AzureAppServices to enable the AddAzureWebAppDiagnostics() call above. I configured, "as documented", the logging levels I want per namespace whilst also specifying a default.
If I spin it up, I would expect to get debug-level logging from my own code and only warnings/errors from Microsoft's code, except for the overridden namespace at Information level.
This is how it seems to be documented, but this is not how it behaves.
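For reference, the way .NET resolves those per-namespace rules can be reproduced in a plain console app. This is a minimal sketch using only Microsoft.Extensions.Logging and Microsoft.Extensions.DependencyInjection; the category names are illustrative. The most specific matching category prefix wins:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

using var services = new ServiceCollection()
    .AddLogging(builder =>
    {
        builder.AddSimpleConsole();
        builder.SetMinimumLevel(LogLevel.Debug);                               // "Default": "Debug"
        builder.AddFilter("Microsoft", LogLevel.Warning);                      // "Microsoft": "Warning"
        builder.AddFilter("Microsoft.Hosting.Lifetime", LogLevel.Information); // override
    })
    .BuildServiceProvider();

var factory = services.GetRequiredService<ILoggerFactory>();

// "MyApp.Worker" matches no rule, so the Default (Debug) applies: emitted.
factory.CreateLogger("MyApp.Worker").LogDebug("debug from my own code");

// "Microsoft.AspNetCore.Hosting" matches the "Microsoft" rule (Warning): suppressed.
factory.CreateLogger("Microsoft.AspNetCore.Hosting").LogInformation("framework noise");

// The more specific "Microsoft.Hosting.Lifetime" rule (Information) wins: emitted.
factory.CreateLogger("Microsoft.Hosting.Lifetime").LogInformation("lifetime event");
```

Locally this behaves exactly as the JSON configuration above documents; the issue described below is that the cloud log stream does not appear to honour it.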
In my web application I get ALL logging up to the level selected in the drop-down on the log streaming page UI. What I would expect is:
All logging up to the level configured in the drop-down on the UI, but not beyond the base configuration defined in the app itself.
... and this disconnect causes me issues. It's essentially impossible for me to get debug-level logging from my own code without getting pages and pages of Microsoft logging in the same stream too.
The worst case, for example: I set the logging level to Debug, where in my own code I expected roughly 3 lines logged out. I got about 160 lines of Microsoft logging.
This tells me the cloud is ignoring my logging configuration. Yet in the console window output locally it behaves perfectly.
For a developer, the worst nightmare we have is when something behaves differently once deployed than it does in our local environment. It makes working with the technology a nightmare.
My issue boils down to a ton of similar questions ...
... but the bottom line is ...
Have you checked the following? It addresses one of the concerns you raised. Viewing Cloud Log Output: To view the log output in the cloud environment, similar to what you see in the local console window, check the Azure portal for your app service. In the Azure portal, navigate to your web app or function app and look for the "Monitoring" or "Diagnose and solve problems" section. There you should find options to access logs, including the "Log stream" or "Application logs". The log stream option provides real-time streaming of log messages, similar to what you see in the local console.
That's what I was talking about here ... https://github.com/Azure/Azure-Functions/issues/2388#issuecomment-1638453889
... my premise was that "console output logically should match log stream output", ideally. For some reason, though, these things are not the same.
Please share the function name, instance ID, region, timestamp, etc.
You can repro this by deploying what I create in the first post. I have multiple functions and web apps that are experiencing this problem.
Since raising this I built my own logging provider that pushes logs to a SignalR hub, and I'm getting a very good and consistent logging experience from it.
I'm not sure what isn't clear here?
@fabiocav please comment and validate.
There's a lot of questions here and it's clearly confusing to more folks than just you, so let me explain a little bit how the dotnet-isolated worker (and all workers) communicate logs back to the Functions Host. I think you have some of this understanding already, but let me just make it clear for anyone else that may be coming to this issue.
The Functions Host can receive logs from the Worker in two ways: by monitoring the Worker's stdout, and via the grpc channel between the Worker and the Host.
In the dotnet-isolated worker, one thing that calling ConfigureFunctionsWorkerDefaults() does is wire up a standard .NET ILoggerProvider to send logs back to the Host over grpc. So when you call LogDebug() from within your Function, this message is sent back through grpc to the Host to log wherever it needs to. When running locally via the Functions Core Tools, this log appears in the Console.
Why doesn't LogDebug() work by default?
Now, to get to why LogDebug() doesn't work by default: the .NET logging infrastructure defaults the minimum LogLevel to Information. So logging Debug messages from the Worker will go nowhere by default. As you discovered, modifying host.json does nothing here; those settings are for the Host, not the Worker. You need to do whatever you'd do for any normal .NET console application to enable Debug logs to make it out of the Worker process.
For example, doing this, in addition to allowing Debug logs in host.json, would work:
...
.ConfigureFunctionsWorkerDefaults()
.ConfigureLogging(loggingBuilder => loggingBuilder.SetMinimumLevel(LogLevel.Debug))
...
There are lots of other ways (reading an appsettings.json file, using Configuration, etc). But it's all typical .NET Logging infrastructure at that point.
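As one concrete sketch of the appsettings.json route mentioned here (assuming a standard "Logging" section in that file, as in the examples earlier in this thread):

```csharp
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureAppConfiguration(config =>
        config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true))
    .ConfigureLogging((context, loggingBuilder) =>
        // Apply the "Logging" section (Default level, per-namespace overrides, etc.)
        // to the Worker's own logging pipeline.
        loggingBuilder.AddConfiguration(context.Configuration.GetSection("Logging")))
    .Build();

host.Run();
```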
This does confuse a lot of folks who expect host.json to work for the worker as well. We didn't want to do this by default (but maybe we should have) as it tends to couple two things that aren't strictly meant to be coupled -- i.e. host settings are not generally applicable to the Worker. Our goal was to make the Worker as much of a typical Console application as possible, but that's clearly made it confusing, especially for those coming from the C# in-proc function world.
There are ways to have host.json apply its logging settings, however. You can do something like this to apply the logging section of host.json to the logging configuration of the Worker:
...
.ConfigureFunctionsWorkerDefaults()
.ConfigureLogging((context, loggingBuilder) =>
{
    // FYI, there's lots of other ways to do this...
    var hostJsonLoggingSection = new ConfigurationBuilder()
        .AddJsonFile("host.json")
        .Build()
        .GetSection("logging");

    loggingBuilder.AddConfiguration(hostJsonLoggingSection);
})
...
In the example you shared above, you've added an additional Console logger to the Worker. I mentioned above that the Host is monitoring stdout as well as the grpc pipeline. In this case, you're emitting your logs to both of these, which results in both logs being emitted.
I don't think I've answered all the things you mentioned... but hopefully this is a start. Logging is one of the unfortunately complicated areas in Functions because of the process split between the Host and the Worker. We have some longer-term visions to make this clearer, but those aren't quite ready yet.
In the meantime, please let me know what other questions were in your comment that I can help explain. Hopefully we can use this issue (and others) as a starting point for some documentation improvements.
Hi @brettsam,
Initial Response
Thank you for that detailed response to a lot of the points that have been raised here. It's interesting that you're highlighting where the confusion is coming from and hinting that there are, of course, technical reasons for it all.
My initial knee-jerk reaction, of course, is to go into "as a consumer, why do I need to be concerned with this?". Of course, the obvious answer is "well, it's just professional to know what you're dealing with".
I think that logging is essentially used in 2 key ways (feel free to correct me if you know something I don't though) ...
To my mind these things have very different needs, but fundamentally we as developers need to ensure we are logging quality information in order to get quality information back out. This is where my question originally came from: whilst I have felt that I can basically "scatter-gun" and log "all the things", I don't feel like I have the fine-grained control over it that I should, and it comes from this sense that there is (as you put it) "not one source of data".
Breaking it down
As you have pointed out ...
My thinking (as a logically minded developer) was the incorrect impression that cloud log stream = local stdout. Given that this premise is wrong, what I would like to see is that I CAN achieve this with some simple setup.
My unified recommendation for all .NET web and function applications
appsettings.json ...

"Logging": {
    "LogLevel": {
        "Default": "Debug",
        "Microsoft": "Warning",
        "Microsoft.Hosting.Lifetime": "Information",
        ...
    }
},
Program.cs ...
...
.ConfigureLogging((context, loggingBuilder) =>
{
    // Proposed API (does not exist today): take level filtering from config only
    loggingBuilder.SourceLevelsFrom = LoggingLevelSource.Config;
    loggingBuilder.AddConsole();
    // add any other providers here (purely optional)
})
...
The result of this would be ...
Other Ideas
Is it worth adding a "Standard Output" page to the cloud environment for functions, one that outputs console output "as it would be seen locally", if the log stream page is not the place for this?
I would also urge Microsoft to consider making this stuff behave such that there is only one way of doing all this. The additional coding and documentation overheads must be a pain for the dev team, who are constantly being asked questions; and when people do genuinely get stuck and reach out to Azure Support, they seem just as confused by it all.
Final thoughts
As always, thank you for the feedback, and it's great to see development staff engaging instead of this being a standard "we have a feedback portal over here" response.
What I've done so far: I built a custom logging provider that pushes messages to a SignalR hub in my app. It couldn't have been more than 100 lines of code or so, but it literally mirrors exactly what I get on the console locally. So when I deploy my applications into the cloud now, I just load up a web page that has the SignalR client set up for that.
Being SignalR, I also get the added benefit that I can now stream cross-application logs (for example, for an API call that results in a workflow execution, I can log the API call details and then log the goings-on in the workflow engine to the same stream).
This, in my case, was the easiest way around the problems I was having, and I accept that it's not a one-size-fits-all situation. But the reason I mention it is that what the log stream page does is fairly similar ... so can we set up, in our application logic, a subscription to an application's stdout stream(s) for existing apps already in the cloud, via a cloud endpoint?
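For anyone curious, a provider like the one described can be sketched roughly as follows. This is an illustrative outline, not the exact code from the app above; the "LogLine" hub method name is hypothetical, and starting/queueing the connection is left out for brevity:

```csharp
using Microsoft.AspNetCore.SignalR.Client;
using Microsoft.Extensions.Logging;

public sealed class SignalRLoggerProvider : ILoggerProvider
{
    private readonly HubConnection _connection;

    // The caller is responsible for calling _connection.StartAsync() at startup.
    public SignalRLoggerProvider(string hubUrl) =>
        _connection = new HubConnectionBuilder()
            .WithUrl(hubUrl)
            .WithAutomaticReconnect()
            .Build();

    public ILogger CreateLogger(string categoryName) =>
        new SignalRLogger(categoryName, _connection);

    public void Dispose() => _connection.DisposeAsync().AsTask().Wait();

    private sealed class SignalRLogger : ILogger
    {
        private readonly string _category;
        private readonly HubConnection _connection;

        public SignalRLogger(string category, HubConnection connection) =>
            (_category, _connection) = (category, connection);

        public IDisposable? BeginScope<TState>(TState state) where TState : notnull => null;

        public bool IsEnabled(LogLevel level) => level >= LogLevel.Debug;

        public void Log<TState>(LogLevel level, EventId id, TState state, Exception? ex,
            Func<TState, Exception?, string> formatter)
        {
            if (!IsEnabled(level)) return;

            // Mirror the local console format: timestamp, level, category, message.
            var line = $"{DateTime.UtcNow:yyyy-MM-ddTHH:mm:ss} [{level}] {_category}: {formatter(state, ex)}";

            // Fire-and-forget; a production version would queue and batch sends.
            _ = _connection.SendAsync("LogLine", line);
        }
    }
}
```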
This might be something that already exists that I'm missing, and if it does, that might be the jumping-off point for my suggestions above.
Again, many thanks for the ongoing support here; it's refreshing to be able to engage on this.
FYI
Completing the post from @brettsam, in #2531 I illustrate how to reliably have full control over your logging experience. This also aligns with both the latest practices and the latest versions of the NuGet packages.
For instance, the latest versions of the Microsoft.Azure.Functions.Worker.ApplicationInsights package adjust the logging behaviour so that logs are no longer relayed to the Host but are instead emitted directly from your worker process to Application Insights.
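For reference, the wiring for that package looks roughly like this (a sketch of the documented setup; see #2531 for the full walkthrough):

```csharp
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        // Emit telemetry directly from the worker process to Application Insights
        services.AddApplicationInsightsTelemetryWorkerService();
        services.ConfigureFunctionsApplicationInsights();
    })
    .Build();

host.Run();
```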
This is a plea to whoever is responsible for the mess that is logging in Azure Functions applications. I am at my wits' end just trying to get simple things to behave in a simple, consistent manner with Azure Functions, and my last-ditch attempt was to rely on debug-level logging calls, which led to this fiasco.
Expecting nothing less than the obvious question "what's my MVP here to reproduce the issue", here we go ...
Modify the project file to this ...
Modify the Program.cs to this ...
var host = new HostBuilder()
    .ConfigureServices((builder, services) =>
    {
        services.AddLogging(loggingBuilder =>
        {
            loggingBuilder.AddSimpleConsole(options =>
            {
                options.SingleLine = true;
                options.TimestampFormat = "hh:mm:ss ";
            });
        });
    })
    .Build();

ILogger log = host.Services.GetService<ILogger>();
log.LogInformation("Hello World!");
log.LogDebug("This is a debug message");

host.Run();

... or with the worker defaults wired up ...

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices((builder, services) =>
    {
        services.AddLogging(loggingBuilder =>
        {
            loggingBuilder.AddSimpleConsole(options =>
            {
                options.SingleLine = true;
                options.TimestampFormat = "hh:mm:ss ";
            });
        });
    })
    .Build();

ILogger log = host.Services.GetService<ILogger>();
log.LogInformation("Hello World!");
log.LogDebug("This is a debug message");

host.Run();

... and the function, with and without an injected logger ...

[Function("Function1")]
public void Run([TimerTrigger("0 */1 * * * *")] MyInfo myTimer)
{
    log.LogInformation("Hello World!");
    log.LogDebug("This is a debug message");
}

[Function("Function1")]
public void Run([TimerTrigger("0 */1 * * * *")] MyInfo myTimer, ILogger log)
{
    log.LogInformation("Hello World!");
    log.LogDebug("This is a debug message");
}

... or the minimal case ...

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .Build();
Azure Functions integrates with Application Insights by storing telemetry events in Application Insights tables. Setting a category log level to any value other than Information will prevent that telemetry from flowing to those tables. As a result, you won't be able to see the related data in Application Insights or in the function's Monitor tab.
Other function app configuration options are managed depending on where the function app runs:
Deployed to Azure: in your application settings.
On your local computer: in the local.settings.json file.

Configurations in host.json related to bindings are applied equally to each function in the function app.
You can also override or apply settings per environment using application settings.
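For example, host.json logging values can be overridden per environment through application settings using the documented AzureFunctionsJobHost__ prefix, with double underscores as the section separator:

```text
AzureFunctionsJobHost__logging__logLevel__default = Debug
```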