Azure / azure-functions-host

The host/runtime that powers Azure Functions
https://functions.azure.com
MIT License

[Custom handlers] Add ability to specify level for each log message #6608

Open anthonychu opened 4 years ago

anthonychu commented 4 years ago

Currently, custom handler logs are an array of strings. Is it possible to allow setting the type (Error, Warning, etc.) of each log message?

We could accept objects instead of strings. But this would be a breaking change.

Another possibility is to support prefixing the string with the level. Something like:

"Logs": [
    "[WARNING] this is a warning"
]

@pragnagopa @yojagad
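
For concreteness, a minimal Go sketch of what the prefix option could look like from the worker side. The response shape (Outputs / Logs / ReturnValue) and the FUNCTIONS_CUSTOMHANDLER_PORT variable follow the documented custom handler contract; the "[WARNING]" prefix is only the proposal above, and the MyQueueTrigger name is a placeholder.

package main

import (
    "encoding/json"
    "log"
    "net/http"
    "os"
)

// invocationResponse mirrors the documented custom handler response payload.
type invocationResponse struct {
    Outputs     map[string]interface{} `json:"Outputs"`
    Logs        []string               `json:"Logs"`
    ReturnValue interface{}            `json:"ReturnValue"`
}

func myQueueTrigger(w http.ResponseWriter, r *http.Request) {
    resp := invocationResponse{
        Outputs: map[string]interface{}{},
        Logs: []string{
            "processing message",          // no prefix: default level
            "[WARNING] this is a warning", // proposed level prefix
        },
    }
    w.Header().Set("Content-Type", "application/json")
    _ = json.NewEncoder(w).Encode(resp)
}

func main() {
    // The host tells the worker which port to listen on.
    port := os.Getenv("FUNCTIONS_CUSTOMHANDLER_PORT")
    if port == "" {
        port = "8080"
    }
    http.HandleFunc("/MyQueueTrigger", myQueueTrigger)
    log.Fatal(http.ListenAndServe(":"+port, nil))
}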

pragnagopa commented 4 years ago

Adding support for prefixing would mean doing a string parse on each log message to figure out the level, which could have a performance impact.

Another option

@anthonychu - Can you please share a design proposal for introducing LogLevel? Also, please assign a priority to the issue. Thanks!

Fishrock123 commented 4 years ago

I am encountering this, and had filed an issue at https://github.com/Azure-Samples/functions-custom-handlers/issues/17

mattias-fjellstrom commented 3 years ago

This issue is highly relevant.

I recently started experimenting with custom handlers in Go, so I've just come across the issue of handling logs. To me it seems a bit strange to return all the logs in the response to the host. What if the relative timing of the logs is important? This information is lost in the current solution.

A natural solution would allow me to use any logging framework (maybe as simple as fmt.Println(...)) to send my logs to stdout and have them appear in Application Insights with sensible values for severity, etc. With the current solution, logs sent to stdout do not include the invocation context, so I can't tell which function or which function invocation my log comes from.

Arnavion commented 3 years ago

For my custom functions I just write to stdout and stderr from the worker process and ignore the Logs array in the HTTP response entirely. That gives me two levels of logging (stdout == info, stderr == error). Not ideal, but better than encoding the level into the message.

It also removes the need to encode timestamps into the message, since the host already has callbacks to log stdout and stderr as soon as it gets them.
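
A minimal sketch of that setup in Go, using the stdout == info / stderr == error mapping described above (no timestamps in the messages, since the host logs them as soon as it reads them):

package main

import (
    "log"
    "os"
)

var (
    infoLog  = log.New(os.Stdout, "", 0) // picked up from stdout, surfaced as info
    errorLog = log.New(os.Stderr, "", 0) // picked up from stderr, surfaced as error
)

func main() {
    infoLog.Println("handling invocation")
    errorLog.Println("something went wrong")
}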

mattias-fjellstrom commented 3 years ago

@Arnavion I also went in that direction. I log everything to stdout in a custom JSON format, and then I utilise the parse_json() (reference) function when I query my logs in Application Insights. This allows me to get sensible logs with the levels I want. It is not ideal, and you need to spend a little time setting up the queries (and saving them).
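
A rough sketch of that approach in Go: one JSON object per stdout line, carrying whatever fields you later want to pull back out with parse_json() in your queries. The field names are just illustrative.

package main

import (
    "encoding/json"
    "os"
    "time"
)

// logEntry is one structured log record; Encode emits it as a single line.
type logEntry struct {
    Time    string `json:"time"`
    Level   string `json:"level"`
    Message string `json:"message"`
}

func logJSON(level, message string) {
    _ = json.NewEncoder(os.Stdout).Encode(logEntry{
        Time:    time.Now().UTC().Format(time.RFC3339),
        Level:   level,
        Message: message,
    })
}

func main() {
    logJSON("warning", "queue is backing up")
    logJSON("info", "processed 42 messages")
}

On the query side, parse_json() over the logged message can then be used to filter on the level field, as described above.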

I like how AWS has solved this in CloudWatch with CloudWatch Logs Insights. You just log to stdout in JSON format, it is automatically parsed behind the scenes, and you can search on any field in your JSON log out of the box.

Arnavion commented 3 years ago

Yes. Azure Functions' "logs in the HTTP response as a JSON array" method (the topic of this issue) is fundamentally a non-starter. Easiest example: any function that hangs and is killed because of a timeout will never get around to writing the HTTP response out, so it will lose all its logs, which are precisely the logs you'd want the most. It also requires your programming language / libraries to support writing a custom logger that aggregates logs into an array scoped to the request handler, which is a non-trivial task in and of itself. (Loggers are usually global, and multiple Function invocations can run in parallel.)
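
To make the "non-trivial" point concrete, here is roughly the plumbing a Go worker would need: a per-invocation collector threaded through every call site, since a global logger cannot tell parallel invocations apart (illustrative only; the names are made up):

package main

import (
    "fmt"
    "sync"
)

// invocationLogs collects the messages for a single invocation.
type invocationLogs struct {
    mu      sync.Mutex
    entries []string
}

// Logf records one message for the invocation that owns this collector.
func (l *invocationLogs) Logf(format string, args ...interface{}) {
    l.mu.Lock()
    defer l.mu.Unlock()
    l.entries = append(l.entries, fmt.Sprintf(format, args...))
}

func handleInvocation() []string {
    logs := &invocationLogs{} // one collector per invocation, not a global
    logs.Logf("step %d done", 1)
    return logs.entries // what would end up in the response's "Logs" array
}

func main() {
    fmt.Println(handleInvocation())
}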

The only sane logging a custom handler can do is via stdout/err.

So the ideal solution is to parse logs from the worker process as they're generated, in some format that allows at least a log level and ideally full structured logging like .NET Functions have.

Arnavion commented 3 years ago

Another approach I've been prototyping for my functions is to skip the middleman and log to Log Analytics directly, via the Data Collector API: https://docs.microsoft.com/en-au/azure/azure-monitor/logs/data-collector-api#sample-requests.

Pros:

Cons:
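
As a rough illustration of the direct-ingestion approach (not the author's actual code), a Go sketch that follows the request format in the linked Data Collector API docs; the workspace ID, shared key, and the CustomHandlerLogs table name are placeholders:

package main

import (
    "bytes"
    "crypto/hmac"
    "crypto/sha256"
    "encoding/base64"
    "fmt"
    "net/http"
    "time"
)

// buildSignature computes the SharedKey signature described in the API docs:
// HMAC-SHA256 over the canonical string, keyed with the base64-decoded workspace key.
func buildSignature(sharedKey, date string, contentLength int) (string, error) {
    stringToSign := fmt.Sprintf("POST\n%d\napplication/json\nx-ms-date:%s\n/api/logs", contentLength, date)
    key, err := base64.StdEncoding.DecodeString(sharedKey)
    if err != nil {
        return "", err
    }
    mac := hmac.New(sha256.New, key)
    mac.Write([]byte(stringToSign))
    return base64.StdEncoding.EncodeToString(mac.Sum(nil)), nil
}

// sendLog posts a JSON array of records to the workspace's data collector endpoint.
func sendLog(workspaceID, sharedKey string, body []byte) error {
    date := time.Now().UTC().Format(http.TimeFormat) // RFC 1123 date in GMT
    signature, err := buildSignature(sharedKey, date, len(body))
    if err != nil {
        return err
    }
    uri := fmt.Sprintf("https://%s.ods.opinsights.azure.com/api/logs?api-version=2016-04-01", workspaceID)
    req, err := http.NewRequest(http.MethodPost, uri, bytes.NewReader(body))
    if err != nil {
        return err
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", fmt.Sprintf("SharedKey %s:%s", workspaceID, signature))
    req.Header.Set("Log-Type", "CustomHandlerLogs") // shows up as CustomHandlerLogs_CL
    req.Header.Set("x-ms-date", date)
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    return resp.Body.Close()
}

func main() {
    _ = sendLog("<workspace id>", "<shared key>",
        []byte(`[{"level":"Warning","message":"this is a warning"}]`))
}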

Jason5Lee commented 2 years ago

I ran into this issue a while ago. I decided to just use the Information and Error levels by printing logs to either stdout or stderr. Then I realized that even when simply printing to stderr, the Azure Log Stream seems to be able to "infer" the log level, maybe from the log content.

[screenshot: Log Stream output showing two messages assigned different severities]

Both of these messages are simply printed to stderr. Is there any documentation of this behavior?

Jason5Lee commented 2 years ago

It seems that as long as the message contains "warn" (ignoring capitalization), it will be treated as a warning.

horizondeep commented 8 months ago

Hi @pragnagopa / @fabiocav, I am facing the same issue: logs work fine locally in VS Code, but after deployment I cannot see them in the panel. I did try setting RUST_LOG as an environment variable, but still no luck; the logs are not printed. I am using env_logger and log from Rust at the "info" level.