Reading... at a glance we have the level
parameter so you can use a different logging level. The original ask from customers was to copy Powertools formatting and log level to 3P logging.
Lemme read fully and get back to you today.
PS: It's uncommon for us to take this long to reply to a "bug" issue, please accept our apologies as our on-call shift had an unexpected change.
Appreciate the patience. I agree with the pain - what would a good experience look like for you with an additional parameter to skip touching log levels?
Now, please allow me to add some context for anyone reading in the future (maintainers included).
I would expect these levels to be preserved when copying them over to the powertools logger. Overriding higher levels to lower levels causes undesired messages to be displayed.
That's expected and came from an original pain reported by a diverse set of customers unfamiliar with logging in Python. To summarize our previous Slack (now Discord) discussions and that issue, it boiled down to:
"As a customer, I want
boto3
et al to have the same JSON structure as Powertools when they log, and control the logging level from Powertools.""As a customer, I want optionally choose which third-party loggers to change or to exclude when copying from my Logger"
"As a customer, I want to copy my Logger JSON structure and logging level to all possible loggers in my app"
With those in mind, by default it includes all loggers. You can use include to name explicit loggers to modify, or exclude to name loggers to leave untouched (an exception basis).
As a side effect of this ask, output can get noisy, so customers adjust as they see fit.
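For anyone landing here later, a minimal sketch of the include/exclude behaviour described above, based on the documented copy_config_to_registered_loggers API (logger names and values are illustrative):

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging import utils

logger = Logger(service="payment", level="INFO")

# Default: copy the Powertools formatter and log level to every registered logger
utils.copy_config_to_registered_loggers(source_logger=logger)

# Only touch an explicit set of loggers...
utils.copy_config_to_registered_loggers(source_logger=logger, include={"botocore"})

# ...or touch everything except the ones listed
utils.copy_config_to_registered_loggers(source_logger=logger, exclude={"noisy_library"})
```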
That said, we could surely add a new parameter to skip touching log levels, update the docs, and explain why that's desirable in the grand scheme of things (e.g., the logging hierarchy is a common source of confusion for devs and non-devs alike).
Does that sum it up?
This is the block of code we'd be improving once we hear from you: https://github.com/aws-powertools/powertools-lambda-python/blob/583bacc33e0d15137360105c024e71c577d032e3/aws_lambda_powertools/logging/utils.py#L9
Look forward to hearing from you :)
Thank you for the thorough response and the history. An option to preserve the existing levels works for me, even if it's not the default.
The one thing that still seems potentially counter-intuitive is that nested loggers aren't touched by the existing code. For example, if boto3 is set to CRITICAL and the powertools logger is set to INFO, boto3 will be reconfigured and you will see INFO messages; but if boto3.resources is set to CRITICAL, it will be skipped because the logic does not recurse, and its INFO messages will not be displayed. I personally would prefer to set my own levels at whatever granularity I intend, but it may be counter-intuitive to users who rely on this behavior to make all messages visible at the level they set in the powertools logger. Not sure of the solution offhand - lots of loggers get defined in modules that are never assigned a specific level, and in most cases it makes sense to leave NOTSET (level 0) alone so that it just bubbles up to wherever a level is set. You could set all the top-level loggers and leave all nested loggers at NOTSET with propagate=True.
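A quick standard-library illustration of the hierarchy point above (no Powertools involved): children left at NOTSET defer to their parent, while a child with an explicit level keeps it regardless of what the parent is set to.

```python
import logging

parent = logging.getLogger("boto3")
child = logging.getLogger("boto3.resources")

# The child has no explicit level (NOTSET), so it inherits from "boto3"
parent.setLevel(logging.CRITICAL)
assert child.getEffectiveLevel() == logging.CRITICAL

# Reconfiguring only the parent to INFO carries the child along...
parent.setLevel(logging.INFO)
assert child.getEffectiveLevel() == logging.INFO

# ...but once the child has its own explicit level, the parent no longer matters
child.setLevel(logging.CRITICAL)
parent.setLevel(logging.DEBUG)
assert child.getEffectiveLevel() == logging.CRITICAL
```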
Thank you so much again @CausticYarn for your great thoughts - I'll be tackling this today and will ping you for a review if you don't mind.
Edit: Tackling a bug first: https://github.com/aws-powertools/powertools-lambda-python/pull/4295
PR is up: #4299
I wanted to brush up the docs a bit more but it got late here, so I added the bare minimum for now. Part of me is tempted to make this the default behaviour in v3; the other part knows a great chunk of customers use this feature in combination with log sampling, where the DEBUG log level is set for a percentage of cold starts across all loggers.
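For context on that sampling interaction, a sketch assuming the documented sample_rate parameter: for roughly 10% of cold starts the Logger drops its effective level to DEBUG, and copying the config then pushes that DEBUG level onto third-party loggers too.

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging import utils

# ~10% of cold starts will emit DEBUG logs; the rest stay at INFO
logger = Logger(service="payment", level="INFO", sample_rate=0.1)

# Whatever level sampling picked is copied to all registered loggers as well
utils.copy_config_to_registered_loggers(source_logger=logger)
```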
One of the reasons we reverted propagate=True was that the Lambda environment touches the root logger :/
Edit: suffice to say we'll include it in the next release so you can use it :)
This is now released in version 2.38.0!
Expected Behaviour
I often use a logging config for fine-grained control over the logging level of third-party dependencies in a Lambda application. I often need to set the overall level of an application to DEBUG or INFO, but there are some very chatty libraries that obscure useful information, so I set them to WARNING or ERROR. I would expect these levels to be preserved when copying them over to the powertools logger.
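For illustration, a config of the kind described above might look like this (library names are placeholders, not from the original report):

```python
import logging.config

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "root": {"level": "DEBUG"},
    "loggers": {
        # Chatty third-party libraries pinned above the app-wide level
        "botocore": {"level": "WARNING"},
        "urllib3": {"level": "ERROR"},
    },
})
```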
Current Behaviour
Current behavior is to set all existing loggers to the level of the powertools logger. It's unclear why this is necessary, as levels below the powertools logger won't get displayed anyway (e.g. DEBUG on a library when the powertools logger is INFO), and overriding higher levels to lower levels causes undesired messages to be displayed (e.g. overriding WARNING and setting it to INFO). At the very least this should be configurable. It also has odd behavior: if the logging configuration has a level set one level down, e.g. chatty_library.submodule, that does not get overridden, so if it's higher than chatty_library it will get blocked.
Code snippet
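The original snippet is not reproduced here; the following is a minimal sketch of the behaviour described above (chatty_library is a placeholder name):

```python
import logging

from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging import utils

logging.getLogger("chatty_library").setLevel(logging.WARNING)
logging.getLogger("chatty_library.submodule").setLevel(logging.ERROR)

logger = Logger(service="app", level="INFO")
utils.copy_config_to_registered_loggers(source_logger=logger)

# Per the behaviour described above: chatty_library is overridden to INFO
# (its WARNING setting is lost), while chatty_library.submodule keeps ERROR
# because the copy does not recurse into nested loggers.
print(logging.getLogger("chatty_library").level)            # 20 (INFO)
print(logging.getLogger("chatty_library.submodule").level)  # 40 (ERROR)
```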
Possible Solution
Don't override the log level in existing loggers, or make it configurable as an option to pass to copy_config_to_registered_loggers.
It does seem that setting the logging config AFTER copying the loggers may set the values to the desired state, but I'm not sure if there are any side effects from doing it in this order.
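A sketch of that ordering workaround: re-apply the fine-grained levels after the copy, using an incremental dictConfig so existing handlers and formatters are left in place (side effects of this ordering are untested here):

```python
import logging.config

from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging import utils

logger = Logger(service="app", level="INFO")
utils.copy_config_to_registered_loggers(source_logger=logger)

# Re-apply the desired third-party levels AFTER the copy so they win
logging.config.dictConfig({
    "version": 1,
    "incremental": True,  # adjust levels only; existing handlers are kept
    "loggers": {
        "chatty_library": {"level": "WARNING"},  # placeholder name
    },
})
```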
Steps to Reproduce
See code sample.
Powertools for AWS Lambda (Python) version
latest
AWS Lambda function runtime
3.11
Packaging format used
PyPi
Debugging logs
No response