-
## Enhancement Request
### Enhancement Overview
When viewing application logs, it is often useful to see contextual properties that help interpret them. For instance, when viewing a log message…
-
```
import sys

from loguru import logger

LOG_FORMAT = "{time:YYYY-MM-DD HH:mm:ss.SSS} | {level} | {extra} | {message}"
logger.remove()
logger.add(sys.stdout, format=LOG_FORMAT)
logger.info(f"{'{'}", e = "extra1"…
```
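The truncated snippet above attaches contextual properties through loguru's `{extra}` format field. A stdlib analogue of the same idea, assuming plain `logging` and a hypothetical `ctx` field injected via a `LoggerAdapter` (the field name and values are illustrative, not from the original report):

```python
import io
import logging

# Capture log output in a buffer so the contextual field is visible.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
# "%(ctx)s" is a user-defined record attribute (hypothetical name),
# analogous to loguru's "{extra}" placeholder.
handler.setFormatter(
    logging.Formatter("%(asctime)s | %(levelname)s | %(ctx)s | %(message)s")
)

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# LoggerAdapter injects the contextual properties into every record it emits.
contextual = logging.LoggerAdapter(logger, {"ctx": "request_id=42"})
contextual.info("user logged in")

print(buffer.getvalue().strip())
```

Every message routed through `contextual` carries the same `ctx` value, so the context does not have to be repeated at each call site.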
-
Currently, Ruff uses a set of hard-coded name "patterns" to determine whether a module-local object might be a logger (for the purposes of, e.g., the `flake8-logging-format` G rules).
https://github.com/…
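For context, a minimal sketch of the kind of code involved — a module-local name that linters conventionally treat as a logger, triggering G-rule checks such as G004 (f-string in a logging call). This illustrates the convention, not Ruff's actual detection logic:

```python
import logging

# Names like "logger" or "log" are the conventional patterns linters
# look for when deciding whether logging-format checks should apply.
logger = logging.getLogger(__name__)

user = "alice"
# G004-style violation: the f-string is interpolated eagerly,
# even when the log level would suppress the message.
logger.info(f"user {user} logged in")
# Preferred: defer interpolation to the logging framework.
logger.info("user %s logged in", user)
```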
-
Hi!
My Caddy and management containers are stuck in a reboot loop; please see the following Docker logs:
```
docker ps -a
CONTAINER ID IMAGE COMMAND CR…
```
-
## Some ideas for implementing logging:
- Log only errors to a single log file.
- Log both errors and successful usage to a single log file.
- Log errors and successful usage to separate files in a dedicated folder…
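The last idea above can be sketched with the stdlib `logging` module — errors to one file, successful usage to another, both in a dedicated folder. The folder and file names here are hypothetical placeholders:

```python
import logging
import os

os.makedirs("logs", exist_ok=True)  # hypothetical dedicated folder

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)

# Errors get their own file...
error_handler = logging.FileHandler("logs/errors.log")
error_handler.setLevel(logging.ERROR)

# ...while successful usage goes to a second file; the filter
# keeps ERROR-and-above records out of the usage log.
usage_handler = logging.FileHandler("logs/usage.log")
usage_handler.setLevel(logging.INFO)
usage_handler.addFilter(lambda record: record.levelno < logging.ERROR)

for handler in (error_handler, usage_handler):
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)

logger.info("login succeeded")
logger.error("payment failed")
```

Dropping the filter on `usage_handler` turns this into the second option (errors and usage combined in one file).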
-
A no-op logger makes issues extremely hard to debug: it cost us a few hours to find out why our op e2e test failed with no error.
We are now changing it to:
```
loggerConfig := zap.NewDevelopmentC…
```
-
Currently the logs are handled manually with `console` messages; they should instead use our logger package to follow our usual display standards and logic.
https://github.com/ubiquity/ubiquibot-logger
…
-
PyTorch Lightning has a Logger abstraction, through which it interfaces with Wandb, CometML, etc.
https://lightning.ai/docs/pytorch/stable/extensions/logging.html
Any plans for something similar …
-
The log level can currently be configured on a logger-by-logger basis (see "Per-level overrides" [here](https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#logger)):
…
-
It seems that using this transport prevents me from changing the pino default log level.
For example:
```
import pino from 'pino';
const logger = pino({
transport: {
targets: [
…
```