Delgan / loguru

Python logging made (stupidly) simple

Loguru Handler error #1091

Open yuanjie-ai opened 6 months ago

yuanjie-ai commented 6 months ago

--- Logging error in Loguru Handler #1 ---
Record was: {'elapsed': datetime.timedelta(seconds=435, microseconds=812143), 'exception': None, 'extra': {}, 'file': (name='glogging.py', path='/www/server/pyporject_evn/versions/3.10.0/lib/python3.10/site-packages/gunicorn/glogging.py'), 'function': 'error', 'level': (name='ERROR', no=40, icon='❌'), 'line': 272, 'message': 'Worker (pid:2424837) was sent code 134!', 'module': 'glogging', 'name': 'gunicorn.glogging', 'process': (id=2424587, name='MainProcess'), 'thread': (id=140529604818752, name='MainThread'), 'time': datetime(2024, 2, 27, 13, 14, 37, 871067, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), 'CST'))}
Traceback (most recent call last):
  File "/www/server/pyporject_evn/versions/3.10.0/lib/python3.10/site-packages/loguru/_handler.py", line 200, in emit
    with self._protected_lock():
  File "/www/server/pyporject_evn/versions/3.10.0/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/www/server/pyporject_evn/versions/3.10.0/lib/python3.10/site-packages/loguru/_handler.py", line 114, in _protected_lock
    raise RuntimeError(
RuntimeError: Could not acquire internal lock because it was already in use (deadlock avoided). This likely happened because the logger was re-used inside a sink, a signal handler or a '__del__' method. This is not permitted because the logger and its handlers are not re-entrant.
--- End of logging error ---

bneijt commented 6 months ago

Could you share more context: how can you reproduce this?

Delgan commented 6 months ago

As indicated in the RuntimeError exception, this problem occurs because Loguru's logging methods were somehow called recursively.

The logs you shared seem to indicate the issue originated from a Gunicorn function: https://github.com/benoitc/gunicorn/blob/88fc4a43152039c28096c8ba3eeadb3fbaa4aff9/gunicorn/glogging.py#L272

Can you give more details about the handlers you've configured? Do you make use of Gunicorn within Loguru-related functions?
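For reference, the simplest way to trigger this error (a minimal sketch, unrelated to Gunicorn) is a sink that calls the logger itself:

from loguru import logger

def bad_sink(message):
    # Re-entrant call: the sink logs through the same logger while the
    # handler's internal lock is still held, so Loguru raises the
    # "deadlock avoided" RuntimeError and reports the logging error.
    logger.info("inside the sink")
    print(message, end="")

logger.remove()
logger.add(bad_sink)
logger.warning("hello")  # produces a "--- Logging error in Loguru Handler ---" report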

jprmartinho commented 6 months ago

Hi @Delgan, first of all, thank you for loguru!

I am having a similar issue when I try to intercept logs from a 3rd-party app:

--- Logging error in Loguru Handler #1 ---
Record was: {'elapsed': datetime.timedelta(seconds=19, microseconds=527500), 'exception': None, 'extra': {}, 'file': (name='saver.py', path='/home/coder/workspace/pckg-sf/pckg/.venv/lib/python3.9/site-packages/io_saver/io/saver.py'), 'function': 'save_text', 'level': (name='INFO', no=20, icon='ℹ️'), 'line': 116, 'message': 'Saving text...', 'module': 'saver', 'name': 'io_saver.io.saver', 'process': (id=131049, name='MainProcess'), 'thread': (id=140688262096704, name='MainThread'), 'time': datetime(2024, 3, 12, 10, 59, 30, 546150, tzinfo=datetime.timezone(datetime.timedelta(0), 'UTC'))}
Traceback (most recent call last):
  File "/home/coder/workspace/pckg-sf/pckg/.venv/lib/python3.9/site-packages/loguru/_handler.py", line 200, in emit
    with self._protected_lock():
  File "/opt/python/lib/python3.9/contextlib.py", line 119, in __enter__
    return next(self.gen)
  File "/home/coder/workspace/pckg-sf/pckg/.venv/lib/python3.9/site-packages/loguru/_handler.py", line 114, in _protected_lock
    raise RuntimeError(
RuntimeError: Could not acquire internal lock because it was already in use (deadlock avoided). This likely happened because the logger was re-used inside a sink, a signal handler or a '__del__' method. This is not permitted because the logger and its handlers are not re-entrant.
--- End of logging error ---

I followed loguru's documentation and created an intercept handler:

import inspect
import logging
from typing import Union

from loguru import logger


class InterceptHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        # Get corresponding Loguru level if it exists.
        level: Union[str, int]
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find caller from where originated the logged message.
        frame, depth = inspect.currentframe(), 0
        while frame and (depth == 0 or frame.f_code.co_filename == logging.__file__):
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())

And in my _config.py Python module (which runs before everything else), I call logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True).

After this, the pipeline starts running and calls this setup_custom_logger function, where I add my ObjectStorageLogHandler to the Loguru logger:

import logging
import sys
from typing import TextIO

from loguru import logger

# save_text comes from the 3rd-party saver package (io_saver); LOG_LEVEL_OBJS,
# LOG_LEVEL_STDERR and LOG_FORMAT are constants defined elsewhere in my config.


class ObjectStorageLogHandler(logging.Handler):
    """Object storage logging handler: saves formatted records to object storage."""

    def __init__(self, path: str) -> None:
        logging.Handler.__init__(self)
        self.path: str = path

    def emit(self, record: logging.LogRecord) -> None:
        """
        Trigger each time that there is a log event.

        Args:
            record (logging.LogRecord): Logger record to be saved.
        """
        save_text(data=self.format(record), filepath=self.path)

def setup_custom_logger(objs_path: str, sink_text_io: TextIO = sys.stderr) -> None:
    """Setup logging

    The remove() method is called first to remove the configuration of previous handlers.
    The add() method adds 2 new handlers to the logger.
        1. The ObjectStorageLogHandler log handler logs to the object storage, and only records logs with INFO severity or greater.
        2. The sys.stderr log handler logs to the standard error, and only records logs with WARNING severity or greater.

    Args:
        objs_path (str): object storage path.
        sink_text_io (TextIO, optional): text io sink. Defaults to sys.stderr.
    """
    logger.remove()

    logger.add(
        ObjectStorageLogHandler(path=objs_path),
        level=LOG_LEVEL_OBJS,
        format=LOG_FORMAT,
        backtrace=False,
        diagnose=False,
    )

    logger.add(
        sink_text_io,
        level=LOG_LEVEL_STDERR,
        format=LOG_FORMAT,
        backtrace=False,
        diagnose=False,
    )
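For context, the pipeline then calls it roughly like this (the path below is just a placeholder):

# Hypothetical call site; the object storage path is a placeholder.
setup_custom_logger(objs_path="my-bucket/pipeline/run.log")
logger.info("Pipeline started")  # dispatched to the sinks according to their levels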

What am I doing wrong here? I want to route the intercepted standard logging messages to these Loguru sinks.

Thank you and see you soon

jprmartinho commented 5 months ago

Hi,

I got it, it's working!

The problem was within the 3rd-party saver Python module, which was being used in the emit method of the logging handler that I pass to Loguru as a sink. Within this saver, a logger was being initialized and used, so its records were redirected back to my Loguru sinks (because of the intercept handler), which caused the re-entrant call.

Thank you
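For anyone hitting the same thing: one possible way to break the cycle (a sketch; it assumes the saver logs under the "io_saver" logger name, as the record above suggests) is to stop that logger's records from propagating to the root logger where the InterceptHandler lives:

import logging

# Sketch of a mitigation (assumes the saver's loggers live under "io_saver"):
# their records no longer propagate to the root logger, so they never reach
# the InterceptHandler and never re-enter the Loguru sinks.
logging.getLogger("io_saver").propagate = False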

Delgan commented 5 months ago

@jprmartinho Sorry for not answering sooner, but glad you figured it out.

It's quite unfortunate, but yes, indirectly using the logger within a sink can cause this surprising error.
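To make the indirect path concrete, here is a simplified, hypothetical reproduction of the cycle described in this thread (the handler and saver below are stand-ins, not the actual code):

import logging

from loguru import logger

class InterceptEverything(logging.Handler):
    # Simplified stand-in for the InterceptHandler above.
    def emit(self, record: logging.LogRecord) -> None:
        logger.log(record.levelname, record.getMessage())

def fake_save_text(data: str) -> None:
    # Stand-in for the 3rd-party saver: it logs through standard logging internally.
    logging.getLogger("io_saver").info("Saving text...")

class ObjectStorageStandIn(logging.Handler):
    # Stand-in for ObjectStorageLogHandler: calls the saver from within a Loguru sink.
    def emit(self, record: logging.LogRecord) -> None:
        fake_save_text(self.format(record))

logging.basicConfig(handlers=[InterceptEverything()], level=0, force=True)
logger.remove()
logger.add(ObjectStorageStandIn())

# Loguru sink -> saver -> standard logging -> interceptor -> Loguru again:
# the handler's lock is already held, so the "deadlock avoided" error is reported.
logger.info("hello")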