Delgan / loguru

Python logging made (stupidly) simple
MIT License

Write queued messages to sink after explicit logger.flush(requeue: Optional[bool] = False) #1209

Open reneleonhardt opened 1 week ago

reneleonhardt commented 1 week ago

My use case: I would like to print multiple tqdm progress bars and log messages at the "same" time in an asyncio app. Currently, all writes to stdout/stderr are interleaved while tqdm is refreshing the progress bars multiple times per second.

I just saw the logger.add(enqueue=True) option. Could _queued_writer() be enhanced to defer writing/flushing to the sink until a command like logger.flush() is explicitly called? In my case, that would be after all progress bars have been closed (and perhaps removed with leave=False), once tqdm stops updating them. Every message would then be logged normally, long after it was recorded, but with the original timestamp from when the event was queued minutes earlier. flush(requeue=True) would re-enable queuing; otherwise the logger would automatically call configure(enqueue=False) on itself to stop queuing future log messages.

After some debugging I found a workaround that retains the default message format and color / escape codes. Sorry for the noise, and thank you for this amazing library!

import io

from loguru import logger

# Route log output into an in-memory buffer, keeping color / escape codes.
logger.configure(handlers=[dict(sink=(stringio := io.StringIO()), colorize=True)])
logger.info("First")
print("Second")

# The buffered log lines are only written out when explicitly requested.
print(stringio.getvalue())
# Second
# 2024-09-21 00:00:00.000 | INFO     | __main__:<module>:6 - First
Delgan commented 1 week ago

Sorry, I didn't fully grasp your problem but maybe you can benefit from logger.complete(). It's designed to process all pending messages in the queue (equivalent to your "flush" idea?).