Closed sergeykad closed 5 months ago
Hi.
If one routes logs to a Kafka handler but also uses logging within that Kafka handler, it leads to infinite recursion. You need to filter out the logs emitted from within your Kafka handler.
Here is a simplified example:
```python
import sys

from loguru import logger


def kafka_sink(message):
    logger.debug("Within the kafka sink")
    print("[KAFKA]", message, end="")


def avoid_recursion(record):
    # Drop records emitted from inside the sink itself.
    return record["function"] != "kafka_sink"


if __name__ == "__main__":
    logger.remove()
    logger.add(sys.stderr)
    logger.add(kafka_sink, filter=avoid_recursion)

    logger.info("First message")
    logger.debug("Another message")
```
Thanks @Delgan.
Unfortunately, I wasn't able to solve the problem using your suggestion. Still, it gave me the idea of solving it by setting a thread-local variable in the relevant `logging.Handler` implementation's `emit` method and checking the variable before forwarding any messages.
Depending on your use case, using a thread-local is another viable solution, yes. :+1:
Hi, I redirected the standard Python logs to Loguru according to this example. Additionally, I have a `logging.Handler` implementation that sends log messages to Kafka. This combination causes the following error:
```
RuntimeError: Could not acquire internal lock because it was already in use (deadlock avoided). This likely happened because the logger was re-used inside a sink, a signal handler or a '__del__' method. This is not permitted because the logger and its handlers are not re-entrant.
```
The issue happens when Kafka tries logging messages while writing another log message to a Kafka topic.
Is there a way to detect that this problem will happen during interception of the standard Python logging and filter out such messages? Is there a better approach to solving this problem?
I use Kafka for other things besides logging, so filtering out all Kafka logging is not desirable.