Delgan / loguru

Python logging made (stupidly) simple

Duplicate logs with SQLAlchemy after intercept #765

Open singhish opened 1 year ago

singhish commented 1 year ago

Hey @Delgan! Neat tool here that I am trying to use in one of my projects.

I am trying to integrate SQLAlchemy's logging with Loguru using your InterceptHandler class exactly as it is written in the docs, but am still getting duplicate logs from SQLAlchemy. Here is a minimal example that shows what I'm talking about:

# example.py
import logging

from loguru import logger
from sqlalchemy import Column, String, create_engine
from sqlalchemy.ext.declarative import declarative_base

class InterceptHandler(logging.Handler):
    def emit(self, record):
        # Get corresponding Loguru level if it exists
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find caller from where the logged message originated
        frame, depth = logging.currentframe(), 2
        while frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(
            level, record.getMessage()
        )

logging.basicConfig(handlers=[InterceptHandler()], level=0)

db_url = "sqlite:///./test.db"

engine = create_engine(db_url, echo="debug")

Base = declarative_base()

class Test(Base):
    __tablename__ = "test"

    id = Column(String, primary_key=True, index=True)
    data = Column(String)

Base.metadata.create_all(engine)

This is the output I desire:

$ python example.py
2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - BEGIN (implicit)
2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - PRAGMA main.table_info("test")
2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - [raw sql] ()
2022-12-16 14:31:52.504 | DEBUG    | sqlalchemy.log:log:179 - Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - PRAGMA temp.table_info("test")
2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - [raw sql] ()
2022-12-16 14:31:52.505 | DEBUG    | sqlalchemy.log:log:179 - Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - 
CREATE TABLE test (
        id VARCHAR NOT NULL, 
        data VARCHAR, 
        PRIMARY KEY (id)
)

2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - [no key 0.00007s] ()
2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - CREATE INDEX ix_test_id ON test (id)
2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - [no key 0.00010s] ()
2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - COMMIT

But this is the output I'm actually getting. Note the duplicate lines coming from SQLAlchemy's own internal logging:

$ python example.py 
2022-12-16 14:31:52,504 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - BEGIN (implicit)
2022-12-16 14:31:52,504 INFO sqlalchemy.engine.Engine PRAGMA main.table_info("test")
2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - PRAGMA main.table_info("test")
2022-12-16 14:31:52,504 INFO sqlalchemy.engine.Engine [raw sql] ()
2022-12-16 14:31:52.504 | INFO     | sqlalchemy.log:log:179 - [raw sql] ()
2022-12-16 14:31:52,504 DEBUG sqlalchemy.engine.Engine Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
2022-12-16 14:31:52.504 | DEBUG    | sqlalchemy.log:log:179 - Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
2022-12-16 14:31:52,505 INFO sqlalchemy.engine.Engine PRAGMA temp.table_info("test")
2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - PRAGMA temp.table_info("test")
2022-12-16 14:31:52,505 INFO sqlalchemy.engine.Engine [raw sql] ()
2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - [raw sql] ()
2022-12-16 14:31:52,505 DEBUG sqlalchemy.engine.Engine Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
2022-12-16 14:31:52.505 | DEBUG    | sqlalchemy.log:log:179 - Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
2022-12-16 14:31:52,505 INFO sqlalchemy.engine.Engine 
CREATE TABLE test (
        id VARCHAR NOT NULL, 
        data VARCHAR, 
        PRIMARY KEY (id)
)

2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - 
CREATE TABLE test (
        id VARCHAR NOT NULL, 
        data VARCHAR, 
        PRIMARY KEY (id)
)

2022-12-16 14:31:52,505 INFO sqlalchemy.engine.Engine [no key 0.00007s] ()
2022-12-16 14:31:52.505 | INFO     | sqlalchemy.log:log:179 - [no key 0.00007s] ()
2022-12-16 14:31:52,506 INFO sqlalchemy.engine.Engine CREATE INDEX ix_test_id ON test (id)
2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - CREATE INDEX ix_test_id ON test (id)
2022-12-16 14:31:52,506 INFO sqlalchemy.engine.Engine [no key 0.00010s] ()
2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - [no key 0.00010s] ()
2022-12-16 14:31:52,506 INFO sqlalchemy.engine.Engine COMMIT
2022-12-16 14:31:52.506 | INFO     | sqlalchemy.log:log:179 - COMMIT

Package versions:

python = "^3.10"
loguru = "^0.6.0"
sqlalchemy = "^1.4.45"

Any ideas on how to redirect SQLAlchemy's output to Loguru without all the duplicate lines? Surely I'm not the first person to have run into this issue.

Thanks, and once again, lots of love for Loguru!

Delgan commented 1 year ago

Hi @singhish, thanks for the reproducible example.

I guess you need to do the following after calling basicConfig():

logging.getLogger("sqlalchemy").setLevel("DEBUG")
db_url = "sqlite:///./test.db"
engine = create_engine(db_url, echo=False)

More details are available in the SQLAlchemy documentation, under the "Configuring Logging" section.
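
For a bit of background: as far as I can tell, the duplicated lines come from the plain StreamHandler that SQLAlchemy attaches to its own "sqlalchemy.engine.Engine" logger when echo is enabled and that logger has no handlers of its own. Each record is then printed once by that handler and once more by the InterceptHandler on the root logger. Disabling echo and raising the logger level yourself avoids the extra handler. Here is a minimal sketch of the adjusted tail of example.py (the InterceptHandler above stays unchanged; targeting the parent "sqlalchemy" logger at DEBUG is just my way of reproducing echo="debug"):

logging.basicConfig(handlers=[InterceptHandler()], level=0)

# Ask for verbose SQLAlchemy output by raising the logger level rather than
# via `echo`, so SQLAlchemy does not install its own StreamHandler.
logging.getLogger("sqlalchemy").setLevel(logging.DEBUG)

db_url = "sqlite:///./test.db"

# With `echo=False`, records still propagate to the root logger, where the
# InterceptHandler forwards them to Loguru exactly once.
engine = create_engine(db_url, echo=False)

With that change, each statement should appear only once, formatted by Loguru as in your desired output.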