spearsem opened this issue 5 years ago
a compromise may be to have a two-level object: an outer one we install in sys.std* and an inner one.
Then both get to keep track of the test state, and we can trigger warnings when the "sys.stdout/err" objects leak between tests.
"after we are done, the wrapper can then redirect the calls to the original"
I meant something slightly different: after we are done, the wrapper can live on in potentially copied objects and redirect calls, while the new sys.stdout will hold the original stream.
On Thu, Jan 26, 2023, Bruno Oliveira wrote:
"It actually has an obvious and clear solution: wrap sys.stdout so it would redirect output to the original stream after the end of the session, and do its magic before that."
That's actually a good idea, I hadn't thought of it. Has this been mentioned in this thread before? I missed it.
Perhaps we could indeed install a "wrapper", which would hold a pointer to the original sys.stdout and redirect all calls to a separate capture stream... after we are done, the wrapper can then redirect the calls to the original sys.stdout.
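A minimal sketch of what such a wrapper could look like. All names here (StdoutWrapper, start_capture, stop_capture) are illustrative, not pytest internals; the point is that a leaked reference keeps working after the session because writes fall through to the original stream instead of a closed file:

```python
import io
import sys

class StdoutWrapper:
    """Hypothetical wrapper installed once in sys.stdout for the whole session."""

    def __init__(self, original):
        self._original = original  # the real stream, kept for after the session
        self._capture = None       # active capture buffer, if any

    def start_capture(self):
        self._capture = io.StringIO()

    def stop_capture(self):
        buf, self._capture = self._capture, None
        return buf.getvalue()

    def write(self, s):
        # While capturing, writes go to the buffer; afterwards (or between
        # captures) they fall through to the original stream, so stale
        # references held by logging handlers never hit a closed file.
        target = self._capture if self._capture is not None else self._original
        return target.write(s)

    def flush(self):
        (self._capture or self._original).flush()

wrapper = StdoutWrapper(sys.stdout)
wrapper.start_capture()
wrapper.write("captured\n")
captured = wrapper.stop_capture()   # "captured\n"
wrapper.write("goes to the real stdout\n")
```

This is only a sketch of the idea being discussed; a real implementation would also need to proxy the rest of the TextIO interface (`buffer`, `encoding`, `isatty`, etc.).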
+1. This issue breaks flask_matomo as well.
As Linus Torvalds would say, "never break userspace", and as users, we expect that our tools will play nicely together. If some logging setup is easier to do, people will do it; it's inevitable.
Reading this thread, there seems to be a lot of "not my problem" and other kinds of judgment. The reality of the situation is that there IS a problem, and we can fix it in one place (pytest) or we can fix it in a thousand other places. It seems to me that it would be beneficial to at least acknowledge the problem, acknowledge that pytest has the power to fix it, and that doing so might be the most fruitful way of moving forward together. Otherwise, if we try to fix it elsewhere, there will always be someone who makes a package and "hasn't gotten the logging memo", and the problem will persist forever.
It seems like our rational options are either to fix it in pytest or to fix it in a thousand downstream packages.
To quote Gary Goldstein: "All you can change is yourself, but sometimes that changes everything!"
Thanks @andyDoucette for the clear remark. I agree we should strive to arrive at a solution.
@nicoddemus as part of fd capture we at least have to destroy the textio, the buffer, and the file descriptor of stdout/err, so the file object being held would be broken to begin with.
I see what you are saying, but this issue is in the context of sys capture (due to logging), so I don't think we need to mess with fd capture?
As a starting point, I propose that pytest adds its own logging setup that installs a handler well before importing user code, thus preventing the problem with logging's basic setup.
That would be ideal; however, it is hard to convince/educate all users to do that.
I still like the "wrapper" idea, seems like it would solve the original issue, no?
To be clear, I mean something like this: install wrapper objects in sys.stdout and sys.stderr, which capture can control to start/stop capturing. This lets users set up logging as they are used to; the only downside I see is that the "wrapper" object lives forever, but it should be a small class and I do not see much of a problem with it.
Again, this would only affect sys capture.
Do other people envision other problems with this approach?
def clear_loggers():
    """Remove handlers from all loggers"""
    import logging
    loggers = [logging.getLogger()] + list(logging.Logger.manager.loggerDict.values())
    for logger in loggers:
        handlers = getattr(logger, 'handlers', [])
        for handler in handlers:
            logger.removeHandler(handler)
This worked for me. I never even set up logging on import; I use it inside daemon threads that might close unexpectedly. The logging errors were preventing me from seeing the test report with nox. It shouldn't be captured.
This function with a teardown setup saved me, thanks a ton!
Just to understand, where am I supposed to place and call this function? I'm encountering the exact same problem as everyone else.
I am seeing the same issue. I am guessing to call this function in an atexit callback?
I tried adding it to my cleanup file and calling it at the end of my atexit callback, but it didn't work for me. My log file has the correct messages, but at the end I get a ValueError. I have a simple setup where I create objects in one file, like test_object.py, and another file has an atexit handler where I delete the objects.
I don't think my solution is good, it was rushed to deliver a PoC, but what I did was to create a pytest fixture with that code; let's call this fixture clear_log.
After that, I just used the usefixtures decorator on the problematic functions, like:
@pytest.mark.usefixtures("clear_log")
def test_something():
    ...
Not very neat, but solved the problem for me.
@Renatochaz thanks man
When I add a second atexit handler, I see that the same problem arises. If I use the clear_loggers method in the second atexit handler, then nothing gets logged, but I don't see errors; if I don't add it, then I do see errors from the logger. Any help would be appreciated.
I'm still seeing this in Python 3.10.11. I think this should be fixed in lib/python3.10/logging/__init__.py by catching the ValueError.
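As a stopgap for the console noise rather than a real fix: the stdlib already lets you silence these messages, because `Handler.handleError()` only prints the "--- Logging error ---" traceback while `logging.raiseExceptions` is true. A small demonstration (the logger name here is just for illustration):

```python
import io
import logging

# Handler.handleError() prints "--- Logging error ---" only when
# logging.raiseExceptions is true; disabling it hides the noise from writes
# to a closed capture stream, at the cost of hiding all handler errors.
logging.raiseExceptions = False

stream = io.StringIO()
handler = logging.StreamHandler(stream)
logger = logging.getLogger("closed_stream_demo")
logger.addHandler(handler)

stream.close()         # simulate pytest closing its capture stream
logger.error("hello")  # emit() raises ValueError internally, but nothing is printed
```

Note this is process-wide and hides genuine handler bugs too, so it is a workaround, not a fix.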
The following fixture, based on the previously mentioned idea of handler cleanup, works for my async app to avoid closed-resource errors after the tests have finished.
Be careful about fixture ordering if you have other session teardowns that use logging.
import logging
import pytest

@pytest.fixture(scope='session', autouse=True)
def cleanup_logging_handlers():
    try:
        yield
    finally:
        for handler in logging.root.handlers[:]:
            if isinstance(handler, logging.StreamHandler):
                logging.root.removeHandler(handler)
I spent a while chasing this very issue down while testing a module that uses multiprocessing.log_to_stderr, so I'm putting the details here in case it helps someone else.
import logging
import multiprocessing

multiprocessing.log_to_stderr(logging.INFO)

def test():
    assert True
In Python 3.11 the error is still logged, and it is cryptic, unwanted console noise:
pytest mp_log_err.py
--- Logging error ---
Traceback (most recent call last):
File "/home/calum/.pyenv/versions/3.11.5/lib/python3.11/logging/__init__.py", line 1114, in emit
stream.write(msg + self.terminator)
ValueError: I/O operation on closed file.
Call stack:
File "/home/calum/.pyenv/versions/3.11.5/lib/python3.11/multiprocessing/util.py", line 332, in _exit_function
info('process shutting down')
File "/home/calum/.pyenv/versions/3.11.5/lib/python3.11/multiprocessing/util.py", line 54, in info
_logger.log(INFO, msg, *args)
Message: 'process shutting down'
Arguments: ()
I implemented a modified version of the above fixture, but specifically for multiprocessing:
import logging
import pytest

@pytest.fixture(scope="session", autouse=True)
def cleanup_multiprocessing_logging_handlers():
    """Cleanup multiprocessing logging handlers."""
    try:
        yield
    finally:
        mp_logger = logging.root.manager.loggerDict.get("multiprocessing")
        if mp_logger:
            # Iterate over a copy: removeHandler mutates the handlers list.
            for handler in mp_logger.handlers[:]:
                if isinstance(handler, logging.StreamHandler):
                    mp_logger.addHandler(logging.NullHandler())
                    mp_logger.removeHandler(handler)
Good morning,
I am seeing the same issue as described in https://github.com/pytest-dev/pytest/issues/14, but with a much more modern combination of (anaconda) Python and pytest.
In the relevant .ini file I have this:
where -n 4 means use 4 parallel test runners with pytest-xdist.
Edit: I was able to isolate the behavior to runs using parallel workers with xdist, so it is likely an issue with an xdist worker prematurely closing a logger stream.
Basically, when I run one particular test file, I see a large number of repeated error messages in the pytest output:
The issue appears to be related to pytest prematurely closing the stream associated with a given logger for the underlying code module in question. I can't really post all the code since it is an extended example from my work, but I can confirm there is nothing exotic or unusual happening with this logger. The module being tested just uses the normal convention to define its logger,
and there are no duplicate loggers or conflicts with this logger's name. The logger itself is not defined in any multiprocessing context or anything like that; it's just a boring import of a top-level module from our code base.
But during test execution, something weird is happening with pytest such that the eventual calls into that logger produce these errors.
If I turn off pytest capturing with --capture=no, then the messages go away, but unfortunately so does a bunch of other necessary output that I want pytest to capture and display.
How can I debug this further? I'm sorry that I cannot provide a minimal working example, but I can definitely confirm that there is nothing weird going on with these tests. It's a very straightforward use of logging and a very straightforward test file with simple imports and function calls.
Could you post the output of pip list from the virtual environment you are using?