Hikari9 opened 5 years ago
@ionelmc this is a more comprehensive version of the issue you asked me to move earlier. Please check it out when you have the time.
Tinkering with this issue, I was able to come up with the following workarounds. I don't know why they work when `python setup.py test` doesn't, but I don't want to just leave the issue be.

Workaround 1: run pytest directly

Running `pytest` directly (instead of `python setup.py test`) works fine, as expected. Of course, the test requirements have to be `pip install`ed manually first for this to work.

Workaround 2: pip install before setup.py test

Since Workaround 1 worked, I observed that when I run `pip install` first, BEFORE running `python setup.py test`, it also works. These are the steps I did:

1. `pip install` the test packages beforehand:

```shell
pip install coverage pytest pytest-cov
```
2. Show the versions with `pip freeze`:

```
$> pip freeze
atomicwrites==1.3.0
attrs==19.1.0
coverage==4.5.3
more-itertools==6.0.0
pluggy==0.9.0
py==1.8.0
pytest==4.3.1
pytest-cov==2.6.1
six==1.12.0
```
3. Run `python setup.py test`:

```
$> python setup.py test
running pytest
running egg_info
writing sample.egg-info/PKG-INFO
writing dependency_links to sample.egg-info/dependency_links.txt
writing top-level names to sample.egg-info/top_level.txt
reading manifest file 'sample.egg-info/SOURCES.txt'
writing manifest file 'sample.egg-info/SOURCES.txt'
running build_ext
============================================ test session starts ============================================
platform linux -- Python 3.7.1, pytest-4.3.1, py-1.8.0, pluggy-0.9.0 -- /home/Hikari9/pytest-test/env/bin/python
cachedir: .pytest_cache
rootdir: /home/Hikari9/pytest-test, inifile: setup.cfg
plugins: cov-2.6.1
collected 1 item

tests/test_sample.py::test_program_main PASSED
Coverage.py warning: No data was collected. (no-data-collected)

----------- coverage: platform linux, python 3.7.1-final-0 -----------
Name                 Stmts   Miss  Cover   Missing
--------------------------------------------------
sample/__init__.py       0      0   100%
sample/program.py        5      0   100%
--------------------------------------------------
TOTAL                    5      0   100%

========================================= 1 passed in 4.16 seconds ==========================================
```
Notice how the coverage this time around is correct at 100%.
Workaround 3: bypass pip install entirely

I also found a way to bypass `pip install`. The idea is to run coverage in parallel mode and run the subprocess with `python -m coverage run -m <module>`. To achieve this, the `--parallel-mode` flag is not enough; you have to add `parallel = true` to the configuration file.

1. Uninstall all packages from the virtualenv (allowing setup.py to install them later):

```shell
pip freeze | xargs pip uninstall -y
```

After this, `pip freeze` should print nothing.

2. Change the command in `tests/test_sample.py` as follows:
```python
# tests/test_sample.py
from __future__ import print_function
from __future__ import with_statement
import os
import sys
import subprocess
import pytest

def test_program_main():
    # NEW COMMAND: python -m coverage run -m sample.program
    command = [sys.executable, '-m', 'coverage', 'run', '-m', 'sample.program']
    with subprocess.Popen(command,                 # python -m coverage run -m sample.program
                          stdout=subprocess.PIPE,  # pipe stdout
                          env=os.environ.copy()    # pass the parent's environment
                          ) as process:
        # assert it compiled and ran successfully (i.e. the timeout expires)
        with pytest.raises(subprocess.TimeoutExpired):
            process.communicate(timeout=3)
        process.wait()
```
3. Confirm that coverage is still 0% when running `python setup.py test`:

```
$> python setup.py test
running pytest
Searching for coverage==4.5.3
Best match: coverage 4.5.3
Processing coverage-4.5.3-py3.7-linux-x86_64.egg
Using /home/Hikari9/pytest-test/.eggs/coverage-4.5.3-py3.7-linux-x86_64.egg
Searching for pytest-cov==2.6.1
Best match: pytest-cov 2.6.1
Processing pytest_cov-2.6.1-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/pytest_cov-2.6.1-py3.7.egg
Searching for pytest==4.3.1
Best match: pytest 4.3.1
Processing pytest-4.3.1-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/pytest-4.3.1-py3.7.egg
Searching for six>=1.10.0
Best match: six 1.12.0
Processing six-1.12.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/six-1.12.0-py3.7.egg
Searching for py>=1.5.0
Best match: py 1.8.0
Processing py-1.8.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/py-1.8.0-py3.7.egg
Searching for pluggy>=0.7
Best match: pluggy 0.9.0
Processing pluggy-0.9.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/pluggy-0.9.0-py3.7.egg
Searching for more-itertools>=4.0.0
Best match: more-itertools 6.0.0
Processing more_itertools-6.0.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/more_itertools-6.0.0-py3.7.egg
Searching for attrs>=17.4.0
Best match: attrs 19.1.0
Processing attrs-19.1.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/attrs-19.1.0-py3.7.egg
Searching for atomicwrites>=1.0
Best match: atomicwrites 1.3.0
Processing atomicwrites-1.3.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/atomicwrites-1.3.0-py3.7.egg
running egg_info
writing sample.egg-info/PKG-INFO
writing dependency_links to sample.egg-info/dependency_links.txt
writing top-level names to sample.egg-info/top_level.txt
reading manifest file 'sample.egg-info/SOURCES.txt'
writing manifest file 'sample.egg-info/SOURCES.txt'
running build_ext
============================================ test session starts ============================================
platform linux -- Python 3.7.1, pytest-4.3.1, py-1.8.0, pluggy-0.9.0 -- /home/Hikari9/pytest-test/env/bin/python
cachedir: .pytest_cache
rootdir: /home/Hikari9/pytest-test, inifile: setup.cfg
plugins: cov-2.6.1
collected 1 item

tests/test_sample.py::test_program_main PASSED
Coverage.py warning: No data was collected. (no-data-collected)

----------- coverage: platform linux, python 3.7.1-final-0 -----------
Name                 Stmts   Miss  Cover   Missing
--------------------------------------------------
sample/__init__.py       0      0   100%
sample/program.py        5      5     0%   2-7
--------------------------------------------------
TOTAL                    5      5     0%

========================================= 1 passed in 4.12 seconds ==========================================
```
4. Add `parallel = true` in `setup.cfg` under `[coverage:run]`:

```ini
[coverage:run]
parallel = true
```

For example, my `setup.cfg` now looks like:

```ini
; setup.cfg
[aliases]
test = pytest

[tool:pytest]
addopts = -vs --cov
norecursedirs =
    */env/*
    */.eggs/*

[coverage:run]
parallel = true
source = sample

[coverage:report]
show_missing = True
```
5. Confirm that coverage is now 100% with this change:

```
$> python setup.py test
running pytest
Searching for coverage==4.5.3
Best match: coverage 4.5.3
Processing coverage-4.5.3-py3.7-linux-x86_64.egg
Using /home/Hikari9/pytest-test/.eggs/coverage-4.5.3-py3.7-linux-x86_64.egg
Searching for pytest-cov==2.6.1
Best match: pytest-cov 2.6.1
Processing pytest_cov-2.6.1-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/pytest_cov-2.6.1-py3.7.egg
Searching for pytest==4.3.1
Best match: pytest 4.3.1
Processing pytest-4.3.1-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/pytest-4.3.1-py3.7.egg
Searching for six>=1.10.0
Best match: six 1.12.0
Processing six-1.12.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/six-1.12.0-py3.7.egg
Searching for py>=1.5.0
Best match: py 1.8.0
Processing py-1.8.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/py-1.8.0-py3.7.egg
Searching for pluggy>=0.7
Best match: pluggy 0.9.0
Processing pluggy-0.9.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/pluggy-0.9.0-py3.7.egg
Searching for more-itertools>=4.0.0
Best match: more-itertools 6.0.0
Processing more_itertools-6.0.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/more_itertools-6.0.0-py3.7.egg
Searching for attrs>=17.4.0
Best match: attrs 19.1.0
Processing attrs-19.1.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/attrs-19.1.0-py3.7.egg
Searching for atomicwrites>=1.0
Best match: atomicwrites 1.3.0
Processing atomicwrites-1.3.0-py3.7.egg
Using /home/Hikari9/pytest-test/.eggs/atomicwrites-1.3.0-py3.7.egg
running egg_info
writing sample.egg-info/PKG-INFO
writing dependency_links to sample.egg-info/dependency_links.txt
writing top-level names to sample.egg-info/top_level.txt
reading manifest file 'sample.egg-info/SOURCES.txt'
writing manifest file 'sample.egg-info/SOURCES.txt'
running build_ext
============================================ test session starts ============================================
platform linux -- Python 3.7.1, pytest-4.3.1, py-1.8.0, pluggy-0.9.0 -- /home/Hikari9/pytest-test/env/bin/python
cachedir: .pytest_cache
rootdir: /home/Hikari9/pytest-test, inifile: setup.cfg
plugins: cov-2.6.1
collected 1 item

tests/test_sample.py::test_program_main PASSED
Coverage.py warning: No data was collected. (no-data-collected)

----------- coverage: platform linux, python 3.7.1-final-0 -----------
Name                 Stmts   Miss  Cover   Missing
--------------------------------------------------
sample/__init__.py       0      0   100%
sample/program.py        5      0   100%
--------------------------------------------------
TOTAL                    5      0   100%

========================================= 1 passed in 4.15 seconds ==========================================
```
Is this issue unsolvable by `pytest-cov`, and only solvable by `setuptools` or maybe `coverage`?

Oooof ... so there are two issues here:

Can you avoid using `setup.py test` and just use virtualenv/tox/pipenv or whatever?
Yeah, that's my workaround for now (running pytest directly). As an additional comment, I don't think signals are used in `subprocess.Popen.communicate(timeout)` (see the quoted reply in the issue description).

For the general solution, I'm betting that Workaround 3 has promise, i.e. setting `parallel = true`, as it works with eggs even without setting the path (it does force `coverage run` inside the subprocess, though).

The Coverage.py docs say you can inject `import coverage; coverage.process_startup()` by adding it to `sitecustomize.py`. I'm not a packaging guru, but maybe you could mimic that for eggs instead of the .pth approach, then set `parallel = true` as a default? Or eagerly load such a script when `pytest-cov` is active?
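As a generic illustration of the mechanism the Coverage.py docs rely on (this is a sketch, not pytest-cov's actual implementation): any `sitecustomize.py` found on a child interpreter's `PYTHONPATH` is imported automatically at startup, before the child's own code runs, which is what makes it a viable injection point for `coverage.process_startup()`:

```python
# Demonstrates the sitecustomize auto-import hook (generic demo, no coverage
# involved): a sitecustomize.py on the child's PYTHONPATH runs at startup.
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # In coverage's scheme this file would call coverage.process_startup();
    # here it just prints a marker so we can see that it ran.
    with open(os.path.join(tmp, "sitecustomize.py"), "w") as f:
        f.write("print('sitecustomize ran')\n")

    env = dict(os.environ, PYTHONPATH=tmp)
    out = subprocess.check_output(
        [sys.executable, "-c", "print('child ran')"],
        env=env, text=True,
    )

print(out.splitlines())
```

The marker line prints before the child's own output, confirming the hook fires early enough to start coverage measurement for the whole process.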
You can use `sitecustomize.py` in lieu of the nonfunctional pytest-cov.pth, but then how would you actually deploy it? You'd have to change the user's home directory from `setup.py` - and that is quite wrong and dangerous.

I'm afraid there's no other way to put this: `setup.py test` was never a sensible way to run tests, and eggs are deprecated now anyway. You could keep `setup.py test`, override the test command and make it use a virtualenv instead of eggs, but that seems like a waste of time.
I'm experiencing a similar problem (coverage metrics not collected from a subprocess terminated by timeout) even when running `pytest --cov=` or `coverage run -m pytest`.

It can be reproduced with this `example.py` (note the sleep for 10 seconds):
```python
import time
import sys

if __name__ == '__main__':
    print('hello')
    sys.stdout.flush()
    time.sleep(10)
```
And with this `test_example.py` (note the timeout after 2 seconds):
```python
import subprocess
import sys

def test_example_main():
    try:
        res = subprocess.run([sys.executable, 'example.py'],
                             capture_output=True, timeout=2)
    except subprocess.TimeoutExpired as e:
        assert 'hello' in e.stdout.decode()
    else:
        assert 'hello' in res.stdout.decode()
```
pytest gives 0% coverage:
```
$ pytest --cov=. test_example.py
========================= test session starts ==========================
platform linux -- Python 3.8.4, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/user
plugins: cov-2.10.0
collected 1 item

test_example.py .                                                [100%]

----------- coverage: platform linux, python 3.8.4-final-0 -----------
Name              Stmts   Miss  Cover
-------------------------------------
example.py            6      6     0%
test_example.py       8      1    88%
-------------------------------------
TOTAL                14      7    50%

========================== 1 passed in 2.06s ===========================
```
Without a timeout (e.g., with `sleep(1)` inside `example.py`), coverage is 100%.

I understand that this is because the subprocess does not run the `atexit` handler which writes the coverage data. In fact, by modifying `example.py` as follows I get 88% coverage, with or without a timeout (not 100% because the coverage data is written before `time.sleep`, which is therefore not recorded as covered):
```python
import time
import sys
import atexit

if __name__ == '__main__':
    print('hello')
    sys.stdout.flush()
    atexit._run_exitfuncs()
    time.sleep(10)
```
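The underlying `atexit` behavior can be demonstrated without coverage at all (a self-contained sketch; the marker-file scheme is purely for illustration): a handler registered with `atexit.register` runs on normal interpreter exit, but not when the process is killed before it can exit cleanly:

```python
import atexit
import os
import subprocess
import sys
import tempfile

# Child: registers an atexit handler that writes a marker file, then sleeps.
CHILD = r'''
import atexit, sys, time
atexit.register(lambda: open(sys.argv[1], "w").write("done"))
print("ready", flush=True)
time.sleep(30)
'''

with tempfile.TemporaryDirectory() as tmp:
    marker = os.path.join(tmp, "marker.txt")
    p = subprocess.Popen([sys.executable, "-c", CHILD, marker],
                         stdout=subprocess.PIPE)
    p.stdout.readline()   # wait until the child is up and running
    p.kill()              # hard kill: atexit handlers do NOT run
    p.wait()
    p.stdout.close()
    marker_written = os.path.exists(marker)

print("marker written after kill:", marker_written)
```

This is exactly what happens to coverage's data-writing hook when a subprocess is killed after a timeout.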
As a workaround, I could send a signal to the process after the timeout to trigger the write of the coverage data (I need `Popen.communicate` for this, because `subprocess.run` terminates the process); i.e., with this `example.py`:
```python
import time
import sys
import atexit
import signal

def on_alarm(signum, frame):
    atexit._run_exitfuncs()

if __name__ == '__main__':
    signal.signal(signal.SIGALRM, on_alarm)
    print('hello')
    sys.stdout.flush()
    time.sleep(10)
```
And this `test_example.py`:
```python
import signal
import subprocess
import sys

def test_example_main():
    p = subprocess.Popen([sys.executable, 'example.py'],
                         stdout=subprocess.PIPE)
    try:
        out, err = p.communicate(timeout=2)
    except subprocess.TimeoutExpired as e:
        p.send_signal(signal.SIGALRM)
        assert 'hello' in e.stdout.decode()
    else:
        assert 'hello' in out.decode()
```
I get 100% coverage after a timeout:
```
$ pytest --cov=. --cov-report=term test_example.py -s
========================= test session starts ==========================
platform linux -- Python 3.8.4, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/user
plugins: cov-2.10.0
collected 1 item

test_example.py .

----------- coverage: platform linux, python 3.8.4-final-0 -----------
Name              Stmts   Miss  Cover
-------------------------------------
example.py           11      0   100%
test_example.py      11      1    91%
-------------------------------------
TOTAL                22      1    95%

========================== 1 passed in 2.05s ===========================
```
But this requires changes to the code under test (and `SIGALRM` could have other uses in the program). Is there any cleaner/better alternative?
Use SIGTERM or SIGINT - those are normally used to stop a running process.
Thank you; `SIGTERM` is definitely better than `SIGALRM` for this.

But I was also looking for a way to avoid modifying the program under test; given that the subprocess is a Python process, one way could be registering the signal handler inside `sitecustomize.py`. For example, this works:
- `example.py`

  ```python
  import time
  import sys

  if __name__ == '__main__':
      print('hello')
      sys.stdout.flush()
      time.sleep(10)
  ```

- `test_example.py`

  ```python
  import signal
  import subprocess
  import sys

  def test_example_main():
      p = subprocess.Popen([sys.executable, 'example.py'],
                           stdout=subprocess.PIPE)
      try:
          out, err = p.communicate(timeout=2)
      except subprocess.TimeoutExpired as e:
          p.send_signal(signal.SIGTERM)
          assert 'hello' in e.stdout.decode()
      else:
          assert 'hello' in out.decode()
  ```

- `sitecustomize.py`

  ```python
  import coverage
  import atexit
  import signal

  def callback(signum, frame):
      atexit._run_exitfuncs()

  signal.signal(signal.SIGTERM, callback)
  coverage.process_startup()
  ```

- coverage output

  ```
  $ PYTHONPATH=. pytest --cov=. --cov-report=term test_example.py -s
  ========================= test session starts ==========================
  platform linux -- Python 3.8.4, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
  rootdir: /home/user
  plugins: cov-2.10.0
  collected 1 item

  test_example.py .

  TOTAL                26      1    96%
  ========================== 1 passed in 2.06s ===========================
  ```
(The `PYTHONPATH=.` is necessary on my system because there is a system-wide `sitecustomize.py` which would get imported otherwise.)
It's unfortunate that [`subprocess.run` sends only `SIGKILL` to the process](https://github.com/python/cpython/blob/master/Lib/subprocess.py#L505) (which cannot be caught) after `communicate` raises a `TimeoutExpired` exception. If it sent `SIGTERM` first (a best practice), `test_example.py` could avoid messing with signals.
I believe you could implement your own subprocess.run that handles timeouts better in your test suite. That would not add testing concerns in the delivered code, just make the test suite a little bit uglier - I think that's acceptable (and it looks like a matter of 1-2 extra lines of code - proc.communicate+exception handling).
For the cleanup routine I would use `pytest_cov.embed.cleanup_on_sigterm` instead of implementing a custom routine that messes with atexit. See https://pytest-cov.readthedocs.io/en/latest/subprocess-support.html#if-you-got-custom-signal-handling
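For reference, the incantation from the pytest-cov subprocess-support docs looks like this in the program under test (guarded so that production runs without pytest-cov installed still work):

```python
# Install pytest-cov's own SIGTERM cleanup handler if the package is available.
try:
    from pytest_cov.embed import cleanup_on_sigterm
except ImportError:
    pass  # pytest-cov not installed (e.g. in production): nothing to do
else:
    cleanup_on_sigterm()
```

The import guard is the recommended pattern, since the shipped code should not hard-depend on a test-only package.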
Thanks! `pytest_cov.embed.cleanup_on_sigterm` seems much better: it saves the coverage data and unregisters pytest-cov's atexit handler, and it also takes care of running any already-registered handlers for `SIGTERM`.
Originally posted by @Hikari9 in https://github.com/ionelmc/cookiecutter-pylibrary/issues/13#issuecomment-472733094
Summary

Coverage is not collected in `subprocess.Popen` when a timeout is raised by `communicate(timeout=...)`. This only occurs in tests that are run with `python setup.py test` (running `pytest` works fine).

Versions

Problem Details

Use case: I want to test `if __name__ == '__main__':` sections in my program.

Context: I run tests with `python setup.py test`. All my test requirements are in the `tests_require` argument (not pip installed). Essentially, my `setup.py` file looks like this:

These are the contents of my config file (`setup.cfg`):

I want to test the following program with `__name__ == '__main__'`, which MAY RUN INDEFINITELY. A similar real-world program would be some server listener (Flask, RabbitMQ, etc.). The following is a simple example (a main program that runs for 4 seconds):
I want to test with the CLI command `python -m <module>` (with coverage). I don't want to move the logic into a `def main()` function and call it from the test, or use `imp.load_module`. I just want to test that the actual, pure CLI method of running `python -m sample.program` works without exceptions, without sacrificing coverage.

The way I tested this was to use `subprocess.Popen` to run `python -m sample.program`, like the following:

The test opens a subprocess running `python -m <module>`, effectively entering the `if __name__ == '__main__':` block. Since the process times out at 3 seconds (before the 4 seconds are up), the test should pass. And according to the subprocess docs,

Therefore, there should be no issue with `SIGTERM` or some other terminate signal, which according to the Coverage.py docs can cause issues in writing coverage files - or so I thought...

Ideally, this should work with `python setup.py test`.

Expected Result: 100% code coverage.
HOWEVER (the real world is harsh :sob: :sob: :sob: :sob:)

Actual Result: the total coverage is effectively 0% even though the tests PASSED (see the additional log below)...

Additional Logs

Full log of `python setup.py test`:

Notice that total coverage is 0% even though it should be 100%. Also, this suspicious line says a lot, I think?
How to reproduce the bug
Create a project with the following directory structure:
See file contents from Problem Details section.
Create and activate a virtual environment (Python 3.7.1)
Make sure the environment initially has no packages installed
Run tests
Some workarounds
See comment https://github.com/pytest-dev/pytest-cov/issues/276#issuecomment-473171284 below.
Related Issues