pytest-dev / pytest-cov

Coverage plugin for pytest.
MIT License

pytest-cov and xdist not recording worker lines as covered #384

Open · gsteinb opened this issue 4 years ago

gsteinb commented 4 years ago

Hi there.

I am running into an interesting issue when using pytest-cov together with the xdist plugin. I have written a pytest_sessionfinish hook in which each worker writes some files to an output directory; my pytest_terminal_summary hook then reads those files and prints a summary to the screen.
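
Roughly, the reading side of that pair looks like this (a simplified sketch; the option name and file layout here are stand-ins, not my actual code):

```python
# Simplified sketch of the reading side (hypothetical option/file names):
# workers copy their files into a shared output directory during
# pytest_sessionfinish, and the controller prints them back here.
from pathlib import Path

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    output_dir = Path(config.option.output_dir)  # hypothetical option
    for report_file in sorted(output_dir.glob("*.txt")):
        terminalreporter.write_line(report_file.read_text())
```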

However, I also have a pytest test that exercises my plugin under xdist: `dist_result = testdir.runpytest('-n', '2', PYTEST_OPT_IS_MERGE)`
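
For reference, a minimal sketch of such a test (simplified; the real value of `PYTEST_OPT_IS_MERGE` comes from my plugin, the one below is just a stand-in):

```python
# Simplified sketch of the xdist test (assertion details omitted).
pytest_plugins = "pytester"  # enables the testdir fixture

PYTEST_OPT_IS_MERGE = "--is-merge"  # hypothetical stand-in value

def test_merge_under_xdist(testdir):
    testdir.makepyfile("def test_dummy():\n    assert True\n")
    # Run the inner pytest session with two xdist workers.
    dist_result = testdir.runpytest('-n', '2', PYTEST_OPT_IS_MERGE)
    dist_result.assert_outcomes(passed=1)
```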

When I run this test in my local environment

    Distributor ID: Ubuntu
    Description:    Ubuntu 18.04.3 LTS
    Release:        18.04
    Codename:       bionic

I have no problems. We also have a CodeBuild setup on AWS that runs our tests for each build. When the suite runs in the Docker container on AWS, coverage fails specifically for the lines that execute on the workers.

The Docker container is currently running CentOS 7.

These are the lines of code that are not reported as covered, starting from the `if _is_worker(...)` check:

    def pytest_sessionfinish(self, session):
        # On each xdist worker, copy the merged coverage output into the
        # master's output directory under a per-worker subdirectory.
        if _is_worker(session.config):
            master_output = Path(
                session.config.workerinput[PYTEST_MASTER_OUTPUT_DIR])
            shutil.copytree(
                self.output_dir / constants.MERGED_COVERAGE_DEFAULT_OUT_PATH,
                master_output / PYTEST_COVERAGE_DIR_NAME /
                (PYTEST_COVERAGE_DIR_NAME +
                 f"_{session.config.workerinput['workerid']}"))
            # Clean up the worker-local output directory afterwards.
            if self.output_dir.exists():
                shutil.rmtree(self.output_dir)
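
For what it's worth, I also looked at coverage.py's documented mechanism for measuring subprocesses, in case that's relevant. A minimal sketch of that hook (pytest-cov normally sets up an equivalent hook automatically; this is only for manual debugging):

```python
# sitecustomize.py -- minimal sketch of coverage.py's documented
# subprocess-measurement hook (pytest-cov normally installs an
# equivalent hook for you).
import coverage

# Starts coverage in this interpreter only if the COVERAGE_PROCESS_START
# environment variable points at a coverage config file.
coverage.process_startup()
```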

Is there any reason that there would be a discrepancy? Thanks in advance.

~ Gerrit

ionelmc commented 4 years ago

I would need to know the Python and package versions at least. See https://github.com/pytest-dev/pytest-cov/tree/master/.github/ISSUE_TEMPLATE

gsteinb commented 4 years ago

Ah, sorry about that.


```
test develop-inst-noop: /mnt/scratch/gerrit/mvls-cadence-xrun
test installed: apipkg==1.5,astroid==2.3.3,atomicwrites==1.3.0,attrs==19.3.0,coverage==4.5.4,execnet==1.7.1,importlib-metadata==1.4.0,isort==4.3.21,lazy-object-proxy==1.4.3,mccabe==0.6.1,more-itertools==8.1.0,-e git+git@bitbucket.org:movellus/mvls-cadence-xrun.git@4ac797d6c966032d46b204a98817dd9e0305f0b0#egg=mvls_cadence_xrun,packaging==20.0,pluggy==0.13.1,py==1.8.1,pylint==2.4.4,pyparsing==2.4.6,pytest==4.6.9,pytest-cov==2.8.1,pytest-forked==1.1.3,pytest-xdist==1.31.0,six==1.13.0,typed-ast==1.4.1,wcwidth==0.1.8,wrapt==1.11.2,yapf==0.29.0,zipp==1.0.0
test runtests: commands[0] | pytest -x --cov /mnt/scratch/gerrit/mvls-cadence-xrun/src --no-cov-on-fail --cov-report= --collect-only -qq
tests/test_mvls_cadence_xrun/test_coverage.py: 24
tests/test_mvls_cadence_xrun/test_hdl_exec_simulator.py: 6
tests/test_pytest_mvls_cadence_xrun/test_hooks.py: 5
tests/test_pytest_mvls_cadence_xrun/test_options.py: 3
Coverage.py warning: No data was collected. (no-data-collected)

test runtests: commands[1] | pytest -x --cov /mnt/scratch/gerrit/mvls-cadence-xrun/src --no-cov-on-fail --cov-report term-missing --cov-fail-under 99
======================================== test session starts =========================================
platform linux -- Python 3.6.8, pytest-4.6.9, py-1.8.1, pluggy-0.13.1
cachedir: .tox/test/.pytest_cache
rootdir: /mnt/scratch/gerrit/mvls-cadence-xrun
plugins: forked-1.1.3, cov-2.8.1, xdist-1.31.0, mvls-cadence-xrun-0.2.2.dev13+g4ac797d.d20200115
collected 38 items
```

This tox output should contain the relevant information (Python version, package versions, etc.).

gsteinb commented 4 years ago

@ionelmc Is this still something you can look at for me?

ionelmc commented 4 years ago

@gsteinb hey, so I've looked more at that strange snippet you posted. I'm not sure what's going on, but without a reproducer all I can do is waste time on guesses. It looks like you have a complex setup, with paths coming in from configuration and pytest running inside pytest; lots of things can and will go wrong with that sort of setup.