jaraco / pytest-perf

MIT License

0.12.0: pytest is failing with errors because it needs git tree? #6

Open kloczek opened 2 years ago

kloczek commented 2 years ago

I'm trying to package your module as an rpm package, so I'm using the typical PEP 517 based build, install, and test cycle used for building packages from a non-root account.

It looks like the pytest-perf test suite is glued to a git working tree (as input I'm using the source tarball autogenerated by GitHub from the git tag). Here is the pytest output:

+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-perf-0.12.0-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-perf-0.12.0-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.13, pytest-7.1.1, pluggy-1.0.0
rootdir: /home/tkloczko/rpmbuild/BUILD/pytest-perf-0.12.0, configfile: pytest.ini
plugins: perf-0.12.0, forked-1.4.0, xdist-2.5.0
collected 2 items

pytest_perf/runner.py FF                                                                                                                                             [100%]

================================================================================= FAILURES =================================================================================
_____________________________________________________________________ [doctest] runner.BenchmarkRunner _____________________________________________________________________
071
072     >>> br = BenchmarkRunner()
UNEXPECTED EXCEPTION: CalledProcessError(128, ['git', 'remote', 'get-url', 'origin'])
Traceback (most recent call last):
  File "/usr/lib64/python3.8/doctest.py", line 1336, in __run
    exec(compile(example.source, filename, "single",
  File "<doctest runner.BenchmarkRunner[0]>", line 1, in <module>
  File "/home/tkloczko/rpmbuild/BUILD/pytest-perf-0.12.0/pytest_perf/runner.py", line 80, in __init__
    self.control_env = self._setup_env(upstream_url(spec, control), *deps)
  File "/home/tkloczko/rpmbuild/BUILD/pytest-perf-0.12.0/pytest_perf/runner.py", line 107, in upstream_url
    origin = subprocess.check_output(cmd, **_text).strip()
  File "/usr/lib64/python3.8/subprocess.py", line 415, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib64/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['git', 'remote', 'get-url', 'origin']' returned non-zero exit status 128.
/home/tkloczko/rpmbuild/BUILD/pytest-perf-0.12.0/pytest_perf/runner.py:72: UnexpectedException
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
fatal: not a git repository (or any parent up to mount point /home/tkloczko)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
______________________________________________________________________ [doctest] runner.upstream_url _______________________________________________________________________
100
101     >>> upstream_url()
UNEXPECTED EXCEPTION: CalledProcessError(128, ['git', 'remote', 'get-url', 'origin'])
Traceback (most recent call last):
  File "/usr/lib64/python3.8/doctest.py", line 1336, in __run
    exec(compile(example.source, filename, "single",
  File "<doctest runner.upstream_url[0]>", line 1, in <module>
  File "/home/tkloczko/rpmbuild/BUILD/pytest-perf-0.12.0/pytest_perf/runner.py", line 107, in upstream_url
    origin = subprocess.check_output(cmd, **_text).strip()
  File "/usr/lib64/python3.8/subprocess.py", line 415, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib64/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['git', 'remote', 'get-url', 'origin']' returned non-zero exit status 128.
/home/tkloczko/rpmbuild/BUILD/pytest-perf-0.12.0/pytest_perf/runner.py:101: UnexpectedException
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
fatal: not a git repository (or any parent up to mount point /home/tkloczko)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
========================================================================= short test summary info ==========================================================================
FAILED pytest_perf/runner.py::runner.BenchmarkRunner
FAILED pytest_perf/runner.py::runner.upstream_url
============================================================================ 2 failed in 0.10s =============================================================================

Is there any way to use the test suite without a git tree? 🤔 (I don't provide network access from my build systems.)

jaraco commented 2 years ago

Yes, currently this project relies on git to locate the baseline code for its performance comparisons. I'm not sure it's the best approach, and it's certainly undesirable to have that dependency, but currently that's the best it's got. I could potentially update the test suite to be lenient in such cases, but would that help, or would you just run into the same issue again downstream when a project uses this package to test its performance (and again relies on git to find that upstream)?
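For context, both doctest failures above originate in the same call: pytest-perf asks git for the origin URL so it knows what to clone as the baseline. A minimal sketch of what a more lenient lookup could look like, assuming callers can treat a missing URL as "no baseline" (illustrative only, not the project's actual upstream_url):

# Illustrative only: roughly the baseline discovery that fails above,
# plus a hypothetical lenient fallback.
import subprocess

def upstream_url_or_none():
    cmd = ['git', 'remote', 'get-url', 'origin']
    try:
        return subprocess.check_output(cmd, text=True).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        # exit status 128 ("not a git repository", as in the log above),
        # or git missing entirely; callers could treat None as "skip".
        return None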

jaraco commented 2 years ago

I also note that in #3 (and #4), we're working on changes that would allow a downstream project to specify the baseline rather than having to discover it and clone it.

jaraco commented 1 year ago

#4 has been merged and released as 0.13.0. Does passing --perf-baseline pointing to a suitable alternate (a checkout of the code) help? I suspect some of the tests still need a git clone (like control('v0.9.2')).

Correction: #4 only provided facilities for soliciting the --perf-baseline parameter, but doesn't actually do anything with it. We'd need to incorporate some of the ideas from #3 as well to honor those parameters.

mtelka commented 11 months ago

I tried to test 0.13.1 and the following six tests failed because of this issue:

FAILED exercises.py::exercises.py:check_perf_isolated - subprocess.CalledProc...
FAILED exercises.py::exercises.py:with deps and extras - subprocess.CalledPro...
FAILED exercises.py::exercises.py:diff_from_oh_nine_two_perf - subprocess.Cal...
FAILED exercises.py::exercises.py:simple test - subprocess.CalledProcessError...
FAILED pytest_perf/runner.py::runner.BenchmarkRunner
FAILED pytest_perf/runner.py::runner.upstream_url

Note: tested without --perf-baseline.

jaraco commented 11 months ago

Maybe pytest-perf should simply skip any tests if a baseline copy is unavailable. That is, in the general case, when testing a single sdist in isolation from the world, there's nothing against which to compare performance. It's not obvious to me whether pytest-perf should detect this condition implicitly or whether downstream integrators should be expected to opt out of running the relevant tests. And there's probably a third option, where downstream integrators could provide some signal that they're doing an isolated build (e.g. an env variable ISOLATED_BUILD) and libraries like pytest-perf could use that signal to infer to disable the behavior.
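A rough sketch of that third option, assuming a conftest.py hook; the ISOLATED_BUILD name comes from the comment above, while the 'perf' keyword check is an assumption, not necessarily how pytest-perf actually tags its items:

# Hypothetical conftest.py hook: skip perf comparison tests when the
# downstream integrator signals an isolated build.
import os
import pytest

def pytest_collection_modifyitems(config, items):
    if not os.environ.get('ISOLATED_BUILD'):
        return
    skip = pytest.mark.skip(reason='isolated build: no performance baseline available')
    for item in items:
        # Assumes perf comparison tests can be recognized by a 'perf' keyword.
        if 'perf' in item.keywords:
            item.add_marker(skip)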

jaraco commented 6 months ago

In the feature/no-git branch, I've added a trap for the condition where the runner can't be constructed; in that case, the tests are skipped.
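Something along these lines (a sketch of the idea only, not the actual feature/no-git code; the helper name is made up):

# Sketch: build the benchmark runner, or skip the calling test if baseline
# discovery fails because there is no git repository (exit status 128 above).
import subprocess
import pytest

def runner_or_skip(factory):
    try:
        return factory()
    except subprocess.CalledProcessError as exc:
        pytest.skip(f'no baseline available ({exc}); skipping perf comparison')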

Unfortunately, that doesn't address the unit tests in the docstrings (runner.BenchmarkRunner and runner.upstream_url). Even if the plugin supports a non-git environment, it's going to be difficult to get the other tests to run outside a git environment. And those two tests exercise the bulk of the behavior in that module; if they're skipped, there's not a whole lot left to test.

Is there really value in making sure the tests run from a source dist, even if the results are mostly meaningless? In other words, maybe downstream packagers should just make a note not to run the tests for this package.

mtelka commented 6 months ago

Since a few key projects need pytest-perf as a test dependency (namely setuptools and importlib-metadata), it would be great to get testing of pytest-perf working somehow. Even if it tests almost nothing meaningful, the test suite should pass.

The other option would be to patch the pytest-perf dependency out of the affected projects and not bother with packaging pytest-perf at all. But that adds an additional maintenance burden for the affected projects. Unfortunately, there is no support in the Python world (AFAIK) for expressing optional (or recommended) test dependencies, something like "suggests" in the Perl world.