what-studio / profiling

Was an interactive continuous Python profiler.
BSD 3-Clause "New" or "Revised" License

setup.py test fails with newest pip #37

Open tony opened 8 years ago

tony commented 8 years ago

e0961da

OS X El Capitan

❯ python --version
Python 2.7.10

~/work/python/profiling .venv master*
❯ pip --version
pip 8.1.2 from /Users/me/work/python/profiling/.venv/lib/python2.7/site-packages (python 2.7)
~/work/python/profiling master*
❯ virtualenv .venv
New python executable in /Users/me/work/python/profiling/.venv/bin/python
Installing setuptools, pip, wheel...done.

~/work/python/profiling master*
❯ . .venv/bin/activate

~/work/python/profiling .venv master*
❯ pip install -e .
❯ python setup.py test
running test
Searching for yappi>=0.92
Best match: yappi 0.94
Processing yappi-0.94-py2.7-macosx-10.11-intel.egg

Using /Users/me/work/python/profiling/.eggs/yappi-0.94-py2.7-macosx-10.11-intel.egg
Searching for greenlet>=0.4.4
Reading https://pypi.python.org/simple/greenlet/
Best match: greenlet 0.4.9
Downloading https://pypi.python.org/packages/ba/19/7ae57aa8b66f918859206532b1afd7f876582e3c87434ff33261da1cf50c/greenlet-0.4.9.tar.gz#md5=00bb1822d8511cc85f052e89d1fd919b
Processing greenlet-0.4.9.tar.gz
Writing /var/folders/kx/gmc5nl8x2hg370c3nktpc7vm0000gn/T/easy_install-lVzsh8/greenlet-0.4.9/setup.cfg
Running greenlet-0.4.9/setup.py -q bdist_egg --dist-dir /var/folders/kx/gmc5nl8x2hg370c3nktpc7vm0000gn/T/easy_install-lVzsh8/greenlet-0.4.9/egg-dist-tmp-gCLetw
creating /Users/me/work/python/profiling/.eggs/greenlet-0.4.9-py2.7-macosx-10.11-intel.egg
Extracting greenlet-0.4.9-py2.7-macosx-10.11-intel.egg to /Users/me/work/python/profiling/.eggs

Installed /Users/me/work/python/profiling/.eggs/greenlet-0.4.9-py2.7-macosx-10.11-intel.egg
Searching for gevent>=1.1rc2
Best match: gevent 1.1.1
Processing gevent-1.1.1-py2.7-macosx-10.11-intel.egg

Using /Users/me/work/python/profiling/.eggs/gevent-1.1.1-py2.7-macosx-10.11-intel.egg
Traceback (most recent call last):
  File "setup.py", line 135, in <module>
    test_suite='...',
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 151, in setup
    dist.run_commands()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/Users/me/work/python/profiling/.venv/lib/python2.7/site-packages/setuptools/command/test.py", line 152, in run
    self.distribution.fetch_build_eggs(self.distribution.tests_require)
  File "/Users/me/work/python/profiling/.venv/lib/python2.7/site-packages/setuptools/dist.py", line 313, in fetch_build_eggs
    replace_conflicting=True,
  File "/Users/me/work/python/profiling/.venv/lib/python2.7/site-packages/pkg_resources/__init__.py", line 834, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (gevent 1.1.1 (/Users/me/work/python/profiling/.eggs/gevent-1.1.1-py2.7-macosx-10.11-intel.egg), Requirement.parse('gevent==1.1rc1'))
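
For what it's worth, the conflict seems to be that tests_require still asks for the gevent==1.1rc1 pin even on CPython, while the egg already fetched for gevent>=1.1rc2 is 1.1.1. A minimal standalone sketch of the same resolution failure (hypothetical example, assuming a gevent 1.1.x distribution is visible to pkg_resources):

# sketch: reproduce the VersionConflict outside of "setup.py test"
import pkg_resources

try:
    # the test pin asks for gevent==1.1rc1, but the distribution that
    # satisfied install_requires is gevent 1.1.1, so resolution fails
    pkg_resources.require('gevent==1.1rc1')
except pkg_resources.ResolutionError as exc:
    print(type(exc).__name__, exc)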
tony commented 8 years ago

How do you expect tests to be run?

My first instinct is to run py.test, so I install pytest and run it.

❯ py.test
========================================== test session starts ==========================================
platform darwin -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: /Users/me/work/python/profiling, inifile: setup.cfg
collected 42 items

test/test_cli.py .....
test/test_profiler.py ...
test/test_sampling.py F....
test/test_stats.py ..........EEEE
test/test_timers.py sss
test/test_tracing.py ...
test/test_utils.py ....
test/test_viewer.py .....

================================================ ERRORS =================================================
__________________________ ERROR at setup of test_deep_stats_dump_performance ___________________________
file /Users/me/work/python/profiling/test/test_stats.py, line 255
  def test_deep_stats_dump_performance(benchmark):
        fixture 'benchmark' not found
        available fixtures: deep_stats, tmpdir_factory, pytestconfig, cache, recwarn, monkeypatch, record_xml_property, capfd, capsys, tmpdir
        use 'py.test --fixtures [testpath]' for help on them.

/Users/me/work/python/profiling/test/test_stats.py:255
__________________________ ERROR at setup of test_deep_stats_load_performance ___________________________
file /Users/me/work/python/profiling/test/test_stats.py, line 260
  def test_deep_stats_load_performance(benchmark):
        fixture 'benchmark' not found
        available fixtures: deep_stats, tmpdir_factory, pytestconfig, cache, recwarn, monkeypatch, record_xml_property, capfd, capsys, tmpdir
        use 'py.test --fixtures [testpath]' for help on them.

/Users/me/work/python/profiling/test/test_stats.py:260
_________________________ ERROR at setup of test_shallow_stats_dump_performance _________________________
file /Users/me/work/python/profiling/test/test_stats.py, line 266
  def test_shallow_stats_dump_performance(benchmark):
        fixture 'benchmark' not found
        available fixtures: deep_stats, tmpdir_factory, pytestconfig, cache, recwarn, monkeypatch, record_xml_property, capfd, capsys, tmpdir
        use 'py.test --fixtures [testpath]' for help on them.

/Users/me/work/python/profiling/test/test_stats.py:266
_________________________ ERROR at setup of test_shallow_stats_load_performance _________________________
file /Users/me/work/python/profiling/test/test_stats.py, line 271
  def test_shallow_stats_load_performance(benchmark):
        fixture 'benchmark' not found
        available fixtures: deep_stats, tmpdir_factory, pytestconfig, cache, recwarn, monkeypatch, record_xml_property, capfd, capsys, tmpdir
        use 'py.test --fixtures [testpath]' for help on them.

/Users/me/work/python/profiling/test/test_stats.py:271
=============================================== FAILURES ================================================
__________________________________________ test_itimer_sampler __________________________________________

    @pytest.mark.flaky(reruns=10)
    def test_itimer_sampler():
        assert signal.getsignal(signal.SIGPROF) == signal.SIG_DFL
        try:
>           _test_sampling_profiler(ItimerSampler(0.0001))

test/test_sampling.py:39:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

sampler = <profiling.sampling.samplers.ItimerSampler object at 0x10925ff90>

    def _test_sampling_profiler(sampler):
        profiler = SamplingProfiler(base_frame=sys._getframe(), sampler=sampler)
        with profiler:
            spin_100ms()
            spin_500ms()
        stat1 = find_stats(profiler.stats, 'spin_100ms')
        stat2 = find_stats(profiler.stats, 'spin_500ms')
        ratio = stat1.deep_hits / stat2.deep_hits
        # 1:5 expaected, but tolerate (0.8~1.2):5
>       assert 0.8 <= ratio * 5 <= 1.2
E       assert (0.9700854700854701 * 5) <= 1.2

test/test_sampling.py:32: AssertionError
======================== 1 failed, 34 passed, 3 skipped, 4 error in 3.80 seconds ========================

Then I look in test/requirements.txt

❯ cat test/requirements.txt
pytest>=2.6.1
pytest-benchmark>=3.0.0
pytest-rerunfailures>=0.05
eventlet>=0.15  # python>=2.6,<3
gevent>=1  # python>=2.5,<3 !pypy
gevent>=1.1a1  # python>=3 !pypy
gevent==1.1rc1  # pypy<2.6.1
gevent>=1.1rc2  # pypy>=2.6.1
greenlet>=0.4.4  # python>=2.4
yappi>=0.92  # python>=2.6,!=3.0 !pypy

Pip won't accept it:

~/work/python/profiling .venv master*
❯ pip install -r test/requirements.txt
Double requirement given: gevent>=1.1a1 (from -r test/requirements.txt (line 6)) (already in gevent>=1 (from -r test/requirements.txt (line 5)), name='gevent')

I see what you're doing with the requirements across different Python versions and implementations.

Have you considered something like https://github.com/saltstack/salt/tree/develop/requirements, i.e. splitting the files and putting conditional logic inside setup.py and tox?
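
For comparison, a minimal sketch of what that conditional logic could look like in setup.py (hypothetical layout, not the project's current code), building tests_require for the running interpreter instead of keeping every per-interpreter pin in one flat file:

# setup.py sketch: select test requirements for the current interpreter
import platform
import sys

PYPY = platform.python_implementation() == 'PyPy'
PY3 = sys.version_info[0] >= 3

tests_require = [
    'pytest>=2.6.1',
    'pytest-benchmark>=3.0.0',
    'pytest-rerunfailures>=0.05',
    'greenlet>=0.4.4',
]
if not PY3:
    tests_require.append('eventlet>=0.15')
if PYPY:
    # gevent pin depends on the PyPy version
    new_enough = sys.pypy_version_info >= (2, 6, 1)
    tests_require.append('gevent>=1.1rc2' if new_enough else 'gevent==1.1rc1')
else:
    tests_require.append('gevent>=1.1a1' if PY3 else 'gevent>=1')
    tests_require.append('yappi>=0.92')

Environment markers (the ; python_version < "3" syntax) might cover some of this in a requirements file too, but pip may still reject the duplicate gevent lines in a single file, so per-interpreter files like Salt's, or logic like the above, seems the more robust route.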

tony commented 8 years ago

Note: I do eventually get it to work by installing the test plugins by hand, but recent versions of pip are going to fuss over the requirements file.

~/work/python/profiling .venv master*
❯ pip install pytest-benchmark pytest-rerunfailures
Collecting pytest-benchmark
  Using cached pytest_benchmark-3.0.0-py2.py3-none-any.whl
Collecting pytest-rerunfailures
Requirement already satisfied (use --upgrade to upgrade): pytest>=2.6 in ./.venv/lib/python2.7/site-packages (from pytest-benchmark)
Collecting statistics; python_version < "3.4" (from pytest-benchmark)
Requirement already satisfied (use --upgrade to upgrade): py>=1.4.29 in ./.venv/lib/python2.7/site-packages (from pytest>=2.6->pytest-benchmark)
Collecting docutils>=0.3 (from statistics; python_version < "3.4"->pytest-benchmark)
Installing collected packages: docutils, statistics, pytest-benchmark, pytest-rerunfailures
Successfully installed docutils-0.12 pytest-benchmark-3.0.0 pytest-rerunfailures-2.0.0 statistics-1.0.3.5

~/work/python/profiling .venv master*
❯ py.test
=============================================================================================== test session starts ===============================================================================================
platform darwin -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
benchmark: 3.0.0 (defaults: timer=time.time disable_gc=False min_rounds=5 min_time=5.00us max_time=1.00s calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /Users/me/work/python/profiling, inifile: setup.cfg
plugins: benchmark-3.0.0, rerunfailures-2.0.0
collected 42 items

test/test_cli.py .....
test/test_profiler.py ...
test/test_sampling.py R.....
test/test_stats.py ..............
test/test_timers.py sss
test/test_tracing.py ...
test/test_utils.py ....
test/test_viewer.py .....

---------------------------------------------------------------------------------------------- benchmark: 4 tests ---------------------------------------------------------------------------------------------
Name (time in us)                               Min                    Max                   Mean                StdDev                 Median                   IQR            Outliers(*)  Rounds  Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_shallow_stats_load_performance        336.8855 (1.0)       1,417.8753 (1.0)         393.0669 (1.0)         65.5990 (1.0)         375.0324 (1.0)         49.8295 (1.0)          205;161    2268           1
test_deep_stats_load_performance           929.8325 (2.76)      2,190.1131 (1.54)      1,075.5450 (2.74)       152.6101 (2.33)      1,027.8225 (2.74)       115.7522 (2.32)          109;80     929           1
test_shallow_stats_dump_performance      5,430.9368 (16.12)     9,010.0765 (6.35)      6,036.5367 (15.36)      689.7672 (10.51)     5,772.1138 (15.39)      589.7880 (11.84)          22;13     157           1
test_deep_stats_dump_performance        80,684.9003 (239.50)   89,100.8377 (62.84)    85,758.3086 (218.18)   2,540.1061 (38.72)    86,325.8839 (230.18)   3,725.8863 (74.77)            4;0      12           1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

(*) Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
================================================================================= 39 passed, 3 skipped, 1 rerun in 12.25 seconds ==================================================