tony opened this issue 8 years ago
How do you expect tests to be run?
My intuition is to run py.test, so I install pytest and do exactly that.
❯ py.test
========================================== test session starts ==========================================
platform darwin -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: /Users/me/work/python/profiling, inifile: setup.cfg
collected 42 items
test/test_cli.py .....
test/test_profiler.py ...
test/test_sampling.py F....
test/test_stats.py ..........EEEE
test/test_timers.py sss
test/test_tracing.py ...
test/test_utils.py ....
test/test_viewer.py .....
================================================ ERRORS =================================================
__________________________ ERROR at setup of test_deep_stats_dump_performance ___________________________
file /Users/me/work/python/profiling/test/test_stats.py, line 255
def test_deep_stats_dump_performance(benchmark):
fixture 'benchmark' not found
available fixtures: deep_stats, tmpdir_factory, pytestconfig, cache, recwarn, monkeypatch, record_xml_property, capfd, capsys, tmpdir
use 'py.test --fixtures [testpath]' for help on them.
/Users/me/work/python/profiling/test/test_stats.py:255
__________________________ ERROR at setup of test_deep_stats_load_performance ___________________________
file /Users/me/work/python/profiling/test/test_stats.py, line 260
def test_deep_stats_load_performance(benchmark):
fixture 'benchmark' not found
available fixtures: deep_stats, tmpdir_factory, pytestconfig, cache, recwarn, monkeypatch, record_xml_property, capfd, capsys, tmpdir
use 'py.test --fixtures [testpath]' for help on them.
/Users/me/work/python/profiling/test/test_stats.py:260
_________________________ ERROR at setup of test_shallow_stats_dump_performance _________________________
file /Users/me/work/python/profiling/test/test_stats.py, line 266
def test_shallow_stats_dump_performance(benchmark):
fixture 'benchmark' not found
available fixtures: deep_stats, tmpdir_factory, pytestconfig, cache, recwarn, monkeypatch, record_xml_property, capfd, capsys, tmpdir
use 'py.test --fixtures [testpath]' for help on them.
/Users/me/work/python/profiling/test/test_stats.py:266
_________________________ ERROR at setup of test_shallow_stats_load_performance _________________________
file /Users/me/work/python/profiling/test/test_stats.py, line 271
def test_shallow_stats_load_performance(benchmark):
fixture 'benchmark' not found
available fixtures: deep_stats, tmpdir_factory, pytestconfig, cache, recwarn, monkeypatch, record_xml_property, capfd, capsys, tmpdir
use 'py.test --fixtures [testpath]' for help on them.
/Users/me/work/python/profiling/test/test_stats.py:271
=============================================== FAILURES ================================================
__________________________________________ test_itimer_sampler __________________________________________
@pytest.mark.flaky(reruns=10)
def test_itimer_sampler():
assert signal.getsignal(signal.SIGPROF) == signal.SIG_DFL
try:
> _test_sampling_profiler(ItimerSampler(0.0001))
test/test_sampling.py:39:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
sampler = <profiling.sampling.samplers.ItimerSampler object at 0x10925ff90>
def _test_sampling_profiler(sampler):
profiler = SamplingProfiler(base_frame=sys._getframe(), sampler=sampler)
with profiler:
spin_100ms()
spin_500ms()
stat1 = find_stats(profiler.stats, 'spin_100ms')
stat2 = find_stats(profiler.stats, 'spin_500ms')
ratio = stat1.deep_hits / stat2.deep_hits
# 1:5 expaected, but tolerate (0.8~1.2):5
> assert 0.8 <= ratio * 5 <= 1.2
E assert (0.9700854700854701 * 5) <= 1.2
test/test_sampling.py:32: AssertionError
======================== 1 failed, 34 passed, 3 skipped, 4 error in 3.80 seconds ========================
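(For anyone hitting the same output: the `benchmark` fixture is injected by the pytest-benchmark plugin, and the `@pytest.mark.flaky` reruns come from pytest-rerunfailures. With neither plugin installed, the four benchmark tests error at setup and the timing-sensitive sampling test fails outright instead of being rerun. A minimal sketch of what the affected tests depend on, with hypothetical test names and bodies:)

import time
import pytest

# `benchmark` only exists once the pytest-benchmark plugin is installed;
# without it, pytest reports "fixture 'benchmark' not found" at setup.
def test_dump_performance_sketch(benchmark):  # hypothetical test
    benchmark(time.sleep, 0.0001)  # benchmark(fn, *args) times fn over many rounds

# `flaky` is interpreted by pytest-rerunfailures; without that plugin the
# mark is silently ignored, so a timing-dependent failure stays a failure.
@pytest.mark.flaky(reruns=10)
def test_timing_sensitive_sketch():  # hypothetical test
    assert time.time() > 0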
Then I look at test/requirements.txt:
❯ cat test/requirements.txt
pytest>=2.6.1
pytest-benchmark>=3.0.0
pytest-rerunfailures>=0.05
eventlet>=0.15 # python>=2.6,<3
gevent>=1 # python>=2.5,<3 !pypy
gevent>=1.1a1 # python>=3 !pypy
gevent==1.1rc1 # pypy<2.6.1
gevent>=1.1rc2 # pypy>=2.6.1
greenlet>=0.4.4 # python>=2.4
yappi>=0.92 # python>=2.6,!=3.0 !pypy
Pip won't accept it: everything after `#` is a comment as far as pip is concerned, so the four gevent lines read as duplicate requirements for the same package:
~/work/python/profiling .venv master*
❯ pip install -r test/requirements.txt
Double requirement given: gevent>=1.1a1 (from -r test/requirements.txt (line 6)) (already in gevent>=1 (from -r test/requirements.txt (line 5)), name='gevent')
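For what it's worth, pip has understood environment markers in requirements files for a while, so the overlapping gevent pins could be made mutually exclusive rather than annotated in comments. An untested sketch (the PyPy-version split between 1.1rc1 and 1.1rc2 has no direct marker equivalent and would still need install-time logic):

gevent>=1; python_version < "3" and platform_python_implementation != "PyPy"
gevent>=1.1a1; python_version >= "3" and platform_python_implementation != "PyPy"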
I see what you're doing with the requirements across different Python versions and implementations.
Have you considered something like https://github.com/saltstack/salt/tree/develop/requirements, putting the conditional logic inside setup.py and tox?
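To make that concrete, here is a hypothetical extras_require sketch, assuming a setuptools recent enough to accept inline environment markers (pins copied from your requirements file, not tested against this project):

from setuptools import setup

setup(
    name='profiling',
    # ... existing arguments ...
    extras_require={
        'test': [
            'pytest>=2.6.1',
            'pytest-benchmark>=3.0.0',
            'pytest-rerunfailures>=0.05',
            # markers below replace the comment annotations in test/requirements.txt
            'eventlet>=0.15; python_version < "3"',
            'gevent>=1; python_version < "3" and platform_python_implementation != "PyPy"',
            'gevent>=1.1a1; python_version >= "3" and platform_python_implementation != "PyPy"',
            'greenlet>=0.4.4',
            'yappi>=0.92; python_version != "3.0" and platform_python_implementation != "PyPy"',
        ],
    },
)

tox's deps (or a plain pip install -e .[test]) would then pull in the right set per interpreter without duplicate-requirement errors.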
Note: I do eventually get it to work by installing the plugins by hand, but recent versions of pip are going to fuss.
~/work/python/profiling .venv master*
❯ pip install pytest-benchmark pytest-rerunfailures
Collecting pytest-benchmark
Using cached pytest_benchmark-3.0.0-py2.py3-none-any.whl
Collecting pytest-rerunfailures
Requirement already satisfied (use --upgrade to upgrade): pytest>=2.6 in ./.venv/lib/python2.7/site-packages (from pytest-benchmark)
Collecting statistics; python_version < "3.4" (from pytest-benchmark)
Requirement already satisfied (use --upgrade to upgrade): py>=1.4.29 in ./.venv/lib/python2.7/site-packages (from pytest>=2.6->pytest-benchmark)
Collecting docutils>=0.3 (from statistics; python_version < "3.4"->pytest-benchmark)
Installing collected packages: docutils, statistics, pytest-benchmark, pytest-rerunfailures
Successfully installed docutils-0.12 pytest-benchmark-3.0.0 pytest-rerunfailures-2.0.0 statistics-1.0.3.5
~/work/python/profiling .venv master*
❯ py.test
=============================================================================================== test session starts ===============================================================================================
platform darwin -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
benchmark: 3.0.0 (defaults: timer=time.time disable_gc=False min_rounds=5 min_time=5.00us max_time=1.00s calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /Users/me/work/python/profiling, inifile: setup.cfg
plugins: benchmark-3.0.0, rerunfailures-2.0.0
collected 42 items
test/test_cli.py .....
test/test_profiler.py ...
test/test_sampling.py R.....
test/test_stats.py ..............
test/test_timers.py sss
test/test_tracing.py ...
test/test_utils.py ....
test/test_viewer.py .....
---------------------------------------------------------------------------------------------- benchmark: 4 tests ---------------------------------------------------------------------------------------------
Name (time in us) Min Max Mean StdDev Median IQR Outliers(*) Rounds Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_shallow_stats_load_performance 336.8855 (1.0) 1,417.8753 (1.0) 393.0669 (1.0) 65.5990 (1.0) 375.0324 (1.0) 49.8295 (1.0) 205;161 2268 1
test_deep_stats_load_performance 929.8325 (2.76) 2,190.1131 (1.54) 1,075.5450 (2.74) 152.6101 (2.33) 1,027.8225 (2.74) 115.7522 (2.32) 109;80 929 1
test_shallow_stats_dump_performance 5,430.9368 (16.12) 9,010.0765 (6.35) 6,036.5367 (15.36) 689.7672 (10.51) 5,772.1138 (15.39) 589.7880 (11.84) 22;13 157 1
test_deep_stats_dump_performance 80,684.9003 (239.50) 89,100.8377 (62.84) 85,758.3086 (218.18) 2,540.1061 (38.72) 86,325.8839 (230.18) 3,725.8863 (74.77) 4;0 12 1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(*) Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
================================================================================= 39 passed, 3 skipped, 1 rerun in 12.25 seconds ==================================================
Commit: e0961da
OS: OS X El Capitan