I'm sure it can be done as a plugin, but I need to think about it a bit. It may be that to support passing in the niters param, additional plugin hooks will have to be added.
Original comment by jpelle...@gmail.com
on 28 Dec 2006 at 3:04
I took a stab at this. It simply runs each test a number of times, rather than passing the number of iterations to the test. I took this approach mainly because I couldn't get prepareTestCase to work.
Original comment by cas...@gmail.com
on 27 Mar 2009 at 12:30
One thing that could work is a decorator that controls the number of iterations, as the OP suggested. Something like

from nose.tools import with_setup, pass_iterations

@with_setup(setup_foobar)
@pass_iterations
def perf_foobar(niters):
    ...

The implementation of pass_iterations would have to get access to the options object or plugin when the decorated function is run. I am not aware of any registry that allows this. Am I wrong?
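A purely hypothetical sketch of the shape I have in mind (none of these names exist in nose): the plugin fills a module-level registry from its parsed options, and the decorator reads it when the test actually runs.

import functools

_options = {"iterations": 1}  # a (hypothetical) plugin would fill this in after option parsing

def set_iterations(niters):
    # called by the plugin once its (hypothetical) --iterations option has been parsed
    _options["iterations"] = niters

def pass_iterations(func):
    # hand the configured iteration count to the test as its first argument
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(_options["iterations"], *args, **kwargs)
    return wrapper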
Original comment by cas...@gmail.com
on 30 Mar 2009 at 12:29
Or you could use http://pypi.python.org/pypi/nose-testconfig/ to control the number of iterations in the test.
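Something along these lines, assuming I remember nose-testconfig's --tc option and its testconfig.config dict correctly (the key name and the default are made up for illustration):

# run e.g.:  nosetests --tc=iterations:1000 performance_tests.py
from testconfig import config

def perf_foobar():
    niters = int(config.get("iterations", 100))  # default when no --tc is given
    for _ in range(niters):
        sorted(range(1000))  # stand-in for the real code under test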
Original comment by cas...@gmail.com
on 30 Mar 2009 at 12:40
The idea of passing the number of iterations as a function argument is to increase the number of iterations if the total running time is too short, as the standard "timeit" module does.
Original comment by antoine.pitrou
on 30 Mar 2009 at 12:50
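For context, the calibration strategy the standard "timeit" module uses looks roughly like the sketch below (independent of any nose API; run_scaled is a name invented here):

import time

def run_scaled(func, min_time=0.2):
    # Keep multiplying the iteration count until one batch takes at least
    # min_time seconds, roughly the strategy the timeit module uses.
    niters = 1
    while True:
        start = time.time()
        for _ in range(niters):
            func()
        elapsed = time.time() - start
        if elapsed >= min_time:
            return niters, elapsed
        niters *= 10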
Sure, Antoine, but this is what I could do with the current plugin API. The number of iterations is arguably something for configuration.
Original comment by cas...@gmail.com
on 30 Mar 2009 at 1:22
I use tests to verify that my refactoring didn't break anything. For performance tests, I want to know how fast it runs now compared to some other implementation. How would that work? Maybe the framework could be told to remember statistics and then later compare them.

For example, pretend I have some code named perf_1. First I would run perf_1 vaguely like

$ nosetests --performance --remember-as=v1 performance_tests.py:perf_1

Then I would rewrite the code that is used inside perf_1. I'm not rewriting perf_1 itself. Then I might do this (notice how --remember-as is now v2):

$ nosetests --performance --remember-as=v2 performance_tests.py:perf_1

And finally, I would need to compare v1 versus v2:

$ perf_compare v1 v2

I don't know how any of this would work, but I really do want something like this.
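Nothing like this exists in nose today, but the comparison step itself could be a small script. The sketch below assumes each --remember-as run saved a JSON file mapping test name to measured time in seconds, a format invented here purely for illustration.

import json
import sys

def compare(old_path, new_path, tolerance=0.10):
    # Hypothetical perf_compare: print old vs. new timings and fail if any
    # test got more than `tolerance` slower.
    with open(old_path) as f:
        old = json.load(f)
    with open(new_path) as f:
        new = json.load(f)
    slower = False
    for name in sorted(set(old) & set(new)):
        ratio = new[name] / old[name]
        print("%-40s %8.4fs -> %8.4fs (x%.2f)" % (name, old[name], new[name], ratio))
        if ratio > 1.0 + tolerance:
            slower = True
    return 1 if slower else 0

if __name__ == "__main__":
    sys.exit(compare(sys.argv[1], sys.argv[2]))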
Original comment by matt%tpl...@gtempaccount.com
on 6 May 2010 at 1:48
More brainstorming:
At the most basic level, you want to make sure that your new code isn't "too much slower than" the old, trusted code.
To do this, the system would need to know two things:
1) the "speed" of the trusted version of the code
2) the tolerable margin of performance difference
#1 would almost certainly need to come from some kind of extra benchmarking step, unless someone knows of a machine-independent measure of performance (see my SO question here: http://stackoverflow.com/questions/5852199).
#2 could be configured by a decorator. Something like @perf_tolerance(-.2, .5), with a default of maybe +-10%.
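Such a perf_tolerance decorator (again, a hypothetical sketch, not an existing nose feature) might just attach the allowed margins to the test function for a benchmarking plugin to read later:

def perf_tolerance(lower=-0.10, upper=0.10):
    # Hypothetical decorator: record the acceptable performance margin on the
    # test function; a benchmarking plugin could later compare the measured
    # time against the stored baseline using these bounds.
    def decorate(func):
        func.perf_tolerance = (lower, upper)
        return func
    return decorate

@perf_tolerance(-0.2, 0.5)
def perf_foobar():
    pass  # body of the performance test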
More issues with the benchmarking step:
* When doing the benchmark, how do we get nose to use the current (untrusted) tests with the trusted (old) code?
* Do we require the user to keep a "golden" checkout of the code under test?
* Even then, how can we make certain the tests are using that code, and not the local one? Setting the PYTHONPATH seems insufficient, since sys.path munging is widespread in testing code.
* What should the system do when the benchmark has gone bad, either by a change in machine, or a change in tests, or something else?
Original comment by workitha...@gmail.com
on 3 May 2011 at 4:24
Original issue reported on code.google.com by antoine.pitrou
on 28 Dec 2006 at 9:47