Closed siefkenj closed 7 years ago
There are quite a few methods named `test_`, did you mean to write `time_` there?
I sure did. So used to pyunit!
Good.
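For reference, asv discovers benchmarks by method-name prefix, which is why the rename matters. A minimal sketch (class and method names are illustrative, not from this PR):

```python
# asv collects methods by prefix: "time_" methods are timed, "peakmem_"
# methods are memory-profiled, and setup() runs untimed before each one.
# A "test_" prefix (the pyunit convention) is simply ignored by asv.
class TimeExampleSuite:
    def setup(self):
        # untimed per-benchmark setup
        self.data = list(range(1000))

    def time_sum(self):
        # picked up by asv because of the "time_" prefix
        sum(self.data)

    def test_sum(self):
        # NOT picked up by asv -- wrong prefix, silently skipped
        sum(self.data)
```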
Running this against SymPy master I get:
```
$ asv run -b MatrixOperations c807dfe..master
· Cloning project.
· Fetching recent changes.
· Creating environments..
· Discovering benchmarks
·· Uninstalling from virtualenv-py3.5-fastcache-mpmath.
·· Building for virtualenv-py3.5-fastcache-mpmath
·· Installing into virtualenv-py3.5-fastcache-mpmath..
· Running 14 total benchmarks (2 commits * 1 environments * 7 benchmarks)
[  0.00%] · For sympy commit hash 980f3e23:
[  0.00%] ·· Building for virtualenv-py3.5-fastcache-mpmath...
[  0.00%] ·· Benchmarking virtualenv-py3.5-fastcache-mpmath
[  7.14%] ··· Running solve.TimeMatrixOperations.time_dense_add        163.90μs;...
[ 14.29%] ··· Running solve.TimeMatrixOperations.time_dense_multiply   416.36μs;...
[ 21.43%] ··· Running solve.TimeMatrixOperations.time_det              2/9 failed
[ 28.57%] ··· Running solve.TimeMatrixOperations.time_det_bareiss      2/9 failed
[ 35.71%] ··· Running solve.TimeMatrixOperations.time_det_berkowitz    105.25μs;...
[ 42.86%] ··· Running solve.TimeMatrixOperations.time_rank             1/9 failed
[ 50.00%] ··· Running solve.TimeMatrixOperations.time_rref             1/9 failed
[ 50.00%] · For sympy commit hash 1635382c:
[ 50.00%] ·· Building for virtualenv-py3.5-fastcache-mpmath...
[ 50.00%] ·· Benchmarking virtualenv-py3.5-fastcache-mpmath
[ 57.14%] ··· Running solve.TimeMatrixOperations.time_dense_add        165.28μs;...
[ 64.29%] ··· Running solve.TimeMatrixOperations.time_dense_multiply   492.55μs;...
[ 71.43%] ··· Running solve.TimeMatrixOperations.time_det              2/9 failed
[ 78.57%] ··· Running solve.TimeMatrixOperations.time_det_bareiss      2/9 failed
[ 85.71%] ··· Running solve.TimeMatrixOperations.time_det_berkowitz    105.62μs;...
[ 92.86%] ··· Running solve.TimeMatrixOperations.time_rank             1/9 failed
[100.00%] ··· Running solve.TimeMatrixOperations.time_rref             1/9 failed
```
Are those failures expected?
It is expected that for 10x10 and 6x6 matrices with symbols, `rref` and `det_bareiss` will take a really long time. It can take over 10 minutes to row reduce a 10x10 matrix on my machine, which means it'll be really significant when those tests get below 60 s!
I see. The only worry I have is that the default timeout is 60 seconds, which means that merging this PR as-is would drastically increase the time spent per commit in the benchmark run.
We have had a loose goal of trying to stay under 1 second per benchmark, see: https://github.com/sympy/sympy_benchmarks/issues/8
But we've also discussed having a set of slow benchmarks, which we wouldn't run on each commit but maybe every 500th commit, and then use the bisecting function of asv to find regressions.
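For the record, asv's own CLI can support that workflow (commit range and benchmark name below are illustrative): `asv run --steps` subsamples the commit range, and `asv find` bisects a single benchmark to locate the commit that introduced a regression.

```shell
# Run the suite on ~20 evenly spaced commits instead of every commit:
asv run --steps 20 c807dfe..master

# Bisect one benchmark over the range to find the regressing commit:
asv find c807dfe..master solve.TimeMatrixOperations.time_rref
```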
Increasing the cache size might speed things up. The default value is
I like the idea of a slow benchmark run more infrequently. I'm working on some optimized code to bring some of these computations to < 1 s. The trouble is, things slow down really fast in the matrix library with `Symbol`s around, and some of the algorithms for smaller matrices are hardcoded, so a benchmark of them won't really be testing the general algorithm. I could remove the size-10 matrix and change the size-6 to a size-5.
For things like determinants, size ~4 is where hardcoded speed = algorithm speed, which is why I put a size-6 matrix in the tests.
Could the speed tests be set to time out after 2 seconds? That seems like it would still get a lot of good data, and when the algorithms improve and suddenly jump below that threshold, they'd start showing up.
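asv also supports a per-benchmark `timeout` attribute (in seconds) that overrides the 60-second default, so a shorter cutoff could be applied to just the slow matrix benchmarks. A minimal sketch with a stand-in workload (names are illustrative):

```python
class TimeSlowMatrixOperations:
    # asv reads this class attribute as the per-benchmark timeout in
    # seconds; a benchmark exceeding it is reported as failed instead
    # of holding up the run for the full 60 s default.
    timeout = 2.0

    def setup(self):
        self.n = 50

    def time_busy_work(self):
        # stand-in workload; the real suite row-reduces symbolic matrices
        total = 0
        for i in range(self.n):
            for j in range(self.n):
                total += i * j
        return total
```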
@siefkenj We can absolutely change the timeout to 2 "travis-seconds" (I think they use Xeon processors on Google Compute Engine). Looking at the output from the most recently merged PR we have these timings on Travis:
```
· Running 25 total benchmarks (1 commits * 1 environments * 25 benchmarks)
[  0.00%] · For sympy commit hash 025e63ae:
[  0.00%] ·· Building for py2.7-fastcache-mpmath....
[  0.00%] ·· Benchmarking py2.7-fastcache-mpmath
[  4.00%] ··· Running dsolve.TimeDsolve01.time_dsolve                       1.21s
[  8.00%] ··· Running integrate.TimeIntegration01.time_doit             325.51ms
[ 12.00%] ··· Running integrate.TimeIntegration01.time_doit_meijerg      95.05ms
[ 16.00%] ··· Running ...onOperations.peakmem_jacobian_wrt_functions          37M
[ 20.00%] ··· Running ...sionOperations.peakmem_jacobian_wrt_symbols          37M
[ 24.00%] ··· Running ....TimeLargeExpressionOperations.peakmem_subs          37M
[ 28.00%] ··· Running ...imeLargeExpressionOperations.time_count_ops      48.37ms
[ 32.00%] ··· Running ...xprs.TimeLargeExpressionOperations.time_cse      60.31ms
[ 36.00%] ··· Running ...LargeExpressionOperations.time_free_symbols      10.07ms
[ 40.00%] ··· Running ...ssionOperations.time_jacobian_wrt_functions     249.31ms
[ 44.00%] ··· Running ...ressionOperations.time_jacobian_wrt_symbols      55.38ms
[ 48.00%] ··· Running ...erations.time_manual_jacobian_wrt_functions     126.68ms
[ 52.00%] ··· Running ...prs.TimeLargeExpressionOperations.time_subs     421.27ms
[ 56.00%] ··· Running logic.LogicSuite.time_dpll                            5.24s
[ 60.00%] ··· Running logic.LogicSuite.time_dpll2                        578.52ms
[ 64.00%] ··· Running logic.LogicSuite.time_load_file                      9.75ms
[ 68.00%] ··· Running ...gDamper.time_kanesmethod_mass_spring_damper       4.91ms
[ 72.00%] ··· Running ...per.time_lagrangesmethod_mass_spring_damper       3.24ms
[ 76.00%] ··· Running refine.TimeRefine01.time_refine                      11.26s
[ 80.00%] ··· Running solve.TimeMatrixSolve.time_solve                         ok
[ 80.00%] ····
               ======== ==========
                param1
               -------- ----------
                  GE     139.55ms
                  LU     141.16ms
                 ADJ     403.60ms
               ======== ==========
[ 84.00%] ··· Running solve.TimeMatrixSolve2.time_cholesky_solve           1.66ms
[ 88.00%] ··· Running solve.TimeMatrixSolve2.time_lusolve               615.77μs
[ 92.00%] ··· Running solve.TimeSolve01.time_solve                      899.59ms
[ 96.00%] ··· Running solve.TimeSolve01.time_solve_nocheck              873.90ms
[100.00%] ··· Running sum.TimeSum.time_doit                              23.78ms
```
In that case, would you mind creating a new folder, e.g. `slow_benchmarks/`, next to `benchmarks/` and putting a version of your class with the slow benchmarks there (and also moving `refine.py`)?
We should probably test it too in `.travis.yml`.
@pbrady thanks for pointing out `SYMPY_CACHE_SIZE`; when increasing that variable, only 1/9 fail (>60 s) on my workstation.
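For anyone trying to reproduce this: SymPy reads the `SYMPY_CACHE_SIZE` environment variable at import time when sizing its cache, so it has to be set before SymPy is imported (or exported in the shell before launching asv, whose benchmark subprocesses inherit the environment). A minimal sketch; the value 10000 is an arbitrary example:

```python
import os

# Must happen before "import sympy"; sympy reads the variable at import
# time when configuring its cache.
os.environ["SYMPY_CACHE_SIZE"] = "10000"

# import sympy  # would now pick up the larger cache
print(os.environ["SYMPY_CACHE_SIZE"])
```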
@bjodah Is the Travis error related to my commit or is it something else?
Forgot to say, we need a folder `tests/` under `slow_benchmarks/`, an `__init__.py` in both of those folders, and then to move `test_refine.py` to the new tests folder.
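Putting this comment and the earlier one together, the resulting layout would look something like the following sketch (the `solve.py` file names are assumptions for illustration; the rest is taken from the discussion):

```
sympy_benchmarks/
├── benchmarks/
│   ├── __init__.py
│   └── solve.py          # fast benchmarks stay here
└── slow_benchmarks/
    ├── __init__.py
    ├── solve.py          # slow variants of the matrix benchmarks
    ├── refine.py         # moved from benchmarks/
    └── tests/
        ├── __init__.py
        └── test_refine.py
```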
Ok, so this line needs to be updated: https://github.com/siefkenj/sympy_benchmarks/blob/cb5664b2478705cb56fce37d383a21c43aaf8b0f/slow_benchmarks/tests/test_refine.py#L6
Great, thanks!
This adds several more benchmarks of matrix algorithms, including `rank`, `rref`, and `det`, as well as matrix multiplication and matrix addition.
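The rough shape of such a suite, sketched here in pure Python so the example stays self-contained (the actual PR benchmarks `sympy.Matrix` operations such as `.rank()`, `.rref()`, and `.det()`; the list-of-lists matrices below are stand-ins):

```python
class TimeMatrixOperations:
    def setup(self):
        # small dense integer matrices as stand-ins for symbolic ones
        n = 8
        self.A = [[i + j for j in range(n)] for i in range(n)]
        self.B = [[i * j + 1 for j in range(n)] for i in range(n)]

    def time_dense_add(self):
        # entrywise addition
        return [[a + b for a, b in zip(ra, rb)]
                for ra, rb in zip(self.A, self.B)]

    def time_dense_multiply(self):
        # naive O(n^3) matrix product
        n = len(self.A)
        return [[sum(self.A[i][k] * self.B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
```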