Closed: Jacob-Stevens-Haas closed this issue 1 year ago
Thanks! Yeah, some of these tests are slower than necessary and can be improved with just a bit of work. Would be good to clean it up a bit.
New test times, out of a total of 144.75s:
25.13s call test/test_sindyc.py::test_simulate_with_interp[data_lorenz_c_1d]
23.16s call test/test_sindyc.py::test_simulate_with_interp[data_lorenz_c_2d]
5.49s setup test/test_pysindy.py::test_multiple_trajectories_and_ensemble
4.71s setup test/test_feature_library.py::test_parameterized_library
3.99s call test/test_feature_library.py::test_5D_pdes
3.75s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params5-trapping_sr3_params1]
3.68s call test/test_sindyc.py::test_simulate_with_vector_control_input[data_lorenz_c_2d]
3.67s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params5-trapping_sr3_params0]
3.37s call test/test_sindyc.py::test_simulate_with_vector_control_input[data_lorenz_c_1d]
3.13s call test/test_sindyc.py::test_simulate[data_lorenz_c_2d]
2.93s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params5]
2.78s call test/test_optimizers.py::test_sample_weight_optimizers[StableLinearSR3]
2.71s call test/test_feature_library.py::test_sindypi_library
2.54s call test/test_feature_library.py::test_3D_weak_pdes
2.40s call test/test_sindyc.py::test_simulate[data_lorenz_c_1d]
1.97s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params6-trapping_sr3_params1]
1.90s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params3-trapping_sr3_params1]
1.80s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params6-trapping_sr3_params0]
1.77s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params3-trapping_sr3_params0]
1.68s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params2-trapping_sr3_params1]
1.64s call test/test_optimizers_complexity.py::test_complexity
1.60s call test/test_optimizers.py::test_trapping_inequality_constraints[params2]
1.54s call test/test_sindyc.py::test_extra_u_warn
1.50s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params6]
1.44s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params7]
1.43s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params1-trapping_sr3_params1]
1.31s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params7-trapping_sr3_params0]
1.21s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params1]
1.12s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params3]
1.11s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params2-trapping_sr3_params0]
0.94s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params1-trapping_sr3_params0]
0.87s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params2]
0.85s call test/test_feature_library.py::test_5D_weak_pdes
0.84s call test/test_optimizers.py::test_trapping_sr3_quadratic_library[params7-trapping_sr3_params1]
0.84s call test/test_optimizers.py::test_trapping_inequality_constraints[params0]
0.82s call test/test_optimizers.py::test_trapping_inequality_constraints[params3]
0.81s call test/test_pysindy.py::test_multiple_trajectories_and_ensemble
0.81s call test/test_optimizers.py::test_optimizers_verbose_cvxpy[StableLinearSR3]
0.72s call test/test_optimizers.py::test_optimizers_verbose[StableLinearSR3]
0.72s call test/test_pysindy.py::test_simulate[data_lorenz]
0.68s call test/test_feature_library.py::test_3D_pdes
0.68s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params0]
0.65s call test/test_optimizers.py::test_trapping_inequality_constraints[params1]
0.51s call test/test_differentiation.py::test_centered_difference_noaxis_vs_axis
For data fixtures, that's `data_lorenz_c_1d` and `data_lorenz_c_2d`.
@znicolaou - I want to ask about some of the longest tests remaining: `test_parameterized_library` and `test_multiple_trajectories_and_ensemble` (most of the runtime is in the shared fixture `diffuse_multiple_trajectories`, split between `solve_ivp` and the weak library's selection of integration domains). Is there a way you can shrink the dataset? Similar questions for the 3D and 5D [Weak] PDEs. Currently those tests take around 50 sec to run. I'll work on `test_stable_linear_sr3_linear_library` and `test_trapping_inequality_constraints`. If you can help me with the PDE/WeakPDE ones, we could probably get the tests to run in under 20 seconds, down from around 5 min when I started this work. That would be a MASSIVE improvement.
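As one hedged sketch of shrinking such a fixture: a Lorenz-with-control dataset can be generated on a much coarser time grid with loose `solve_ivp` tolerances. The time span, initial condition, and control signal below are illustrative choices, not the actual fixture values:

```python
import numpy as np
from scipy.integrate import solve_ivp


def lorenz_c(t, x, u_fun, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    """Lorenz system with an additive control input on the first equation."""
    u = u_fun(t)
    return [
        sigma * (x[1] - x[0]) + u,
        x[0] * (rho - x[2]) - x[1],
        x[0] * x[1] - beta * x[2],
    ]


# Coarse grid: 20 samples over 1 second is plenty for a smoke test
t = np.linspace(0.0, 1.0, 20)
u_fun = lambda t: np.sin(2.0 * t)  # illustrative 1-D control signal

sol = solve_ivp(
    lorenz_c,
    (t[0], t[-1]),
    [8.0, 27.0, -7.0],
    t_eval=t,
    args=(u_fun,),
    rtol=1e-3,  # loose tolerances: the solve only needs to beat the
    atol=1e-6,  # test's own coefficient tolerance, not be publication-grade
)
x_train = sol.y.T  # shape (n_time, n_states)
```

Loosening `rtol`/`atol` and cutting the number of `t_eval` points are usually the two biggest wins, since `solve_ivp` cost scales with the number of accepted steps.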
Current status on the `faster-tests` branch:
8.11s setup test/test_feature_library.py::test_parameterized_library
7.14s call test/test_feature_library.py::test_5D_pdes
5.17s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params5]
3.76s call test/test_feature_library.py::test_3D_weak_pdes
2.01s call test/test_optimizers.py::test_trapping_inequality_constraints[params1]
1.95s call test/test_optimizers.py::test_trapping_inequality_constraints[params2]
1.63s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params7]
1.32s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params3]
1.26s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params6]
1.12s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params4]
1.09s call test/test_optimizers.py::test_sample_weight_optimizers[StableLinearSR3]
0.98s call test/test_optimizers.py::test_trapping_inequality_constraints[params0]
0.91s call test/test_optimizers.py::test_trapping_inequality_constraints[params3]
0.79s call test/test_pysindy.py::test_multiple_trajectories_and_ensemble
0.75s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params2]
0.68s call test/test_optimizers.py::test_optimizers_verbose_cvxpy[StableLinearSR3]
0.67s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params1]
0.63s call test/test_optimizers.py::test_stable_linear_sr3_linear_library[params0]
0.58s call test/test_feature_library.py::test_3D_pdes
0.58s call test/test_feature_library.py::test_1D_pdes
0.56s call test/test_optimizers.py::test_optimizers_verbose[StableLinearSR3]
0.56s call test/test_feature_library.py::test_5D_weak_pdes
0.53s call test/test_differentiation.py::test_centered_difference_noaxis_vs_axis
Hey @Jacob-Stevens-Haas , I've been swamped the last few weeks, but hoping I can find some time to think about this soon...I think it should be possible to make faster tests, but it just takes time to implement. For the basic tests, we don't really need to integrate anything, we can just use random data instead. But I'm not so sure that the tests are really testing anything anyways--I don't think I've ever seen a problem identified from the unit tests.
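A minimal sketch of that random-data idea: when a test only checks shapes and API behavior, a randomly generated "trajectory" does the job with no integration at all (the names below are illustrative, not pysindy fixtures):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# No ODE solve: random arrays are enough when a test only exercises
# array plumbing (shapes, axis order, dtypes), not model accuracy.
n_time, n_states = 50, 3
x = rng.standard_normal((n_time, n_states))
t = np.linspace(0.0, 1.0, n_time)

# Structural properties can still be asserted cheaply downstream:
x_dot = np.gradient(x, t, axis=0)
```

Seeding the generator keeps the test deterministic, so a failure is reproducible rather than flaky.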
Part of the reason you haven't seen a problem identified in unit tests is because we have so few true "unit" tests. Mostly, the integration tests stand in for unit tests. For example, when I made big changes last year, they caught things like the arrangement of the axes in data, which doesn't really require an integration test.
But given that integration tests exist now and are better than nothing, a separate consideration is whether they should be solving a known problem to a (probably arbitrary) degree of accuracy, or whether they should just test that the code executes without error.
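As an illustration, an axis-arrangement bug like the one mentioned above could be caught by a tiny true unit test against a single helper; the helper here is hypothetical, not pysindy's actual code:

```python
import numpy as np


def stack_trajectories(trajectories):
    """Hypothetical helper (not pysindy API): stack a list of
    (n_time, n_features) arrays along a new leading trajectory axis."""
    return np.stack(trajectories, axis=0)


def test_axis_arrangement():
    trajs = [np.zeros((10, 3)), np.ones((10, 3))]
    out = stack_trajectories(trajs)
    # The contract under test: axis order is (n_trajectories, n_time, n_features)
    assert out.shape == (2, 10, 3)
    assert out[1].min() == 1.0


test_axis_arrangement()
```

A test like this runs in microseconds and fails with a precise message, where an integration test would only fail indirectly, several layers away from the bug.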
~My recommendation is to delete the tests on 5D PDEs and `test_multiple_trajectories_and_ensemble`, replace `diffuse_multiple_trajectories` with a much simpler ODE, and remove 3D weak PDEs. That would cut total test time in half. FWIW, shrinking the weak PDE tests is a lot harder because of the infinite-loop subdomain selection problem in weak PDEs~
EDIT1: I was able to cut the `diffuse_multiple_trajectories` test time by ~90%~ 97% by coarsening the spatiotemporal grid, broadening the integration and coefficient tolerances, only simulating one trajectory, and tweaking the optimizer params.
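For illustration, "coarsening the spatiotemporal grid" for a diffusion fixture might look like the sketch below; the grid sizes, scheme, and variable names are my own illustrative choices, not the actual `diffuse_multiple_trajectories` fixture:

```python
import numpy as np

# Coarse heat-equation simulation: 16 spatial points, 50 time steps
# instead of a fine grid -- plenty for a smoke test of array plumbing.
nx, nt = 16, 50
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2  # respects the explicit-scheme stability limit dt <= dx^2 / 2
u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))  # initial condition

frames = [u.copy()]
for _ in range(nt - 1):
    # Explicit finite-difference step for u_t = u_xx, Dirichlet boundaries
    u[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    frames.append(u.copy())

x_train = np.stack(frames)  # shape (n_time, n_space)
```

Because the diffusion operator smooths everything quickly, even this tiny grid produces a dynamically meaningful dataset for a fit-runs-at-all check.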
~I'll work on the 3D and 5D PDEs, but are you comfortable if I delete them, since they only test on random data and there are already tests for 1d and 2d PDEs?~
EDIT2: I coarsened those integrations, removed ensembling from the `test_ND_pde()` tests, and reduced `derivative_order` to 2, cutting these tests to next-to-nothing. Please see #393 to review changes to PDE/WeakPDE tests. We're finally cooking with grease!!
With the pending PR, this is the final time trial:
======================================= slowest 20 durations ========================================
0.47s call test/test_differentiation.py::test_centered_difference_noaxis_vs_axis
0.46s call test/test_optimizers.py::test_stable_linear_sr3_linear_library
0.38s call test/test_pysindy.py::test_score_pde
0.36s call test/test_optimizers.py::test_target_format_constraints[3-StableLinearSR3]
0.35s call test/test_feature_library.py::test_sindypi_library
0.34s call test/test_optimizers.py::test_target_format_constraints[0-StableLinearSR3]
0.34s call test/test_optimizers.py::test_target_format_constraints[-1-StableLinearSR3]
0.27s call test/test_optimizers_complexity.py::test_complexity_parameter[STLSQ-threshold]
0.26s call test/test_optimizers_complexity.py::test_complexity_parameter[Ridge-alpha]
0.26s call test/test_optimizers.py::test_optimizers_verbose_cvxpy[StableLinearSR3]
0.25s call test/utils/test_odes.py::test_galerkin_models
0.24s call test/test_optimizers_complexity.py::test_complexity_parameter[SR3-threshold]
0.23s call test/test_optimizers.py::test_fit[optimizer6]
0.23s call test/test_optimizers.py::test_normalize_columns[StableLinearSR3]
0.22s call test/test_optimizers.py::test_optimizers_verbose[StableLinearSR3]
0.22s call test/test_optimizers.py::test_sample_weight_optimizers[StableLinearSR3]
0.19s call test/test_optimizers_complexity.py::test_complexity_parameter[Lasso-alpha]
0.18s call test/test_feature_library.py::test_1D_weak_pdes
0.18s call test/test_optimizers.py::test_miosr_equality_constraints[params1]
0.16s call test/test_optimizers.py::test_sindypi_fit[params3]
==================== 579 passed, 1 skipped, 16 deselected, 26 warnings in 13.74s ====================
While there are still a bunch of tests that could be deleted or simplified (I'm giving `test_complexity_parameter` one more chance), especially if the code is refactored, this speed meets my needs.
Thanks, Jake! Totally agree that tests for the actual functionality would be useful. Nice that coarsening helped. Your second consideration is definitely important, I think. Just running vs calculating the right values vs actually fitting data well are all possible, but just running is probably too weak IMO. FYI, the 1d, 2d, 5d thing is a bit of a legacy--the old PDE library hand coded different cases for different dimensions, but in the new library, if something works for 2d, it should work in all cases. Anyway, sorry I still haven't gotten to looking into details; I'm at a conference now and had to rush on a paper submission before leaving.
Ah, good to know about the PDE library legacy tests! Agree that "just running" is probably too weak. At the same time, I think that testing a whole problem, rather than just the library's functionality, is too expansive: it depends on particular SINDy parameters, rather than just library parameters, and fixing any potential bug in the library can cause the integration test to fail.
I think the ideal tests stay limited to just the library. E.g. that WeakSINDy can calculate the integral of a known function multiplied by the test function over a single domain cell. And a separate test for selecting domain cells.
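That first kind of test can be sketched concretely: integrate a known function against a compactly supported polynomial test function over one cell and compare with the closed-form value. The test function and cell below are my own choices, not `WeakPDELibrary` internals:

```python
import numpy as np

# Compactly supported test function phi(x) = (1 - x^2)^2 on the cell [-1, 1],
# integrand f(x) = x^2.  The exact value of the weak-form integral
#   int_{-1}^{1} x^2 (1 - x^2)^2 dx = 2 * (1/3 - 2/5 + 1/7) = 16/105.
x = np.linspace(-1.0, 1.0, 2001)
phi = (1.0 - x**2) ** 2
f = x**2

g = f * phi
numeric = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))  # trapezoid rule
exact = 16.0 / 105.0
assert abs(numeric - exact) < 1e-5
```

A unit test in this shape pins down the quadrature itself, independently of fitting, so a quadrature regression fails here directly instead of surfacing as a mysteriously bad coefficient in an integration test.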
I've been mulling around and proposing a lot of changes, and all of them will require refactoring. Refactoring cycles tend to involve running a lot of tests. On that note, here are all the tests that take a long time. Most of these are optimizer tests (in `pysindy.optimizers`) and shouldn't need to build a full SINDy model; some may be improved by shortening the simulation data.
`if __name__ == "testing"`
(`params4`) so it could be read from test output.
`asv`