Closed jbm950 closed 8 years ago
This looks fine to merge. If you want to setup subfolders, maybe we should mirror the SymPy directory tree.
Hi Jason,
"Any reason for this class name?"
I used that class name because it is the class name used in the Writing Benchmarks documentation. Since you brought it up, however, I agree a more meaningful name such as KanesMethodSuite or KanesMethodTests would be more appropriate and I'll add a commit with the change to both scripts here soon.
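For concreteness, here is a minimal sketch of what the renamed suite could look like, assuming the airspeed velocity (asv) conventions from the Writing Benchmarks documentation (`setup()` runs untimed, only `time_*` methods are measured). The mass-spring-damper system built here is illustrative, not the exact code from this PR:

```python
from sympy import symbols
from sympy.physics.mechanics import (KanesMethod, Particle, Point,
                                     ReferenceFrame, dynamicsymbols)


class KanesMethodSuite:
    """Benchmark KanesMethod on a 1-DoF mass-spring-damper."""

    def setup(self):
        # setup() is not timed by asv; build everything reusable here.
        q, u = dynamicsymbols('q u')
        m, c, k = symbols('m c k')
        N = ReferenceFrame('N')
        P = Point('P')
        P.set_vel(N, u * N.x)
        self.kane = KanesMethod(N, q_ind=[q], u_ind=[u],
                                kd_eqs=[q.diff() - u])
        self.bodies = [Particle('block', P, m)]
        self.loads = [(P, (-k * q - c * u) * N.x)]

    def time_kanes_equations(self):
        # Only this call is measured by asv.
        self.kane.kanes_equations(self.bodies, self.loads)
```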
"If you want to setup subfolders, maybe we should mirror the SymPy directory tree."
I was wondering how much benefit this would add over simply moving the tests to the top benchmarking folder. I feel like mirroring the SymPy directory tree would make it simpler for additional people to add benchmark tests by providing a more concrete structure to use. It would also make it easier and quicker to see which features of SymPy don't have benchmark tests written yet, simply by going to the folder where SymPy holds that feature. On the flip side, I do not know how rigid the SymPy directory tree is; if it changes somewhat frequently, this could create a higher maintenance burden for the benchmarking repository, and it doesn't seem there are too many contributors here in the first place.
If it is decided to mirror the SymPy tree I think it'd be a good idea to only add folders as benchmarks are added rather than putting in the whole structure at once. This would again aid in searching to see if benchmarks have been written by the presence or absence of portions of the directory tree. Also if it is decided to mirror the SymPy directory tree it might be a good idea to add this decision to the README file.
The rest of my questions and comments come from the discussion in the pull request from the main SymPy repository.
Is there a reason you singled out KanesMethod over LagrangesMethod for more in-depth testing? Is it due to KanesMethod being the generally faster method of the two and therefore deserving more attention and focus?
"It may be worth moving the general symbolic models from pydy.models into SymPy so we can use them for benchmarking and testing purposes."
I looked at pydy.models and it is almost completely SymPy rather than PyDy code. The only place PyDy is used is in the return statements, where each of the two functions returns a pydy.system.System object. If the idea of moving the code to the SymPy repository is simply to test and benchmark the functions, the return statement could be removed. This would be the simplest solution; however, it would mean not obtaining the true run time of the models functions because that final function call would be missing. The other choice would be to migrate the pydy.system.System code to SymPy as well. This approach would be more complicated, as that code has more dependencies within PyDy than pydy.models does. It also depends on NumPy and SciPy, and it is my understanding that SymPy makes an effort to avoid external dependencies such as these.
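To illustrate the "remove the return statement" option, here is a hypothetical sketch of what a pydy.models-style function might look like with the pydy.system.System wrapper stripped, so that it is purely symbolic and could live in SymPy. The function name and the specific system are illustrative, not the actual pydy.models code:

```python
from sympy import symbols
from sympy.physics.mechanics import (KanesMethod, Particle, Point,
                                     ReferenceFrame, dynamicsymbols)


def mass_spring_damper_symbolics():
    """Build the symbolic equations of motion for a 1-DoF
    mass-spring-damper and return the KanesMethod object, stopping
    short of wrapping it in a pydy.system.System."""
    m, c, k, g = symbols('m c k g')
    x, v = dynamicsymbols('x v')
    N = ReferenceFrame('N')
    P = Point('P')
    P.set_vel(N, v * N.x)
    block = Particle('block', P, m)
    # Gravity, spring, and damper forces all act along N.x.
    loads = [(P, (m * g - k * x - c * v) * N.x)]
    kane = KanesMethod(N, q_ind=[x], u_ind=[v], kd_eqs=[x.diff() - v])
    kane.kanes_equations([block], loads)
    return kane
```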
As far as pydy.models providing additional opportunities for benchmarking and testing KanesMethod, I did not see any additional calls to KanesMethod's interface outside of forming the equations of motion. Thus the only benefit of adding these to SymPy for benchmarking purposes would be additional equations of motion examples to test, rather than extending the testing across more of KanesMethod's interface (this is assuming that it is desired to run the tests on the functions themselves and not on the objects they return).
It was also suggested to look at the rolling disk example PR, the Whipple Bicycle Model PR and the bicycle equations of motion derivation in test_kane3.py for extending the testing of KanesMethod's interface.
Rolling Disk Example PR - This code does not directly call any methods of KanesMethod in addition to kanes_equations(). It does, however, access the following attributes of KanesMethod: kane.forcing_full and kane.mass_matrix_full. These sorts of functionalities seem to me like they'd be better covered by functionality testing rather than benchmark testing.
Whipple Bicycle Model PR - This code accesses a few more attributes of KanesMethod (kane.mass_matrix, kane.forcing, kane.q.diff(t), kane.q[:] and kane.u[:]) and actually calls kane.kindiffdict().
test_kane3.py in SymPy - This code also calls kane.kindiffdict() and in addition calls KM.linearize() (though it is my understanding that the linearization doesn't work here either; see Issue #9641).
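For reference, the accessors named above do trigger computation and can be exercised on even a tiny system. This sketch uses an illustrative 1-DoF system, not the actual rolling disk or bicycle code from those PRs:

```python
from sympy import symbols
from sympy.physics.mechanics import (KanesMethod, Particle, Point,
                                     ReferenceFrame, dynamicsymbols)

q, u = dynamicsymbols('q u')
m, k = symbols('m k')
N = ReferenceFrame('N')
P = Point('P')
P.set_vel(N, u * N.x)
kane = KanesMethod(N, q_ind=[q], u_ind=[u], kd_eqs=[q.diff() - u])
kane.kanes_equations([Particle('block', P, m)], [(P, -k * q * N.x)])

kd_map = kane.kindiffdict()     # maps each q' to an expression in the u's
M_full = kane.mass_matrix_full  # mass matrix augmented with the
f_full = kane.forcing_full      # kinematic differential equations
```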
Summary of looking into the rolling disk example PR, the Whipple Bicycle Model PR and the bicycle equations of motion derivation in test_kane3.py:

- Methods of KanesMethod exercised (excluding methods that begin with an underscore) (Note: Some of these methods may be called internally)
- Is the benchmarking of the full array of KanesMethod's interface the desired end goal, or are some of these better suited for functionality tests?

Decisions/Clarifications requested:

- Set up subfolders to mirror the SymPy directory tree?
- Should pydy.models be moved to SymPy? If so, is it desired to remove the pydy.system.System call in the return statements?
- Decisions regarding the benchmarking of the full KanesMethod interface
- What about LagrangesMethod?

Please let me know if there is any more information needed from me in order to make some of the above requested decisions.
Lots of stuff to reply to! Here I go:
KanesMethodSuite
Yes, please change.
Directory tree
If you want to set things up to mirror SymPy I think that is a good idea. Please do and I'll +1. The directory structure in SymPy is pretty static except for occasional new directories.
good idea to add this decision to the README file
Yes, please do.
Is there a reason you singled out KanesMethod over LagrangesMethod for more in-depth testing?
No, just first thing that came to my mind. Ideally we'd have a comprehensive example that we use both methods for to get the same result.
pydy.models
If you copy these two examples, just copy the symbolic part. Both of these would be good to use to produce both KanesMethod and LagrangesMethod. These should be strictly symbolic.
did not see any additional calls to KanesMethod's interface
Those two examples are basic systems and use the standard code path in KanesMethod. So they are somewhat equivalent to your benchmarks. But it is nice in that you can scale up the size of the symbolic expressions easily. It would also be useful to have them in SymPy for many other test cases and benchmarks.
This code does not directly call any methods of KanesMethod in addition to kanes_equations()
That is true, but the internal code path in KanesMethod is affected by things passed to the __init__ method. KanesMethod supports holonomic constraints, non-holonomic constraints, auxiliary forces, no forces, etc. We need tests/benchmarks that exercise the full code path in both KanesMethod and LagrangesMethod. Some of the slow bits in the classes are directly related to the combination of arguments passed to the __init__ and form_equations methods.
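As an example of one of those __init__-driven code paths, adding a non-holonomic (velocity) constraint makes KanesMethod solve for dependent speeds, which is a different path than the unconstrained case benchmarked so far. The toy system below (a particle constrained so its two speeds are equal) is illustrative only:

```python
from sympy import symbols
from sympy.physics.mechanics import (KanesMethod, Particle, Point,
                                     ReferenceFrame, dynamicsymbols)

q1, q2, u1, u2 = dynamicsymbols('q1 q2 u1 u2')
m, g = symbols('m g')

N = ReferenceFrame('N')
P = Point('P')
P.set_vel(N, u1 * N.x + u2 * N.y)

# The nonholonomic constraint u2 - u1 = 0 makes u2 a dependent speed,
# sending KanesMethod down its constrained code path.
kane = KanesMethod(N, q_ind=[q1, q2], u_ind=[u1], u_dependent=[u2],
                   kd_eqs=[q1.diff() - u1, q2.diff() - u2],
                   velocity_constraints=[u2 - u1])
fr, frstar = kane.kanes_equations([Particle('P', P, m)],
                                  [(P, -m * g * N.y)])
```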
This code accesses a few more methods of KanesMethod
Yep, we could test out anything in KanesMethod that causes a computation.
to_linearizer()
Very slow for complicated systems.
rhs()
This PR #24 deals with this.
Is the benchmarking of the full array of KanesMethod's interface the desired end goal or are some of these better suited for functionality tests?
We should catalog what examples are slow, profile them, and then create some benchmarks that target the slow parts. Most of the benchmarks will be of low-level SymPy functionality. There are slow things in the *Methods classes, but the real slowdowns come from using those classes to solve big, complex problems that have large expressions and complicated relations. These will only be sped up by fixing things in SymPy's core.
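One way to do that cataloging is to profile an equations-of-motion formulation with the standard library before writing targeted benchmarks. A minimal sketch, using an illustrative toy system rather than one of the slow examples discussed above:

```python
import cProfile
import pstats

from sympy import symbols
from sympy.physics.mechanics import (KanesMethod, Particle, Point,
                                     ReferenceFrame, dynamicsymbols)

# Build a small system to formulate equations of motion for.
q, u = dynamicsymbols('q u')
m, k = symbols('m k')
N = ReferenceFrame('N')
P = Point('P')
P.set_vel(N, u * N.x)
kane = KanesMethod(N, q_ind=[q], u_ind=[u], kd_eqs=[q.diff() - u])

# Profile only the equations-of-motion formulation itself.
profiler = cProfile.Profile()
profiler.enable()
kane.kanes_equations([Particle('block', P, m)], [(P, -k * q * N.x)])
profiler.disable()

# Show the ten most expensive calls by cumulative time; the hot spots
# are candidates for dedicated low-level benchmarks.
stats = pstats.Stats(profiler).sort_stats('cumulative')
stats.print_stats(10)
```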
Set up sub folders to mirror SymPy directory tree?
I'll +1 if you do it, otherwise just leave them all in the top level directory with clear names.
Should pydy.models be moved to SymPy?
Only the symbolics if you do, but it isn't a necessity at the moment.
If so is it desired to remove the pydy.system.System call in the return statements?
Nope, nothing in SymPy should depend on PyDy.
Decisions regarding the benchmarking of the full KanesMethod interface
If there are examples of different computations being slow, then we should benchmark them.
What about LagrangesMethod?
We should look into benchmarking this too.
Hi Jason,
Was it desired to add all of the additional benchmarking in this pull request? If not (i.e. the scope of this PR is making sure everything present works), then I believe this PR is ready for a review.
No, smaller incremental PRs are typically better. If this is ready, let's merge it.
I have some comments. Once addressed I think this is good.
"It is customary that all imports go at the top of files unless there are cyclic import issues."
Done
"Let's stick to pep8 formatted informative names unless there is good reason not to."
The variable names had been from the KanesMethod docstring. They have since been changed to more meaningful names at your suggestion.
"Won't this all fit on one line?"
Yes. This is left over from when the initialization was in the test method, so the inputs had to be accessed through the class (self.attribute). This should be fixed now.
"Remove (fr, frstar) ="
Done
"What does this benchmark do that the above one doesn't?"
The second Lagrange's method test was not added to benchmark different inputs/portions of the LagrangesMethod class, but rather so that there would be a benchmark for both methods that uses the same dynamics situation. If "cutting the fat" is recommended, I would move to get rid of the first test (pendulum) instead of the new test, since the new test is meant to mirror the KanesMethod test.
"This line looks identical to line 38 and both examples are simple systems that do not have non-holonomic constraints, lagrange multipliers, kinematic loops, etc, so I still don't see the difference in what each of these are testing."
I admit they are small differences. The main difference is that the second one tests the same mass spring damper system that is tested with KanesMethod and was included in order to compare the two more directly. Due to the differences being small, I would not be opposed to removing the pendulum from the tests, as it does not offer much additional testing that is not covered by the mass spring damper tests.
The extraneous Lagrange test has been removed.
Looks good. Merging.
Created a benchmark test for KanesMethod and LagrangesMethod in a new subfolder, mechanics. The tests run the imports and most of the code in a setup statement and only time the run of EOMMethod() and form_equations().
@moorepants