xogeny / XogenyTest

An example of a testing framework written in 100% Modelica

Enhancement: Test discovery/test runner #10

Open bilderbuchi opened 8 years ago

bilderbuchi commented 8 years ago

It would be great if XogenyTest offered some kind of test runner that could be used/extended in your package: it would discover all tests automatically (e.g. in a Test subpackage), run them in sequence, and report the results back. That way, one would not have to run all the created tests manually every time, which gets tedious with a large number of tests.

I already tried thinking about how best to approach this, but I'm a bit stumped.
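To make the idea concrete, something rudimentary could look like the sketch below. Since Modelica has no reflection, the "discovery" step would have to be a hand-maintained list; all names here are hypothetical, not part of XogenyTest:

```modelica
function runTests "Sketch of a hand-maintained test runner (hypothetical names)"
  output Boolean ok "True only if every registered test passed";
algorithm
  // "Discovery" is manual: every new test has to be registered here by hand.
  ok := MyLib.Tests.testStepResponse() and MyLib.Tests.testRampResponse();
  // Report via assert so a failure also shows up in the tool's log.
  assert(ok, "At least one test failed", AssertionLevel.warning);
end runTests;
```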

kdavies4 commented 8 years ago

Just a couple of notes on how I used XogenyTest:

- I create a Tests package with the same structure as the library under test: each function gets a corresponding test function, and each model gets a corresponding test model.
- I aggregate the test functions into higher-level test functions, following the package hierarchy upwards. A Boolean "ok" result is the Boolean and of the calls to all of the functions in the package and its subpackages (see the sketch below).
- I aggregate the models in a Dymola test script (not pure Modelica, I know) by simulating them one by one. For models that are similar and simple, I guess you could instantiate multiple test models in a higher-level model, but that could get messy for the solver.
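A minimal sketch of that aggregation pattern (MyLib and its interpolate function are hypothetical stand-ins):

```modelica
package Tests "Mirrors the package hierarchy of the library under test"
  package Math
    function check_interpolate "Test for the hypothetical MyLib.Math.interpolate"
      output Boolean ok;
    algorithm
      ok := abs(MyLib.Math.interpolate(0.5) - 1.25) < 1e-6;
    end check_interpolate;

    function runAll "True only if every test in Tests.Math passes"
      output Boolean ok;
    algorithm
      ok := check_interpolate(); // and further tests, combined with 'and'
    end runAll;
  end Math;

  function runAll "Top-level aggregate, following the hierarchy upwards"
    output Boolean ok;
  algorithm
    ok := Math.runAll(); // and Fluid.runAll() and ... for other subpackages
  end runAll;
end Tests;
```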

I know that this doesn't directly address your questions, but maybe it offers some ideas.

xogeny commented 8 years ago

The big issue here is that doing this right requires tool support. My goal with this library was to try to build a consensus around ways of doing testing, with the hope that tool requirements would emerge organically from that. In a nutshell: you need to push the vendors to do more to support what you want.

bilderbuchi commented 8 years ago

@kdavies4 thanks for sharing your workflow! It is helpful; I'll see how far I get with a .mos script or some runner function.
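For the .mos route, a Dymola-style script along these lines might work (simulateModel is Dymola-specific and returns true on success; the model names are made up):

```
// Hypothetical Dymola test script (runTests.mos)
ok := true;
ok := simulateModel("MyLib.Tests.TestFirstOrder", stopTime=5) and ok;
ok := simulateModel("MyLib.Tests.TestPID", stopTime=10) and ok;
Modelica.Utilities.Streams.print(if ok then "All test models passed." else "Some test models FAILED.");
```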

@xogeny yeah, tool support would be useful, but I think first we'd need to have something "finished" that the tools could support and standardize on. Would it not be possible to first build something rudimentary but working (I love how you kept this library simple and thus universal), and then get tool vendors to add convenient support for it? How would one go about that?

Considering that Modelica and Python intersect here and there, I have already toyed with the idea of implementing a Modelica plugin for pytest. That way we could use some of the awesomeness of pytest, but I'm not sure how well it would work, considering Modelica is a "foreign language" for pytest.

OTOH, it would be great to try to stay within Modelica first, before looking elsewhere, if only for dependency/cross-language reasons.

/soapbox It makes me sad that the same (or a similar) thing apparently gets reimplemented over and over in N slightly different, incompatible ways, instead of people sitting together and defining a common, maintained, well-designed solution that everyone can help move forward. I wonder why testing support beyond asserts is not part of Modelica/the MSL; I'm not the only one wondering that.

Frankly, the more I get to know Modelica, the more I am astonished at what is missing, and at how some things simply disappear or are never implemented. What happened to the TestCase annotation that you are using in this library (I can't find a trace of it anywhere else online)? What happened to the many testing solutions that get a paper at a Modelica conference and then disappear without a trace (e.g. MoUnit, OptimicaTestingToolkit)? Do they all get folded into commercial tools? What happened to the exception handling @adrpo proposed in 2008? Or to the partial-derivative support that is even in Fritzson's current book, IIRC?

thorade commented 7 years ago

On GitHub, the TestCase annotation is used by XogenyTest and modelica-compliance (https://github.com/search?l=Modelica&q=TestCase&type=Code), but it is not a "standard" annotation, just a vendor-specific one, it seems.
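For concreteness, such a vendor-specific annotation looks roughly like this; the exact fields differ between the two libraries, so this shape is illustrative only:

```modelica
model TestFirstOrder "Illustrative test model carrying a TestCase annotation"
  Real x(start=0, fixed=true);
equation
  der(x) = 1 - x;
  assert(x < 1 + 1e-6, "x must not overshoot 1");
  annotation (
    TestCase(action="simulate"), // vendor-specific; tools that don't know it ignore it
    experiment(StopTime=5));
end TestFirstOrder;
```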