Closed: phlptp closed this issue 6 years ago
Since the examples would be part of a nightly build, do we want to run each example multiple times with different cores?
I think some of the examples are hard-coded to a particular core type; those are fine. For the ones that can take the core type as an argument, I would like to see them run with a couple of different kinds of communication. The intent of this testing is to test the examples themselves rather than HELICS itself; that should be done through the regular HELICS test executables and other specific test cases. The examples exist to provide simple demonstrations of the different features, so testing them means making sure they accomplish what they intend to show. Some have a controllable core type, and in those cases we should test that feature.
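As a rough sketch of sweeping the core-type argument, something like the following could generate one run command per communication type. The binary name `pi_exchange` and the `--coretype` flag are assumptions for illustration; the real examples may take the argument differently.

```python
# Hypothetical sketch: build one command line per core type for an
# example that accepts its core type as an argument.
CORE_TYPES = ["zmq", "tcp", "udp", "ipc"]  # illustrative subset

def build_commands(example_binary, core_types=CORE_TYPES):
    """Return a list of command lines, one per core type."""
    return [[example_binary, "--coretype", core] for core in core_types]

# e.g. commands for a hypothetical federate executable:
cmds = build_commands("./pi_exchange")
for cmd in cmds:
    print(" ".join(cmd))
```

A nightly script could then launch each command (plus a matching broker) and record pass/fail per core type.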
Should the script (bash, python, cmake?) for running the example tests be runnable on all of our target platforms?
My opinion:
1. Priority 1 is making sure they run regularly and actually work as advertised.
2. Priority 2 is getting some nicer scripts to run them on Linux.
3. Priority 3 is adding .bat file scripts as well, in an install directory that packages them all up in one consistent location.
It would probably be good to have these tests use the helics_runner, if that is the route we want to go.
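For reference, a runner-driven test would mostly amount to generating a small JSON config per example. The schema below (`name`/`broker`/`federates` with `exec` and `directory` fields) is my approximation of the helics_runner config format and should be checked against the runner's actual documentation; the federate names and executables are made up.

```python
import json

def make_runner_config(name, federates):
    """Build a helics_runner-style config dict.

    `federates` is a list of (federate_name, exec_command) pairs.
    NOTE: the exact schema here is an assumption, not confirmed
    against the helics_runner docs.
    """
    return {
        "name": name,
        "broker": True,  # let the runner start a broker for the run
        "federates": [
            {"name": fed_name, "exec": exec_cmd, "directory": "."}
            for fed_name, exec_cmd in federates
        ],
    }

config = make_runner_config(
    "pi_exchange_test",
    [("sender", "./pi_sender --coretype zmq"),
     ("receiver", "./pi_receiver --coretype zmq")],
)
print(json.dumps(config, indent=2))
```

The nightly job would write one such file per example and hand it to the runner, which keeps the per-platform scripting thin.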
@phlptp How do we want to check whether the examples ran successfully? Compare the federates' stdout against a set of expected output, or check for some expected value/timestamp appearing in the output of both federates that would indicate the run finished successfully?
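The timestamp approach could look something like the check below: scan a federate's stdout for time-grant lines and require that the expected final time was reached. The `granted time:` log format and the regex are assumptions for illustration; the real examples may print something different.

```python
import re

def run_succeeded(stdout_text, final_time=10.0):
    """Heuristic success check for one federate's captured stdout.

    Looks for lines like 'granted time: 10.0' (an assumed format) and
    reports success if the expected final timestamp was reached.
    """
    times = [
        float(m)
        for m in re.findall(r"granted time[:=]?\s*([0-9.]+)", stdout_text)
    ]
    return bool(times) and max(times) >= final_time

sample = "granted time: 1.0\ngranted time: 10.0\n"
assert run_succeeded(sample)           # reached the final timestamp
assert not run_succeeded("error\n")    # no time grants at all
```

Compared to diffing full stdout against golden files, this is less brittle when examples print timing-dependent or platform-dependent text, at the cost of being a weaker check.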
We have a lot of examples that get built as part of HELICS but they are not executed as part of any test plan currently. We really need to have a set of tests (probably nightly build) that runs the complete set of examples to make sure they all work appropriately.
I think this is going to involve a nightly-build component plus some scripting and publishing of results.