Closed — MakisH closed this issue 3 years ago
Fully agree.
One small thing: I would not want to add the export to all tutorials. That might be too much output for a normal user who just wants to run a first tutorial. We could instead let the systemtests add the tag before executing the tutorials.
Yes, that's what I mean: we add it only in the system tests. E.g., find every `</participant>` and add `<export:vtk />` before it.
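A minimal sketch of that text substitution (the function name and regex are my own illustration, not the actual systemtests code):

```python
import re

def add_vtk_export(config: str) -> str:
    """Insert an <export:vtk /> tag before every closing </participant> tag,
    reusing the surrounding whitespace so indentation stays consistent."""
    return re.sub(r"(\s*)</participant>",
                  r"\1  <export:vtk />\1</participant>",
                  config)

# Hypothetical fragment of a tutorial's precice-config.xml:
config = """<participant name="Fluid">
  <use-mesh name="Fluid-Mesh" provide="yes" />
</participant>"""

print(add_vtk_export(config))
```

In the systemtests this would run on the cloned tutorial's `precice-config.xml` right before executing the case, so the tutorials themselves stay free of extra export output.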
Closed by #256. The adjustment to the `precice-config.xml` files is done within the systemtest after cloning the tutorial, as stated in the comment above.
tl;dr: Let's make every test case export vtk files of the interface meshes and then only check these files (alongside other consistent preCICE log files). This should cover the same use cases with lower complexity.
Current strategy
At the end of every system test case, we check all the result files, except for files that are expected to differ in every execution (e.g. events, performance measures) and except for specific lines and columns of otherwise consistent log files.
Advantages of the current strategy
Disadvantages of the current strategy
What do we actually want to test?
We want to test for regressions in the behavior of complete systems. The main useful output of these systems is their results, but solvers often print inconsistent information alongside the useful data. If preCICE and the adapters do their job correctly, the solvers' output should stay the same, since we don't (and should not) change what a solver outputs in a given time step.
Proposed strategy
Let's make every test case export vtk files of the interface meshes and then only check these files (alongside other consistent preCICE log files).
Given that we only test transient cases, if anything goes wrong inside an adapter (e.g. in reading data), it should affect the interface mesh results of the next time step. Is there any issue with this assumption?
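Checking only the exported meshes could look roughly like the sketch below: compare two ASCII VTK exports token by token, tolerating tiny floating-point drift. The function, tolerance, and parsing approach are my own assumptions, not the actual systemtests implementation.

```python
import math

def compare_vtk_exports(reference: str, result: str, rel_tol: float = 1e-9) -> bool:
    """Compare two ASCII VTK exports, allowing small floating-point drift.

    Tokens that parse as floats are compared with a relative tolerance;
    everything else (keywords, counts, field names) must match exactly.
    """
    ref_lines, res_lines = reference.splitlines(), result.splitlines()
    if len(ref_lines) != len(res_lines):
        return False
    for ref_line, res_line in zip(ref_lines, res_lines):
        ref_tokens, res_tokens = ref_line.split(), res_line.split()
        if len(ref_tokens) != len(res_tokens):
            return False
        for a, b in zip(ref_tokens, res_tokens):
            try:
                if not math.isclose(float(a), float(b),
                                    rel_tol=rel_tol, abs_tol=1e-12):
                    return False
            except ValueError:
                # At least one token is not a number: require exact equality.
                if a != b:
                    return False
    return True
```

A tolerance-based check like this would also make the tests less fragile across compilers and platforms than a byte-for-byte diff of the result files.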
Advantages of the proposed strategy
Disadvantages