lisawim opened 10 months ago
These things are difficult to verify, and maybe it's not worth getting lost in these tests. If you compare different variants of the same problem in a publication, for instance, you can compute the error with respect to the same reference solution to make sure that the comparison makes sense. You have to do this anyway, in my opinion, because you have to somehow make sure you are solving to the same accuracy in order to compare sensible things. If you do implement such tests, that is of course awesome, but it should not be too high on your list of priorities.

Also, you don't need to test the number of right-hand side evaluations. This is an issue of the sweeper: you don't want to make more evaluations than necessary, but the problem class does not know what is necessary and hence should not worry about it.

By the way, I like the issues. But I think you can open them in the main repo in the future. These issues apply there as well, after all ;)
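To illustrate the point about a common reference solution, here is a minimal self-contained sketch (not pySDC code; the toy problem, step counts, and the scalar IMEX split are all made up for illustration). Two variants of the same problem `u' = (lam_I + lam_E) * u` are integrated and both errors are measured against the *same* reference, so the comparison of their accuracies is meaningful:

```python
import numpy as np

# Hypothetical toy setup (not the pySDC API): integrate u' = (lam_I + lam_E) * u
# with two scheme variants and measure both errors against the SAME reference.
lam_I, lam_E = -2.0, -1.0
u0, T, N = 1.0, 1.0, 200
dt = T / N

# Variant 1: fully implicit Euler on the whole right-hand side
u_imp = u0
for _ in range(N):
    u_imp = u_imp / (1.0 - dt * (lam_I + lam_E))

# Variant 2: IMEX Euler (implicit in lam_I, explicit in lam_E)
u_imex = u0
for _ in range(N):
    u_imex = (u_imex + dt * lam_E * u_imex) / (1.0 - dt * lam_I)

# Common reference: the exact solution u(T) = u0 * exp((lam_I + lam_E) * T)
u_ref = u0 * np.exp((lam_I + lam_E) * T)
err_imp = abs(u_imp - u_ref)
err_imex = abs(u_imex - u_ref)
print(err_imp, err_imex)
```

Since both errors are taken against the same reference, one can check directly that both variants are resolved to comparable accuracy before comparing anything else about them.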
For v6 of pySDC, all problem classes should be tested over a wide range. For those implementing different variants of the problem, such as `IMEX` or `fully_implicit`, it does make sense to test something like "does `eval_f` match?", as it is already realised here (written by @brownbaerchen). In general, it also makes sense to do one time step for testing `solve_system`, computing the output error and comparing it with a threshold.

However, there are many classes also using some problem-dependent tricks, computations, etc. that have to be tested individually. For instance, `allencahn_front_finel` implemented here uses Finel's trick to rewrite the right-hand side (more detailed information about the rewriting can be found in the documentation of the class). For this problem, no meaningful test for `eval_f` exists yet and has to be added:

- [ ] `eval_f` in `allencahn_front_finel`
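The two generic tests described above can be sketched as follows. This is a minimal self-contained illustration, not the actual pySDC problem-class API: the two toy classes, the scalar test equation `u' = (lam_I + lam_E) * u`, and the method signatures are all assumptions made for the example.

```python
import numpy as np

# Hypothetical minimal problem classes (NOT the real pySDC API) for the
# scalar test equation u' = lam_I * u + lam_E * u.
lam_I, lam_E = -2.0, -1.0

class FullyImplicitProblem:
    def eval_f(self, u, t):
        # Full right-hand side, treated implicitly as a whole
        return (lam_I + lam_E) * u

    def solve_system(self, rhs, dt, t):
        # Solve u - dt * f(u) = rhs for u (scalar backward-Euler system)
        return rhs / (1.0 - dt * (lam_I + lam_E))

class IMEXProblem:
    def eval_f(self, u, t):
        # Split right-hand side: implicit and explicit parts
        return lam_I * u, lam_E * u

    def solve_system(self, rhs, dt, t):
        # Only the implicit part enters the linear solve
        return rhs / (1.0 - dt * lam_I)

# Test 1: "does eval_f match?" -- the IMEX parts must sum to the full rhs
u, t = 0.7, 0.0
full = FullyImplicitProblem().eval_f(u, t)
impl, expl = IMEXProblem().eval_f(u, t)
assert np.isclose(full, impl + expl)

# Test 2: one backward-Euler step via solve_system, error below a threshold
dt = 1e-3
u_new = FullyImplicitProblem().solve_system(rhs=u, dt=dt, t=t)
u_exact = u * np.exp((lam_I + lam_E) * dt)
assert abs(u_new - u_exact) < 1e-4  # local error of one implicit Euler step ~ dt^2
```

Problem-specific tests like the one still missing for `allencahn_front_finel` would need a reference computation tailored to the rewritten right-hand side, which this generic pattern cannot provide.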
For the other problem classes:
This entry can be supplemented at any time with ideas for problem-specific tests.