Closed ranocha closed 4 years ago
In GitLab by @sloede on May 11, 2020, 14:36
In general, I would favor having a single directory for parameter files, i.e., not separating parameter files for "testing" from files for "using". My experience is that with separate directories, people tend not to set up tests for new features, and it is harder to determine which features are regularly tested and which are not.
Maybe we could instead create a test parameter file (e.g., `test.toml`) that specifies for each test `t_end` and/or `n_steps_max` (and possibly further attributes, e.g., `basic` and `extended`). This way, whenever you have developed a cool new feature, you just put the parameter file into the `examples/` folder and add a new section to the `test.toml` file with the corresponding information for your new test.
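To illustrate the idea, such a `test.toml` might look roughly like the following sketch; all section names, keys, and values here are made up for illustration and are not part of any actual Trixi infrastructure:

```toml
# Hypothetical test registry: one section per test, pointing at a
# parameter file in examples/ (names and values are illustrative only).
[linear_advection_basic]
parameters_file = "examples/parameters_linear_advection.toml"
t_end = 1.0          # short final time so the test finishes quickly
n_steps_max = 100    # safety limit on the number of time steps
level = "basic"

[blast_wave_extended]
parameters_file = "examples/parameters_blast_wave.toml"
t_end = 0.1
n_steps_max = 500
level = "extended"
```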
There are probably other ways to do this as well, and I think this is not hard to change later on as well, so this should just be seen as a conversation starter.
In GitLab by @ranocha on May 11, 2020, 14:49
That's also okay for me. We can also just use the approach @gregorgassner mentioned: set some low `t_end` and add a comment that `t_end = insane_number` is the usual choice for demanding applications.
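In a parameter file, that approach could look like the following sketch (the parameter name and values are illustrative, not necessarily Trixi's actual ones):

```toml
# Short final time so the automated test finishes quickly.
# t_end = 10.0 is the usual choice for demanding applications.
t_end = 0.1
```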
In GitLab by @ranocha on May 11, 2020, 15:34
mentioned in merge request !43
In GitLab by @ranocha on May 27, 2020, 09:32
Since we already use such an infrastructure and #49 tracks the test coverage, we can close this issue, can't we?
In GitLab by @sloede on May 27, 2020, 09:42
Good call, closing this now.
In GitLab by @sloede on May 27, 2020, 09:42
closed
In GitLab by @sloede on May 9, 2020, 07:55
As suggested by @ranocha in #42, it would be great to have tests run automatically whenever an MR is created/updated or once `master` is changed. Right now, the only testing tool we have is Abraxas.jl, with tests being defined in trixi-tests. However, since it requires a setup process (i.e., cloning & instantiating external packages), it is somewhat cumbersome to set up and not very Julian.

A more Julian way would be to follow @ranocha's lead and use (abuse) Julia's canonical unit testing capabilities for packages, which use `test/runtests.jl` to set up automated testing (docs) and employ the unit testing capabilities in Julia Base with `@test` and `@testset` (docs).

One reason I have shied away from this approach so far is that it does not allow full integration testing in a sane manner. That is, for a full-fledged test of Trixi, you should run an entire simulation setup and then compare all output files with some references. However, for this to work, you have to be able to store (and, more importantly, update) reference files, which quickly and significantly increases the repository size.

However, @erik-f has started to implement a `convtest` function for easy h-convergence testing in !41. Among other changes, `run` now returns the final L2/Linf errors calculated. Therefore, we could use this information together with the already-present parameter files to set up some simple Trixi tests in a Julian way and without the need for a more sophisticated testing infrastructure. This would only confirm that the L2/Linf errors are correct, but in many cases that is enough to detect breaking changes (or errors) in an implementation.

Therefore, I propose the following changes:
- Collect the parameter files in `examples/`.
- Add tests in `test/runtests.jl` that cover a broad range of "features" in Trixi by running the appropriate parameter files from `examples/` and comparing the resulting L2/Linf errors (with a tolerance, of course).
- Run these tests automatically whenever an MR is created/updated and whenever `master` changes.

Trixi "features" that should be tested are:
While this will not provide us with anything like absolute quality assurance, I see this as a quick way forward towards improving confidence that we do not break existing functionality with new commits. In the future, we can even use this to show code coverage results for the automated testing.
As usual, please feel free to discuss and improve!
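To make the proposal concrete, here is a self-contained sketch of such an error-based test using Julia's standard `Test` framework. `run_simulation` is a stand-in for Trixi's actual entry point (which, per !41, returns the final L2/Linf errors); it is stubbed with fixed values here so the snippet runs on its own, and the reference values are made up for illustration:

```julia
using Test

# Stand-in for Trixi's `run`: takes a parameter file and returns the
# final L2/Linf errors of the simulation. Stubbed with fixed values
# here so this sketch is runnable without Trixi.
function run_simulation(parameters_file)
    l2_error   = 9.14468e-6
    linf_error = 6.43745e-5
    return l2_error, linf_error
end

@testset "linear advection (hypothetical reference values)" begin
    l2, linf = run_simulation("examples/parameters_linear_advection.toml")
    # Compare against stored reference errors with a tolerance, instead
    # of storing (and updating) full reference output files in the repo.
    @test isapprox(l2,   9.14468e-6; rtol=1e-10)
    @test isapprox(linf, 6.43745e-5; rtol=1e-10)
end
```

The key design point is that only a handful of scalar reference values per test live in the repository, rather than full simulation output files, which keeps the repository size small while still catching breaking changes.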