JaredCrean2 opened this issue 9 years ago
The second point about changing interfaces raises a good idea: we should be doing more test-driven development, where you write the test first (and thereby decide on the interface) and then write the code.
Here's a place to start: write a test that checks that EulerFlux(q) and RoeSolver(q,qg) give the same flux when q = qg (and all other data are consistent).
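A sketch of what that test could look like, assuming (this is an assumption, not the actual API) that `EulerFlux(q, nrm)` and `RoeSolver(q, qg, nrm)` each return the flux vector in the direction of the face normal `nrm`; the real functions likely take additional arguments (equation parameters, auxiliary variables, a preallocated output array), so the call sites would need to be adapted.

```julia
using Test

# Assumed signatures (not the actual API): EulerFlux(q, nrm) and
# RoeSolver(q, qg, nrm) each return the flux vector in direction nrm.
q   = [1.0, 0.5, 0.3, 2.5]   # conservative variables: rho, rho*u, rho*v, E
qg  = copy(q)                # ghost/boundary state identical to the interior state
nrm = [1.0, 0.0]             # face normal

F_analytic = EulerFlux(q, nrm)
F_roe      = RoeSolver(q, qg, nrm)

# Consistency: when the two states agree, the numerical (Roe) flux must
# reduce to the analytical Euler flux.
@test isapprox(F_analytic, F_roe; atol=1e-13)
```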
I've been thinking about how to test the code, and here is what I have come up with:
Also, I was able to build Pumi successfully, both locally and on Travis. I ran a small case on my laptop successfully, but have not yet tried one on Travis. Currently none of Pumi's dependencies are built, so it can't load balance and a few auxiliary features might not work, but the core mesh capabilities are functional.
Made a bunch of progress: 9d7a7c8
We should add a reduced version of the convergence-rate tests to our test suite. Maybe 3 or 4 meshes for each p would be sufficient. Verifying that the convergence rate is still as expected is a strong confirmation that a change didn't break anything.
Checking the convergence rate would be a great test, but it can be tricky. If we change anything to do with the discretization or the iterative solver, the error can change slightly. We could compute the slope and consider the test passed provided it falls within a reasonable tolerance of the expected slope. We should also check the absolute error, since the slope might not change but the error could shift up or down.
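A minimal sketch of that check, using made-up mesh sizes and errors in place of values produced by the solver; the tolerances (0.2 on the slope, 2e-4 on the finest-mesh error) are placeholders that would need to be chosen per test case.

```julia
using Test

# Least-squares slope of log(error) vs. log(h): the observed order of accuracy.
function convergence_rate(h::Vector{Float64}, err::Vector{Float64})
  x = log.(h)
  y = log.(err)
  A = hcat(ones(length(x)), x)
  coeffs = A \ y            # linear least-squares fit
  return coeffs[2]          # slope
end

# Hypothetical data standing in for errors measured on 4 meshes.
h   = [0.2, 0.1, 0.05, 0.025]
err = [1.0e-2, 2.6e-3, 6.4e-4, 1.6e-4]

rate = convergence_rate(h, err)
expected_rate = 2.0

@test abs(rate - expected_rate) < 0.2   # slope within a tolerance band
@test err[end] < 2.0e-4                 # absolute error on the finest mesh also bounded
```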
I just noticed that the RK4 method has zero tests. Someone (other than me) should figure out some tests.
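One option is a method-of-exact-solutions test: integrate an ODE with a known solution and check both the absolute error and the fourth-order error reduction when the step size is halved. The sketch below uses a stand-in RK4 stepper, since the signature of the RK4 routine in this repository isn't shown here; a real test would call that routine instead.

```julia
using Test

# Stand-in classical RK4 stepper; the repository's own rk4 routine
# (whatever its actual signature) would be called instead in a real test.
function rk4(f, y0, tspan, dt)
  t, y = tspan[1], y0
  nsteps = round(Int, (tspan[2] - tspan[1])/dt)
  for _ in 1:nsteps
    k1 = f(t, y)
    k2 = f(t + dt/2, y + dt/2*k1)
    k3 = f(t + dt/2, y + dt/2*k2)
    k4 = f(t + dt,   y + dt*k3)
    y += dt/6*(k1 + 2k2 + 2k3 + k4)
    t += dt
  end
  return y
end

f(t, y) = -y                      # dy/dt = -y has the exact solution exp(-t)
exact = exp(-1.0)

e1 = abs(rk4(f, 1.0, (0.0, 1.0), 0.1)  - exact)
e2 = abs(rk4(f, 1.0, (0.0, 1.0), 0.05) - exact)

@test e1 < 1e-6           # small absolute error on a coarse step
@test 12 < e1/e2 < 20     # halving dt shrinks the error by roughly 2^4 = 16
```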
@k1nshuk As @jehicken mentioned, we need testing, lots of it. The obstacles to testing that I see are:
I'm not sure about the second one, but the first sounds like something we should be able to solve. I think the options are either building Pumi on Travis or updating SimpleMesh to match the interface of PumiMesh2. I will send an email to Cameron and Dan to find out about the Pumi possibility, but we should also figure out what we want to do with SimpleMesh.