Yes, definitely a question which is interesting to investigate. What is a reasonable test case to compare with? Do you have a specific problem/method in mind which we should use as reference?
Perhaps everybody's favorite Burgers equation? It would also be good to throw explicit time stepping into the mix, because if the mesh quality is compromised, the time step restriction can bring the whole solver to a halt.
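For reference, that would mean the inviscid Burgers equation in conservative form (just the standard textbook statement, nothing specific to this repo):

$$\partial_t u + \partial_x\bigl(\tfrac{1}{2}u^2\bigr) = 0,$$

which develops sharp fronts even from smooth data, so the difference with and without adaptation should be easy to see.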
For the purpose of the runtime comparison, I extended the moving example by a notebook cell in which we only evaluate the FV scheme, without adaptation. In this example we just solve the transport equation, but I think that is sufficient. I attached a screenshot below of how it looks, since this is not yet visible on the RTD page. The comparison also nicely shows the advantage of moving the interface to keep the front sharp.
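For context, the non-adaptive cell essentially performs an explicit first-order upwind FV update of the transport equation on a fixed mesh. A minimal 1D sketch of that idea (all names here are illustrative, not the actual code of the example):

```python
import numpy as np

def upwind_step(u, dx, dt, a=1.0):
    """One explicit first-order upwind FV step for u_t + a*u_x = 0 with a > 0."""
    flux = a * u                                      # upwind flux per cell
    return u - dt / dx * (flux - np.roll(flux, 1))    # periodic boundary

# a sharp front advected to the right; this is what smears out without adaptation
x = np.linspace(0.0, 1.0, 200, endpoint=False)
dx = x[1] - x[0]
u = np.where(x < 0.3, 1.0, 0.0)
dt = 0.4 * dx                                         # CFL-limited step for a = 1
for _ in range(100):
    u = upwind_step(u, dx, dt)
```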
It is a fair question whether we should not rather look at explicit time stepping. I will implement some adaptive time stepping for the same example so that we can see the influence of the mesh quality.
Time stepping in the example is now explicit. The minimal time step size with the adapted mesh is 0.0065, in contrast to 0.0354 for the standard version, due to the mesh quality. Of course, this increases the overall runtime of the adapted scheme. Remarkably, the solution of the standard scheme is significantly less diffuse now.
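To make the connection to mesh quality explicit: the step size is essentially a CFL bound driven by the smallest cell, so squeezed cells in the adapted mesh directly shrink the admissible dt. Roughly (illustrative names, not the code of the example):

```python
import numpy as np

def cfl_time_step(cell_sizes, velocity, cfl=0.5):
    """Largest stable explicit step: dt = cfl * min(h) / max(|v|).

    Squeezed, low-quality cells reduce min(h) and therefore force a smaller dt,
    which is where the 0.0065 vs 0.0354 difference comes from.
    """
    h_min = np.min(cell_sizes)
    v_max = np.max(np.abs(velocity))
    return cfl * h_min / v_max
```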
So with explicit (implicit) time stepping, running without mesh adaptation is about 10x (7x) faster than with mesh adaptation? Would it be possible to normalize this based on the global solution error?
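What I have in mind is basically a work-precision comparison: put the runtime and the error it buys side by side for each variant and mesh size. Just a sketch, with made-up variable names:

```python
import numpy as np

def l2_error(u_num, u_exact, cell_volumes):
    """Discrete L2 error of the numerical solution, weighted by cell volumes."""
    return np.sqrt(np.sum(cell_volumes * (u_num - u_exact) ** 2))

# one work-precision point per variant: runtime together with the accuracy it buys
# results = {"standard": (runtime_std, err_std), "adapted": (runtime_adapted, err_adapted)}
# for name, (t, err) in results.items():
#     print(f"{name}: runtime = {t:.2f} s, L2 error = {err:.3e}, runtime*error = {t * err:.3e}")
```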
Then we could try reducing the mesh adaptation frequency. I assume it is adapting every time step, but likely this is not necessary.
I think this is a very useful experiment by the way, thanks for showing it. In my experience, cost-benefit analysis is the first thing that comes up during any kind of presentation w.r.t. mesh adaptation.
Yes, it's about 10x, but the runtime factor decreases a little when I refine the mesh (with half the mesh size it is about 4x).
I am not really sure how you would normalise the error with respect to runtime? When we compare the L2 errors of both solutions, we get an improvement of about 5x, but this might change under mesh refinement.
It is also important to note that there might be a structural advantage depending on the exact solution you choose. For instance, a piecewise constant solution with a discontinuity only at the interface would have zero error in the adapted version. This is another argument for not only looking at runtime here: you might get a structural benefit that you prefer.
The mesh adaptation frequency can be reduced by checking whether any cells have been marked, and I added this check, but it seems that the mesh needs/wants to be adapted in most of the steps.
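The check is essentially: skip the adaptation step whenever the marking flags no cells. A rough sketch (the names are illustrative, not the actual API):

```python
def maybe_adapt(mesh, solution, mark_cells, adapt):
    """Run the (expensive) adaptation step only if the marking flags any cells.

    `mark_cells` and `adapt` are stand-ins for whatever marking/adaptation
    routines the example actually uses.
    """
    marked = mark_cells(mesh, solution)
    if marked.any():              # skip adaptation entirely when nothing is marked
        mesh, solution = adapt(mesh, solution, marked)
    return mesh, solution
```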
Standard explicit FV:
With adaptation:
Okay, I appreciate the complexity of the analysis. I think the notebooks are helpful for user applications.