Open amartinhuertas opened 2 years ago
In this article (which also discusses continuity of the tangent vectors), there are no jumps explicitly mentioned (see eqn 7 for example): https://www.sciencedirect.com/science/article/pii/S0021999111003226
Hi @davelee2804 ! The jumps are defined here, yes? (Again, as in the other papers, with a + and not a -, i.e., the normals are built into the jump, so no extra n \cdot is needed.)
Also, I will need to implement a more sophisticated version of \tau (as |u \cdot n|, not 1).
In my experience, if \tau is not set to an appropriate value, one of the effects can be that the method does not converge at the expected order as you refine the mesh.
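As a concrete sketch of the stabilization parameter mentioned above: with \tau = |u \cdot n| the penalty follows the local flow across each face, instead of penalising all faces equally as the constant choice \tau = 1 does. This is an illustrative Python snippet (the project itself is in Julia), and the name `tau_upwind` is hypothetical, not from the codebase:

```python
import numpy as np

def tau_upwind(u, n):
    """Upwind-style HDG stabilization on a face: tau = |u . n|.

    u : advecting velocity evaluated at a face quadrature point
    n : unit outward normal of the face
    """
    return abs(np.dot(u, n))

# Example: a face whose normal is aligned with the x-axis only "sees"
# the normal component of the velocity.
u = np.array([2.0, 1.0])
n = np.array([1.0, 0.0])
print(tau_upwind(u, n))  # 2.0
```

The tangential component of u contributes nothing, so faces aligned with the flow are penalised weakly, which is the intended upwinding behaviour.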
Hey @amartinhuertas , yes I see the definition, but it is not clear to me that it is applied in (7). I see the last two penalty equations, but there is no explicit reference to [[.]] in there (that I can see).
Also, I just implemented the proper version of \tau. Results are marginally improved (final solution and entropy conservation). However, this is a very simple test; the difference would probably be much greater for a more sophisticated velocity field (which I will implement over the weekend...)
> I see the last two penalty equations, but there is no explicit reference to [[.]] in there (that I can see).
Ok. Do you understand why this equation

< [[u]], v >_e = 0, for all edges e in the mesh and all test functions v, where [[u]] = u^+ \cdot n^+ + u^- \cdot n^-,

is equivalent to

< u \cdot n, v >_{\partial K} = 0, for all cells K in the mesh?
If you unroll the first equation for a given edge e, you obtain

< u^+ \cdot n^+, v >_e + < u^- \cdot n^-, v >_e = 0.

If you consider the two cells around e, say K^+ and K^-, then

< u^+ \cdot n^+, v >_{\partial K^+} + < u^- \cdot n^-, v >_{\partial K^-}

contains < u^+ \cdot n^+, v >_e + < u^- \cdot n^-, v >_e.
> Also, I just implemented the proper version of \tau. Results are marginally improved (final solution and entropy conservation). However, this is a very simple test; the difference would probably be much greater for a more sophisticated velocity field (which I will implement over the weekend...)
Great! Thanks for your work @davelee2804 !
Note that an integral over \partial K is nothing but a sum of integrals over the edges of K with the integrand restricted to each edge seen from the perspective of K.
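A quick numerical sanity check of this identity, on a 1D periodic mesh so that every edge is interior (illustrative Python, not part of the codebase): summing the jump pairing over edges gives the same total as summing the outward-flux pairing over cell boundaries, here with the test function v = 1.

```python
import numpy as np

# 1D periodic mesh with 4 cells; u has a (possibly discontinuous) trace
# on each side of every edge, as in DG/HDG methods.
ncell = 4
rng = np.random.default_rng(0)
u_left = rng.standard_normal(ncell)   # trace at the left endpoint of cell k
u_right = rng.standard_normal(ncell)  # trace at the right endpoint of cell k

# Cell-boundary form: sum_K < u . n, 1 >_{dK}; in 1D, n = -1 at the left
# endpoint and n = +1 at the right endpoint of each cell.
cell_sum = sum(u_right[k] * (+1.0) + u_left[k] * (-1.0) for k in range(ncell))

# Edge form: sum_e < [[u]], 1 >_e with [[u]] = u^+ . n^+ + u^- . n^-.
# Edge e sits between cell e-1 (its "-" side, n^- = +1) and cell e
# (its "+" side, n^+ = -1); periodicity makes every edge interior.
edge_sum = sum(u_right[e - 1] * (+1.0) + u_left[e % ncell] * (-1.0)
               for e in range(ncell))

print(np.isclose(cell_sum, edge_sum))  # True
```

Each edge contribution appears exactly once in the edge sum and exactly twice (once from each neighbouring cell, with opposite normals) in the cell-boundary sum, which is the equivalence discussed above.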
Thank you @amartinhuertas ! I had not appreciated the subtlety that, for example, in Kang et al. eqns (14) and (15) the first skeleton integral is over \partial K while the second is over e. This makes much more sense now! Looking at (7) in Nguyen et al. I see that all the skeleton integrals are over \partial T, which is consistent with the "integrand restricted to each edge seen from the perspective of K", as you say.
I am now implementing a convergence study for the solid body rotation test (second order in space + time), to verify that everything is OK.
A few days ago I got some nonsensical results with order 4, so I will also double check that as well...
> Thank you @amartinhuertas ! I had not appreciated the subtlety that, for example, in Kang et al. eqns (14) and (15) the first skeleton integral is over \partial K while the second is over e. This makes much more sense now! Looking at (7) in Nguyen et al. I see that all the skeleton integrals are over \partial T, which is consistent with...
Exactly. You can write hybridizable methods either edge by edge, or by grouping edges into cell boundaries \partial K, for each cell K. The second approach is the one required to implement static condensation locally at each cell, towards assembling the global problem on the skeleton, which is the main motivation for hybridizable methods.
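For illustration, here is what static condensation looks like algebraically (a Python sketch with hypothetical names, assuming a generic cell-local 2x2 block system coupling interior unknowns u to skeleton unknowns l): the interior unknowns are eliminated via a Schur complement, leaving a smaller system on the skeleton, and are recovered afterwards by back-substitution.

```python
import numpy as np

def condense(A, B, C, D, f, g):
    """Eliminate the cell-local unknowns u from the block system
        [A B] [u]   [f]
        [C D] [l] = [g]
    returning the Schur complement S and reduced rhs h, with S l = h."""
    Ainv_B = np.linalg.solve(A, B)
    Ainv_f = np.linalg.solve(A, f)
    S = D - C @ Ainv_B
    h = g - C @ Ainv_f
    return S, h

def recover(A, B, f, lam):
    """Back-substitute the skeleton solution to recover cell unknowns."""
    return np.linalg.solve(A, f - B @ lam)

# Verify against the monolithic solve on a random well-conditioned system.
rng = np.random.default_rng(1)
n_u, n_l = 5, 3
A = rng.standard_normal((n_u, n_u)) + 5 * np.eye(n_u)
B = rng.standard_normal((n_u, n_l))
C = rng.standard_normal((n_l, n_u))
D = rng.standard_normal((n_l, n_l)) + 5 * np.eye(n_l)
f = rng.standard_normal(n_u)
g = rng.standard_normal(n_l)

S, h = condense(A, B, C, D, f, g)
lam = np.linalg.solve(S, h)
u = recover(A, B, f, lam)

full = np.block([[A, B], [C, D]])
ref = np.linalg.solve(full, np.concatenate([f, g]))
print(np.allclose(np.concatenate([u, lam]), ref))  # True
```

In an HDG code the condensation is done cell by cell (each cell contributes its own small S and h to the global skeleton system), which is why grouping the edge integrals into \partial K is the natural formulation for implementation.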
> I am now implementing a convergence study for the solid body rotation test (second order in space + time), to verify that everything is OK. A few days ago I got some nonsensical results with order 4, so I will also double check that as well...
Ok. Let us see.
Hey @amartinhuertas , just a heads up that the convergence is sub-optimal. There is too much dispersion, perhaps as a result of how I have formulated the time stepping. I'll take a closer look at this...
> Hey @amartinhuertas , just a heads up that the convergence is sub-optimal. There is too much dispersion, perhaps as a result of how I have formulated the time stepping. I'll take a closer look at this...
I would first try to solve a steady advection problem with a manufactured PDE solution. This way we eliminate noise related to the time discretization.
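For reference, a manufactured solution for steady advection can be derived symbolically: pick an exact solution and a velocity field, then compute the forcing term that makes them satisfy the PDE. Everything below (the chosen exact solution, the velocity, the conservative form div(u c) = f) is an illustrative example in Python/SymPy, not the actual test configuration:

```python
import sympy as sp

# Method of manufactured solutions for steady advection div(u c) = f:
# choose an exact scalar c and a divergence-free velocity u, then derive
# the forcing f symbolically.
x, y = sp.symbols("x y")

c_exact = sp.sin(sp.pi * x) * sp.cos(sp.pi * y)         # chosen exact solution
u = sp.Matrix([-sp.sin(sp.pi * y), sp.sin(sp.pi * x)])  # divergence-free field

f = sp.diff(u[0] * c_exact, x) + sp.diff(u[1] * c_exact, y)

# Sanity check: u is divergence-free, so div(u c) = u . grad(c).
print(sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y)))  # 0
print(sp.simplify(f))
```

With f imposed as a source term, the discrete solution can be compared against c_exact directly, isolating the spatial discretization error from any time-stepping noise.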
Thanks @amartinhuertas , yes great point, this is the correct approach, I will do this.
Just a heads up: I changed the time integration to a stiffly-stable, second-order implicit Runge-Kutta scheme, and now the solid body rotation test converges at the correct rate (second-order convergence for the second-order space+time discretisation) after a single revolution around the sphere (at least for the low resolutions that I have checked).
The entropy conservation error is also converging quadratically, so that is also a good sign...
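For reference, one common stiffly-accurate, L-stable, second-order implicit Runge-Kutta scheme is the two-stage SDIRK method with \gamma = 1 - 1/\sqrt{2} (this particular choice is an assumption for illustration; the thread does not name the exact scheme used). The Python sketch below applies it to the scalar test ODE y' = \lambda y, where each implicit stage can be solved in closed form, and checks the observed order:

```python
import numpy as np

GAMMA = 1.0 - 1.0 / np.sqrt(2.0)  # two-stage SDIRK, L-stable, order 2

def sdirk2_step(lam, y, h):
    """One step for y' = lam*y. Butcher tableau:
       gamma   | gamma     0
       1       | 1-gamma   gamma
       --------+----------------
               | 1-gamma   gamma   (stiffly accurate: b = last row of A)"""
    k1 = lam * y / (1.0 - h * GAMMA * lam)
    k2 = lam * (y + h * (1.0 - GAMMA) * k1) / (1.0 - h * GAMMA * lam)
    return y + h * ((1.0 - GAMMA) * k1 + GAMMA * k2)

def integrate(lam, y0, T, n):
    y, h = y0, T / n
    for _ in range(n):
        y = sdirk2_step(lam, y, h)
    return y

# Observed order on y' = -y over [0, 1]: halving h should quarter the error.
exact = np.exp(-1.0)
e1 = abs(integrate(-1.0, 1.0, 1.0, 20) - exact)
e2 = abs(integrate(-1.0, 1.0, 1.0, 40) - exact)
print(np.log2(e1 / e2))  # ~ 2
```

Being stiffly accurate (the final stage coincides with the update) and L-stable is what damps the spurious high-frequency modes that a non-dissipative integrator would let disperse.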
Hi @amartinhuertas , I've added a convergence test for the solid body rotation configuration of the HDG advection solver. I'm wondering if now is a good time to do a code review and merge this branch, before starting work on the linear wave equation with jump penalisation on the pressure gradient tangent component - what do you think?
...SBRAdvectionHDG tests are failing in github-ci (but passing on my laptop) : https://github.com/gridapapps/GridapGeosciences.jl/runs/7274708478?check_suite_focus=true#step:9:1037
Should I also commit the Manifest.toml?
> Should I also commit the Manifest.toml?
Yes, because we are pointing to versions of some of the packages which are not yet registered in the Julia registries.
Also, please pull from master if you haven't done so lately. I see in this PR some changes that are already in master, so they should not be in this PR.
> Hi @amartinhuertas , I've added a convergence test for the solid body rotation configuration of the HDG advection solver.
I see in the attached plot that there is a mild degradation in slope for the highest resolution that you tested. Have you tried even higher resolutions to be 100% sure that the convergence is actually quadratic?
> I see in the attached plot that there is a mild degradation in slope for the highest resolution that you tested
Hi @amartinhuertas , higher resolutions are to the left on the x-axis, so the convergence rate is actually increasing as resolution increases. Does that make sense? Have I missed your point somehow?
...Ie: convergence rate from dx=0.12 to 0.06 is higher than from dx=0.24 to 0.12. Does that make sense?
> ...Ie: convergence rate from dx=0.12 to 0.06 is higher than from dx=0.24 to 0.12. Does that make sense?
Ok, you are right, I was confused. The convergence curve looks like a piecewise-linear function with two pieces: the right piece seems parallel to the theoretical convergence, while the left piece seems to bend down a little, with a slope that is no longer parallel to the theoretical one. But, as you say, bending down means even faster convergence, not the opposite. (That was my confusion.)
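The observed rates between consecutive resolutions can be read off directly from the (dx, error) pairs; a Python sketch, where the error values are made up purely for illustration (not the actual test data):

```python
import numpy as np

def observed_orders(dx, err):
    """Convergence rate between consecutive resolutions:
    rate_i = log(err_i / err_{i+1}) / log(dx_i / dx_{i+1})."""
    dx, err = np.asarray(dx, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(dx[:-1] / dx[1:])

# Illustrative values roughly mimicking the behaviour discussed above:
# slightly below second order on the coarse pair, slightly above on the
# fine pair (i.e., the curve "bends down" to the left).
dx = [0.24, 0.12, 0.06]
err = [1.0e-2, 2.7e-3, 6.6e-4]
print(observed_orders(dx, err))
```

A rate that increases as dx decreases (the left piece bending below the theoretical line on a log-log plot) is superconvergence on those levels, not degradation.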
> I'm wondering if now is a good time to do a code review and merge this branch
Yes, it might be a good time to do this. The only thing is that I have some other code reviews in the queue, so I won't be able to do this immediately.
Apart from this code review, we need: 1) To investigate issue https://github.com/gridapapps/GridapGeosciences.jl/issues/36 2) To fix (if actually required) and test the geometrical discretization of the sphere using an analytical mapping.
No problem. Note that for the SBRAdvectionHDGTests file that I committed, convergence between the lowest two resolutions is sub-optimal, but quadratic convergence is achieved between the next two.
No problem @amartinhuertas , there is no rush from me. Please let me know if there is anything I can do to help with any of the issues you mention. We are using an isoparametric mapping right now, is that correct?
> We are using an isoparametric mapping right now, is that correct?
We are using biquadratic elements for the geometry and bilinear elements for the unknown. I think the term "isoparametric" refers to those scenarios in which the polynomial order used to approximate the geometry matches the order used to approximate the unknown.
Aaah, yes, that is what the 2 is for in the CubedSphereDiscreteModel :) OK. Is there a particular reason we need to support an analytical mapping as well?
> Is there a particular reason we need to support an analytical mapping as well?
We have been using an analytical mapping in all the results we obtained with GridapGeosciences + compatible FEs so far.
If we can represent the geometry exactly, it is better to stick to that instead of approximating it, so that we eliminate any noise caused by geometrical errors (integration errors apart). I remember, e.g., that with a bilinear approximation of the cubed sphere we were not able to obtain the theoretical convergence rates of RT+DG FEs for Darcy on the sphere.
OK, fair enough. To my mind, supporting iso- or super-parametric mappings (as you are already doing) is better: if we ever want to add mountains we will have to ditch the analytic mapping anyway, so ensuring everything works for a numerical mapping makes that transition easier. But I take your point, analytic is nicer...
Yes, sure, better to have both.
Super WIP. Opening PR in draft mode ... just for discussion purposes