Closed: rbooth200 closed this 7 years ago
I think what we have now is good in the sense that SPH and the meshless now take similar timesteps. The question for SPH is how good using h Div v in the denominator is compared to using Delta v. They should be pretty similar, I suspect: h Div v should be of order Delta v (with a possible factor of ndim in front). We should probably discuss this; I had a look at the SEREN paper but there are not many details about this. If the answer is "at the moment we don't know", then I think we should test it in the paper and accept this for now. The meshless is already slower than SPH and we don't want to make it another factor of 2 slower. Btw, both Gadget and the Monaghan paper have a factor of 3 in front of the relative velocity.
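As a quick numerical sanity check (a standalone sketch, not code from the repo), here is a comparison of the two denominators on a 1D linear-shear flow, where h Div v is known analytically and the neighbour search is a brute-force cut at a kernel extent of 2h (an assumption for this sketch):

```python
import numpy as np

# Hypothetical 1D comparison of the two denominators for the SPH
# Courant condition: h * |Div v| versus max_j |Delta v_ij| over
# neighbours within the kernel extent (taken here as 2h).
def compare_denominators(n=100, h=0.05, grad=1.0):
    x = np.linspace(0.0, 1.0, n)
    v = grad * x                      # linear shear: Div v = grad everywhere
    h_divv = h * abs(grad)            # h |Div v|, analytic for this flow
    # max |v_i - v_j| over all pairs with |x_i - x_j| <= 2h
    dx = np.abs(x[:, None] - x[None, :])
    dv = np.abs(v[:, None] - v[None, :])
    dv[dx > 2.0 * h] = 0.0            # zero out non-neighbour pairs
    return h_divv, dv.max()

h_divv, max_dv = compare_denominators()
print(h_divv, max_dv)
```

For this flow the neighbour-based Delta v comes out close to 2 * h |Div v|, i.e. the two estimates agree up to an order-unity factor, consistent with the expectation above.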
It's definitely worth thinking about this, sure. Just to add to some of the points above:
1) The signal velocity approach is the one taken in all of Daniel Price's papers and is probably the better one to use. Daniel also uses a beta in front of the velocity term (as in the quadratic viscosity term), which is typically 2*alpha = 2. I'm not sure if there's any reason for any of these numbers (2, 3, etc.) other than trying certain tests and seeing what value gives good results.
2) I'm not sure where the factor of 2 in the GIZMO paper comes from. Either he put it there because the kernel extent is 2h rather than h, or because the denominator is in effect 2c, so that in a uniform medium with no velocities the timestep becomes dt = CFL * 2h / (2c) = CFL * h / c, which is what we should be aiming for, I guess.
3) The Godunov SPH papers of Inutsuka et al. (which describe a scheme not too dissimilar to the MFV/MFM schemes) claim that they can use CFL numbers close to 1 (~0.9+) due to the 'extra stability' afforded by the Riemann solvers. Assuming that's true, then in principle the MFV should also allow higher Courant numbers than SPH, right?
4) Many of the initial tests with the MFV used the older, less stable slope limiters, and having the Courant number too high led to more oscillations. However, if we have a more stable limiter now, then that should also allow us to get away with a higher CFL number?
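For what it's worth, the arithmetic in point 2 can be written out explicitly. In a uniform medium at rest, max_j v_sig{i,j} = cs_i + cs_j = 2c, so with the factor of 2 from the GIZMO paper:

```latex
\Delta t \;=\; C_{\rm CFL}\,\frac{2h}{\max_j v_{{\rm sig},ij}}
         \;=\; C_{\rm CFL}\,\frac{2h}{2c}
         \;=\; C_{\rm CFL}\,\frac{h}{c},
```

which matches the SPH Courant step h/c for the same configuration.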
After further discussion, I'm accepting this for now as it's clearly an improvement and it doesn't break anything. Further discussion on the best timestep criterion to use (especially for SPH, where we might also decide to use v_sig) is deferred to the paper. In particular, we will need to test the case of a strong shock.
I've had a look into why the meshless is taking so many steps compared to SPH. The Courant criteria differ between the two schemes. In SPH we use h_i / (cs_i + h_i Div v_i), while in the meshless we were using h_i / max_j(v_sig{i,j}), where v_sig{i,j} = cs_i + cs_j - max(dv, 0).
You can see straight away why, for tests like the soundwave, the meshless is going to take twice as many steps. I've changed the Courant condition in the meshless to 2 * h_i / max_j(v_sig{i,j}), which is how it is defined in the GIZMO paper. Now SPH and the meshless should take closer to the same number of time-steps.
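A minimal before/after sketch (hypothetical numbers, not GANDALF's actual routines): in a uniform, stationary medium the old meshless condition gives half the SPH timestep, and the factor of 2 restores parity:

```python
# Hypothetical check of the Courant conditions discussed above,
# in a uniform medium at rest (Div v = 0, all sound speeds equal).
cfl, h, cs = 0.2, 0.05, 1.0

dt_sph = cfl * h / cs               # SPH: h_i / (cs_i + h_i Div v_i)
v_sig_max = cs + cs                 # max_j v_sig{i,j} = cs_i + cs_j when dv = 0
dt_old = cfl * h / v_sig_max        # old meshless condition
dt_new = cfl * 2.0 * h / v_sig_max  # with the GIZMO-style factor of 2

print(dt_sph, dt_old, dt_new)
```

The old condition comes out at half the SPH value, while the new one matches it, which is why the meshless was taking roughly twice as many steps on the soundwave test.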
An important question is: which criterion is more correct? I'm not really sure, but it's worth noting that even with the above criterion our timestep condition is more stringent than GIZMO's, since they use a definition of h that is twice as big. From the GIZMO paper and Monaghan (1997), I'd expect the new expression to be stable up to Courant numbers of ~0.4-0.5. This is still much greater than the 0.15-0.25 that we typically use in the tests.
I'd like your thoughts on this, but it seems to me that both SPH and the meshless should produce similar timestep estimates for the soundwave problem. So I'd suggest we either accept this change, or modify the Courant condition used in SPH to be based on the signal speed and increase the Courant numbers in the tests to ~0.6.