Closed johnomotani closed 4 months ago
The `interpolate_to_grid_1d!()` function uses the fact that both the original coordinate grid and `newgrid` are sorted (i.e. monotonically increasing values) to speed up the search. `kstart` is a set of indices that chunks `newgrid` into sections that each lie within a single element of the original grid.
Doing the interpolation element-by-element (in the original grid) makes sense whether we use Lagrange polynomials or FFTs-with-Chebyshev - the lookups, e.g. for grid points within the element, only need to be done once per element, rather than once per interpolated point.
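The chunking idea can be sketched as follows. This is a minimal Python illustration, not the actual moment_kinetics code: the function name `chunk_targets_by_element` and the use of `bisect` are my own; the real implementation works on Julia arrays. Because both the element boundaries and the interpolation targets are sorted, each element's targets form one contiguous slice.

```python
import bisect

def chunk_targets_by_element(element_boundaries, newgrid):
    """Split sorted interpolation targets into per-element chunks.

    Both `element_boundaries` and `newgrid` are sorted, so a sweep of
    binary searches finds, for each element, the contiguous range of
    `newgrid` points falling inside it. Returns a `kstart`-like list
    such that newgrid[kstart[i]:kstart[i+1]] lies within element i.
    Illustrative names only, not the moment_kinetics implementation.
    """
    kstart = [bisect.bisect_left(newgrid, b) for b in element_boundaries[:-1]]
    kstart.append(len(newgrid))
    return kstart

boundaries = [0.0, 1.0, 2.0, 3.0]          # three elements
targets = [0.1, 0.5, 1.2, 1.9, 2.5, 2.9]   # sorted interpolation points
kstart = chunk_targets_by_element(boundaries, targets)
# points in element i are targets[kstart[i]:kstart[i+1]]
```

With this layout, any per-element setup (node lookups, basis weights) is done once per chunk rather than once per interpolated point.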
It's true that Radau elements were not correctly handled. I've just fixed that.
I've just noticed that when using Gauss-Legendre elements in a moment-kinetics run, a lot of time (about half the run time) is spent in the `interpolate_to_grid_1d!()` function, so I'm going to have a go at optimising it a bit more. The optimisation will probably need more per-element lookups, so the current structure of the function does seem like the right thing to me.
Edited to add: testing for a Radau first element will always be required, because it is needed to determine whether the range between 0 and `coord.grid[1]` is part of the grid (Radau element) or whether extrapolation to the left of `coord.grid[1]` might be needed (Lobatto element).
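The distinction can be made concrete with a small sketch. This is purely illustrative (the helper name and boolean flag are hypothetical, not part of moment_kinetics): for a point below the first stored grid point, a Radau first element means the interval down to 0 is still inside the domain, whereas a Lobatto first element means the point is outside the grid.

```python
def needs_extrapolation(x, first_grid_point, first_element_is_radau):
    """Decide how to treat a target point x relative to the first stored
    grid point. Radau first element: (0, first_grid_point] belongs to
    the grid, so 0 < x < first_grid_point is interpolated within that
    element. Lobatto first element: such x is left of the grid and must
    be extrapolated. Hypothetical helper, for illustration only."""
    if x >= first_grid_point:
        return False               # inside the stored grid proper
    if first_element_is_radau and x > 0.0:
        return False               # interior of the Radau first element
    return True                    # left of a Lobatto grid: extrapolate

print(needs_extrapolation(0.05, 0.1, True))   # False: inside Radau element
print(needs_extrapolation(0.05, 0.1, False))  # True: extrapolation needed
```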
Enables the `interpolate_1d!()` function when using `gausslegendre_pseudospectral`, using Lagrange polynomial evaluation.

Also optimises `second_derivative!()` when using `gausslegendre_pseudospectral`, by removing the re-evaluation of per-element matrices, replacing it by using the 'global' `K_matrix` and `L_matrix`.

Adds partial handling of periodic dimensions to the weak-form matrices in `gauss_legendre` - without that, this update would have broken the second-derivative test in `calculus_tests.jl`. The handling is only partial because it does not support distributed-MPI. It does not seem worth trying to work out how to support distributed-MPI with weak-form derivatives at the moment, because that would require a distributed-MPI matrix solver. The closest options seem to be MUMPS.jl and PETSc.jl - both interfaces to external packages that do support MPI, but the Julia interfaces only partially support MPI and, as far as I can see, do not yet support what we would need to do. The only use-case for distributed-MPI at the moment would be the second derivative in parallel thermal conduction for 'Braginskii electrons', which will run well enough with only shared-memory parallelism for the moment.
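For reference, Lagrange polynomial evaluation on an element's nodes looks like the following. This is a minimal Python sketch of the general technique, not the PR's actual implementation (which may, for example, use precomputed barycentric weights); the function name and example values are mine.

```python
def lagrange_interpolate(xnodes, fvals, x):
    """Evaluate the Lagrange interpolating polynomial through the points
    (xnodes[j], fvals[j]) at x, by direct summation of the Lagrange
    basis polynomials. Exact for polynomials of degree < len(xnodes).
    Illustrative sketch only."""
    total = 0.0
    for j, xj in enumerate(xnodes):
        basis = 1.0
        for k, xk in enumerate(xnodes):
            if k != j:
                basis *= (x - xk) / (xj - xk)
        total += fvals[j] * basis
    return total

nodes = [-1.0, 0.0, 1.0]
vals = [x**2 for x in nodes]                   # f(x) = x^2 on the nodes
print(lagrange_interpolate(nodes, vals, 0.5))  # 0.25
```

Within each element of a spectral-element grid, the interpolated value at every target point in that element reuses the same node set, which is why the per-element chunking above pays off.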
As this PR adds `gausslegendre_pseudospectral` to the interpolation tests, the amount of time taken to set up a `gausslegendre_pseudospectral` discretization got more annoying, so I've extended the `init_YY=false` option to skip calculation of a few more matrices that are only used for the collision operator - and renamed it `collision_operator_dim`, since now not all the matrices are 'YY' matrices.