JordiManyer closed this issue 11 months ago
Notes on 1: The issue here is that the dof layout is permuted, but the caches we allocate follow the dof layout of the original fespace. When we then make the PVector consistent, the ghost exchange uses mismatched indices and we communicate the wrong entries. A small sketch of the fix direction follows.
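A minimal, self-contained illustration with plain PartitionedArrays (no Gridap; partition sizes and names are just for the example). The point is only that the cache must be built from the same index partition as the vector it mirrors, otherwise `consistent!` exchanges ghost values for the wrong global ids:

```julia
using PartitionedArrays

np   = 2
rank = LinearIndices((np,))                  # sequential "ranks" for the sketch
row_partition = uniform_partition(rank, 10)  # stand-in for the permuted dof layout

v     = pzeros(row_partition)                # actual dof values
cache = pzeros(row_partition)                # correct: same layout as v, not the
                                             # original fespace's layout
consistent!(cache) |> wait                   # ghost exchange uses matching ids
```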
Notes on 2: In parallel, everything should be fine (I think), since we enforce periodicity locally. In serial, however, there are fewer dofs than we predict procedurally, since some of them have been glued together by the periodicity. See the sketch below.
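A minimal serial illustration with plain Gridap (counts are illustrative, not taken from the actual driver): with periodic boundaries, dofs on opposite faces are identified, so the actual free dof count is smaller than the naive `(n+1)^d` prediction.

```julia
using Gridap

n     = 4
model = CartesianDiscreteModel((0,1,0,1), (n,n); isperiodic=(true,true))
reffe = ReferenceFE(lagrangian, Float64, 1)
V     = TestFESpace(model, reffe)

naive = (n+1)^2            # dof count without periodicity
@show num_free_dofs(V)     # 16: periodic pairs are glued together
@show naive                # 25
```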
Notes on 3: The following is speculative, but I think it points in the right direction: the PETSc elasticity solver assumes we are solving an elasticity problem with a single variable, and computes the matrix kernel (near-nullspace) accordingly. That is not what we are doing. The diagonal blocks of our matrix are indeed linear elasticity blocks, but the overall system is more complex than what the PETSc solver can handle. In my opinion, what we should be looking at is a block preconditioner where each diagonal block is preconditioned using the elasticity solver. At first sight there is no coupling between the different variables, so such a preconditioner should be as good as the one we are using now. A rough sketch of this idea is below.
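A hedged sketch of what such a block preconditioner could look like via PETSc options (the options and the GridapPETSc wiring are assumptions, not the current driver setup): an additive fieldsplit where each diagonal block gets its own AMG solver suited to elasticity. The splits would still have to be defined from the multi-field dof layout (e.g. via index sets), which is not shown here.

```julia
using GridapPETSc

options = """
  -ksp_type cg -ksp_rtol 1.0e-8
  -pc_type fieldsplit -pc_fieldsplit_type additive
  -fieldsplit_0_pc_type gamg
  -fieldsplit_1_pc_type gamg
"""

GridapPETSc.with(args=split(options)) do
  solver = PETScLinearSolver()
  # ... assemble the multi-variable problem and solve with `solver` ...
end
```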
parallel_reinit_order=2.jl
scripts/MPI/MPI_main_inverse_homenisation_AGM.jl