Open JordiManyer opened 11 months ago
cc @santiagobadia
Moreover, when we assemble ghost rows we also include some column ids which are NOT needed to multiply the matrix by a column vector. We need these extra rows as a cache for in-place assembly, but these extra ghosts should not be included when communicating during the matrix-vector product.
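For illustration, a minimal sketch of the distinction (plain SparseArrays, not the PartitionedArrays API; the function name and arguments are made up): the only columns required for the product are those with at least one stored entry in an owned row, so ghost columns that appear exclusively in ghost rows would not need to enter the communication buffers.

```julia
using SparseArrays

# Given the locally stored matrix and the locally owned (local) row ids,
# return the local column ids that have at least one stored entry in an owned row.
# Columns touched only by ghost rows act as assembly cache, not as SpMV ghosts.
function cols_needed_for_spmv(A_local::SparseMatrixCSC, owned_rows)
  owned = Set(owned_rows)
  rows = rowvals(A_local)
  needed = Set{Int}()
  for j in 1:size(A_local, 2)
    for k in nzrange(A_local, j)
      if rows[k] in owned
        push!(needed, j)
        break
      end
    end
  end
  needed
end
```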
Yes, this is something that I also realized a while ago.
@JordiManyer The newly published version of PartitionedArrays (0.4.0) should help to fix these issues.
Moreover, when we assemble ghost rows we also include some column ids which are NOT needed to multiply the matrix by a column vector. We need these extra rows as a cache for in-place assembly, but these extra ghosts should not be included when communicating during the matrix-vector product.
Now PSparseMatrix can be in 2 different states (sub-assembled or assembled). When assembled, the matrix has no ghost rows and the ghost columns are only the ones strictly needed for the sparse matrix-vector product. In the sub-assembled state, you have ghost rows and cols (which will usually be the same).
In some specific scenarios (which are not that rare), the owned rows of the matrix contain column identifiers of dofs which are NOT available in the local FESpaces. This means that we will never be able to 100% match the PRanges from the PSparseMatrix and the DistributedFESpace. (See Alberto's note.)
The PRange in your FE space will naturally match the ghost rows/cols of a PSparseMatrix in sub-assembled state.
On top of this, it would be advantageous to reorder the dofs in the local FESpaces so that the dof layout contains owned ids first, then ghost ids. This is how we do things in the linear system, and it would bring us on par with other software like PETSc.
Maybe this is not needed anymore, since PSparseMatrix now considers a split format (like in PETSc).
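For context, here is a minimal sketch of what a split format (as in PETSc's MPIAIJ) means for the local part of the product; the block names A_oo / A_oh are illustrative assumptions, not the actual PartitionedArrays internals:

```julia
using LinearAlgebra, SparseArrays

# Owned rows are stored as two blocks: A_oo acting on owned columns and A_oh on
# ghost columns, so the local update is y_own = A_oo*x_own + A_oh*x_ghost once
# x_ghost has been fetched from the neighbouring parts.
function local_spmv!(y_own, A_oo::SparseMatrixCSC, A_oh::SparseMatrixCSC, x_own, x_ghost)
  mul!(y_own, A_oo, x_own)           # contribution of owned columns
  mul!(y_own, A_oh, x_ghost, 1, 1)   # accumulate contribution of ghost columns
  y_own
end
```

With such a storage the owned-first/ghost-last split is implicit, which is presumably why an explicit reordering of the FE dofs becomes less critical.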
The way you re-compute a sparse matrix has also been improved. You have two main options: either refill the matrix using the COO vectors (1), or reuse a sub-assembled matrix (2).
1.

```julia
A, cache = psparse(I,J,V,row_partition,col_partition;reuse=true) |> fetch
## modify V
psparse!(A,V,cache) |> wait
```
2.

```julia
A = psparse(I,J,V,fe_partition,fe_partition;assemble=false) |> fetch
I,J,V = (nothing,nothing,nothing) # You can destroy the coo vectors at this point
B, cacheB = assemble(A;reuse=true) |> fetch
## Modify the local values of A (this is like in any sequential FE code)
assemble!(B,A,cacheB) |> wait
```
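For illustration, a usage sketch (assuming the PartitionedArrays 0.4 helpers pones and partition; not code from this thread): once B is in assembled state, its column PRange should contain only the ghosts strictly needed for the product, so the mat-vec triggers the minimal exchange.

```julia
# Reuses B from option 2 above.
x = pones(partition(axes(B, 2)))  # vector partitioned like the columns of B
y = B * x                         # sparse matrix-vector product
```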
@fverdugo Thanks for the summary. I've had a look, and I believe some of the issues have indeed been solved.
However, I disagree on the following:
The PRange in your FE space will naturally match the ghost rows/cols of a PSparseMatrix in sub-assembled state.
Maybe @amartinhuertas can correct me if I'm wrong, but my impression after the last discussion we had is that there is no way to match those two PRanges. The main problem is that in some specific cases there are ghost contributions in the matrix that do not belong to ghost dofs in the FESpace. The most common case appears in a DG setting with jump terms, where some owned dofs get contributions from dofs which are two cells away. These dofs do not belong to the FESpace local portions, therefore the PRanges cannot match. In these cases, both the assembled and sub-assembled matrices can have extra ghosts that do not belong to the FESpace.
After discussion, the following action points have been drafted:

- … PSparseMatrix.
- … FEOperator or Solver.
- IndexPartitions in the matrix should be block-wise, so that the owner of a dof can be locally deduced from its gid (see the sketch after this list). This will save us some communication. I believe we are looking for the struct LocalIndicesWithVariableBlockSize.
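A small sketch of why a block-wise partition helps (the names owner_from_gid and offsets are hypothetical, not the LocalIndicesWithVariableBlockSize API): with one contiguous block of gids per part, the owner is found by a local search over the block offsets, with no communication.

```julia
# offsets[p]+1 : offsets[p+1] are the gids owned by part p,
# e.g. offsets = [0, 10, 25, 40] means the parts own 10, 15 and 15 gids.
owner_from_gid(gid, offsets) = searchsortedlast(offsets, gid - 1)

owner_from_gid(12, [0, 10, 25, 40])  # -> 2 (gids 11:25 belong to part 2)
```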
Here are some notes on the issue:
Given these observations, we concluded the following:
- The LocalView type won't be needed anymore.
- FECompatiblePSparseMatrix, that contains …