Closed by fdrmrc 9 months ago
I've added a test that performs sanity checks, also using a fully distributed tria, and exploits the partitioning strategy described above. The relevant changes are:
I quickly tested with the Rtree and the sanity check runs smoothly. Here are the agglomerates in each partition:
*(figures: local agglomerates on Process 0, Process 1, and Process 2)*
Finally, I've tested this with a fully distributed tria constructed from an external mesh (see https://dealii.org/developer/doxygen/deal.II/namespaceTriangulationDescription_1_1Utilities.html#a0411d757cd85a77d25bbb9303af93de7).
This automatically gives continuous partitions (it uses METIS), so no repartitioning policy is required. Moreover, the generated agglomerates look nicer. (As before, with $3$ processes and 10 agglomerates per process.)
*(figures: local agglomerates on Process 0, Process 1, and Process 2)*
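For concreteness, here is a minimal sketch of how such a fully distributed tria can be built from an external mesh with METIS-based partitioning; the file name, dimension, and mesh format below are placeholders, not the actual test setup:

```cpp
// Sketch: build a parallel::fullydistributed::Triangulation from an external
// mesh, letting METIS produce the (continuous) partitions. The file name,
// dimension, and mesh format are placeholders, not the actual test setup.
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/fully_distributed_tria.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_description.h>

#include <fstream>

using namespace dealii;

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);
  const MPI_Comm comm = MPI_COMM_WORLD;

  constexpr unsigned int dim = 2;

  // Read the external mesh into a serial triangulation on every rank.
  Triangulation<dim> serial_tria;
  GridIn<dim>        grid_in(serial_tria);
  std::ifstream      mesh_file("external_mesh.msh"); // placeholder file name
  grid_in.read_msh(mesh_file);

  // Partition the cells among the ranks; the default partitioner is METIS,
  // which yields continuous partitions.
  const unsigned int n_ranks = Utilities::MPI::n_mpi_processes(comm);
  GridTools::partition_triangulation(n_ranks, serial_tria);

  // Convert the serial, partitioned triangulation into a fully distributed one.
  const auto description =
    TriangulationDescription::Utilities::create_description_from_triangulation(
      serial_tria, comm);

  parallel::fullydistributed::Triangulation<dim> tria(comm);
  tria.create_triangulation(description);

  // ...then call PolyUtils::partition_locally_owned_regions() on tria...
}
```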
Nice. Very nice. (Y)
I'm merging this (I have already tested it on some 3D meshes).
To be merged after #85.

This PR enables the partitioning of the locally owned regions of a distributed mesh through the function `PolyUtils::partition_locally_owned_regions()`. What this function does is essentially the same as `SparsityTools::partition()`, but local to each process: the idea is to call METIS within each partition, in order to generate a given number of agglomerates in each locally owned region.

Doing this requires some additional steps, because the partitions generated by p4est can be discontinuous; see partitions $1$ and $4$ in the next picture.
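Before turning to that issue: for reference, here is a minimal, illustrative sketch of the serial counterpart, i.e. partitioning all cells of a triangulation into agglomerates with `SparsityTools::partition()`; the new function performs an analogous METIS call, but restricted to the cells each rank owns. The helper below is illustrative and not part of the PR.

```cpp
// Illustrative serial counterpart: partition all cells of a triangulation into
// n_agglomerates groups with METIS through SparsityTools::partition().
// PolyUtils::partition_locally_owned_regions() performs the analogous
// operation, but on the locally owned cells of each MPI rank.
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/sparsity_pattern.h>
#include <deal.II/lac/sparsity_tools.h>

#include <vector>

using namespace dealii;

template <int dim>
std::vector<unsigned int>
partition_into_agglomerates(const Triangulation<dim> &tria,
                            const unsigned int        n_agglomerates)
{
  // Build the cell-connectivity graph: cells are nodes, shared faces are edges.
  DynamicSparsityPattern cell_connectivity;
  GridTools::get_face_connectivity_of_cells(tria, cell_connectivity);

  SparsityPattern graph;
  graph.copy_from(cell_connectivity);

  // Let METIS split the graph into the requested number of agglomerates.
  std::vector<unsigned int> cell_to_agglomerate(tria.n_active_cells());
  SparsityTools::partition(graph,
                           n_agglomerates,
                           cell_to_agglomerate,
                           SparsityTools::Partitioner::metis);

  // cell_to_agglomerate[cell->active_cell_index()] holds the agglomerate id.
  return cell_to_agglomerate;
}
```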
This implies that METIS will throw, because the graph local to each process is not connected. The workaround is to use a `parallel::fullydistributed::Triangulation` and to repartition this triangulation before calling `partition_locally_owned_regions()`, in order to have continuous partitions within each process. On that hypercube with a strided partitioner, asking for 10 agglomerates in each partition with a triangulation distributed among $3$ processes amounts to a call like the one sketched below.
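A hedged sketch of what such a call could look like follows; the exact signature of `PolyUtils::partition_locally_owned_regions()` and the setup of the hypercube with the strided partitioner are assumptions here, not the original snippet.

```cpp
// Sketch only: the signature of PolyUtils::partition_locally_owned_regions()
// and the triangulation/partitioner setup are assumed, not the original code.
const MPI_Comm comm = MPI_COMM_WORLD;

parallel::fullydistributed::Triangulation<2> tria(comm);
// ...build the hypercube and repartition it with the strided partitioner...

// Ask for 10 agglomerates inside the locally owned region of each rank.
const unsigned int n_local_agglomerates = 10;
PolyUtils::partition_locally_owned_regions(n_local_agglomerates,
                                           tria,
                                           SparsityTools::Partitioner::metis);
```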
Such a call generates the following local agglomerates on each of the $3$ MPI ranks:
Do you see any potential issue, @luca-heltai?