firedrakeproject / firedrake

Firedrake is an automated system for the portable solution of partial differential equations using the finite element method (FEM)
https://firedrakeproject.org

BUG: Mesh partitioners are not deterministic across different `Ensemble` members #3866

Open JHopeCollins opened 3 hours ago

JHopeCollins commented 3 hours ago

Describe the bug
Standard use of Ensemble is to create a topologically identical mesh on each ensemble.comm. The parallel partition of the mesh on each member should be identical; otherwise the communications over ensemble.ensemble_comm will be mismatched, leading to errors, incorrect results, or parallel hangs. However, the mesh partitioners do not appear to be guaranteed to be deterministic across different ensemble members in the same run, so different partitions can be created.

This bug was observed in #3385 and "fixed" in the Ensemble tests in #3730 by specifying the simple partitioner type, which is suboptimal but deterministic.
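For reference, the workaround looks something like the sketch below (mesh sizes are illustrative only; it assumes the distribution_parameters keyword and the "simple" partitioner type behave as in current Firedrake):

from firedrake import *

ensemble = Ensemble(COMM_WORLD, M=2)
# the "simple" partitioner assigns contiguous chunks of entities to ranks,
# so every ensemble member ends up with the same (suboptimal) partition
mesh = UnitSquareMesh(16, 16, comm=ensemble.comm,
                      distribution_parameters={"partitioner_type": "simple"})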

Probably the best real fix would be to add a method to Ensemble to allow broadcasting a mesh from one ensemble.comm to all others. This would require first broadcasting the DMPlex (possibly quite involved), then broadcasting the coordinates (very straightforward).
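For the coordinate step, a minimal sketch is a plain MPI broadcast over ensemble_comm, assuming the plex broadcast has already made the partitions (and hence the local coordinate ordering) identical on every member:

from firedrake import *

ensemble = Ensemble(COMM_WORLD, M=2)
# assume the partition of this mesh is already identical on every member
mesh = UnitSquareMesh(16, 16, comm=ensemble.comm)

# overwrite each member's coordinates with those of ensemble member 0;
# ensemble_comm is an mpi4py communicator, so Bcast works on the numpy view
ensemble.ensemble_comm.Bcast(mesh.coordinates.dat.data, root=0)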

Steps to Reproduce
The partitioners will usually produce identical partitions, so reproducing the issue reliably is difficult.

Expected behavior
A provided mechanism for ensuring the same partition on different ensemble members.

Error message
Parallel hang.

Environment: Any?

Additional Info

ksagiyam commented 2 hours ago

Provided that we start from a serial or parallel mesh (plexA) that all ensemble members agree on, we could probably take the following steps (a rough sketch follows the list):

  1. One ensemble member calls DMPlexDistribute() on plexA and gets distribution_sf as an output,
  2. Broadcast distribution_sf across the ensemble members,
  3. The ensemble members who received distribution_sf call DMPlexMigrate() (https://petsc.org/release/manualpages/DMPlex/DMPlexMigrate/) on plexA using distribution_sf.
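A rough petsc4py sketch of these steps, assuming plexA is built identically on every ensemble member's ensemble.comm (here via DMPlex.createBoxMesh, purely for illustration). DMPlexMigrate does not appear to have a petsc4py binding, so step 3 is only indicated by a comment:

from firedrake import *
from firedrake.petsc import PETSc

ensemble = Ensemble(COMM_WORLD, M=2)
ecomm = ensemble.ensemble_comm

# the common starting mesh, built the same way on every ensemble member
plexA = PETSc.DMPlex().createBoxMesh([16, 16], simplex=True, comm=ensemble.comm)

if ecomm.rank == 0:
    # step 1: distribute on one member only; DMPlex.distribute() works in
    # place and returns the point distribution SF
    distribution_sf = plexA.distribute(overlap=0)
    graph = distribution_sf.getGraph()  # (nroots, ilocal, iremote)
else:
    graph = None

# step 2: broadcast the SF graph; the (rank, index) pairs it contains are
# valid on every member because the ensemble.comm communicators are congruent
nroots, ilocal, iremote = ecomm.bcast(graph, root=0)

if ecomm.rank != 0:
    distribution_sf = PETSc.SF().create(comm=ensemble.comm)
    distribution_sf.setGraph(nroots, ilocal, iremote)
    # step 3: DMPlexMigrate(plexA, distribution_sf, target_plex), via the C
    # API or a new binding, to reproduce member 0's partition on this member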
JHopeCollins commented 1 hour ago

That's good to know, thanks.

a serial or parallel mesh (plexA) that all ensemble members agree on

What do you mean by this?

ksagiyam commented 54 minutes ago

That means that all ensemble members start from the same representation of conceptually the same mesh. In your case, all ensemble members start from the same serial (undistributed) mesh, I think, but the mesh you start from does not need to be serial in the proposed approach.

JHopeCollins commented 6 minutes ago

I still don't think I understand. The issue is that the ensemble members don't start from the same mesh. A fix would need to guarantee that the ensemble members in the following example all end up with the same mesh partition (which is not guaranteed at the moment):

from firedrake import *
ensemble = Ensemble(COMM_WORLD, M=2)
mesh = UnitSquareMesh(16, 16, comm=ensemble.comm)

I think I'd like something like the following:

from firedrake import *
ensemble = Ensemble(COMM_WORLD, M=2)
root = (ensemble.ensemble_comm.rank == 0)

if root:
    mesh = UnitSquareMesh(16, 16, comm=ensemble.comm)

# hypothetical new method: broadcast the mesh built on ensemble member 0
mesh = ensemble.bcast_mesh(mesh if root else None, root=0)