FEniCS / dolfinx

Next generation FEniCS problem solving environment
https://fenicsproject.org
GNU Lesser General Public License v3.0

Mesh refinement not working with ghosted meshes #1013

garth-wells closed this issue 1 day ago

garth-wells commented 4 years ago

Following recent changes to the mesh data structures, refinement of ghosted meshes is broken.
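
For readers unfamiliar with the term: a minimal sketch (plain Python, not dolfinx internals) of what "ghosting" means for a distributed mesh. With a shared-facet ghost mode, each rank stores, besides the cells it owns, copies ("ghosts") of neighbouring ranks' cells that share a facet with one of its own cells. The cell layout and helper names below are invented for illustration only.

```python
def ghost_cells(owned, neighbours_of):
    """Cells not in `owned` that share a facet with an owned cell."""
    ghosts = set()
    for c in owned:
        for n in neighbours_of(c):
            if n not in owned:
                ghosts.add(n)
    return ghosts

# A toy 1D mesh of 8 cells in a row; cell i shares a facet with i-1 and i+1.
def neighbours(c):
    return [n for n in (c - 1, c + 1) if 0 <= n < 8]

rank0 = {0, 1, 2, 3}   # cells owned by "rank 0"
rank1 = {4, 5, 6, 7}   # cells owned by "rank 1"

print(ghost_cells(rank0, neighbours))  # → {4}: rank 0 ghosts rank 1's cell 4
print(ghost_cells(rank1, neighbours))  # → {3}: rank 1 ghosts rank 0's cell 3
```

Refinement has to rebuild this ghost layer consistently on the refined mesh, which is the part that broke.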

chrisrichardson commented 4 years ago

I have posted a partial fix in branch chris/refinement-ghosting. It now refines a ghosted mesh and retains the ghosting when redistributing (if the redistribute flag is set). However, meshes that are not redistributed always become unghosted; that requires a fair bit more work to sort out.
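
The two modes described above can be sketched in plain Python (this is an invented toy model, not dolfinx code): without redistribution each child cell stays on its parent's rank, so an unbalanced partition stays unbalanced; with redistribution the refined cells are repartitioned across ranks.

```python
def refine_cells(owned):
    """Toy uniform refinement: parent cell c becomes children 2c and 2c+1."""
    return {2 * c for c in owned} | {2 * c + 1 for c in owned}

def redistribute(all_cells, nranks):
    """Trivial block repartition of the refined cells across ranks."""
    cells = sorted(all_cells)
    chunk = len(cells) // nranks
    return [set(cells[r * chunk:(r + 1) * chunk]) for r in range(nranks)]

rank0, rank1 = {0, 1, 2}, {3}            # unbalanced initial partition
r0, r1 = refine_cells(rank0), refine_cells(rank1)
print(len(r0), len(r1))                  # → 6 2: no redistribution, still unbalanced
print(redistribute(r0 | r1, 2))          # → [{0, 1, 2, 3}, {4, 5, 6, 7}]: balanced
```

In the non-redistributed path the ghost layer must be rebuilt in place on the refined cells, which is the case the partial fix does not yet handle.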

garth-wells commented 4 years ago

See #1021

jorgensd commented 3 years ago

import dolfinx
from dolfinx import cpp
from dolfinx.mesh import refine
from mpi4py import MPI
import ufl
gm = cpp.mesh.GhostMode.shared_facet
mesh = dolfinx.UnitCubeMesh(MPI.COMM_WORLD, 5, 5, 5, ghost_mode=gm)

mesh.topology.create_entities(1)

mesh2 = refine(mesh, redistribute=False)
vol2 = dolfinx.fem.assemble_scalar(dolfinx.Constant(mesh2, 1) * ufl.dx)
print(MPI.COMM_WORLD.allreduce(vol2, op=MPI.SUM))

raises the following assertion error when run on four processes:

 mpirun -n 4 python3 test_refine_ghost.py 
python3: /home/shared/dolfinx_src/dolfinx/cpp/dolfinx/graph/partition.cpp:637: std::vector<long int> dolfinx::graph::partition::compute_ghost_indices(MPI_Comm, const std::vector<long int>&, const std::vector<int>&): Assertion `insert' failed.
jorgensd commented 1 day ago

I can no longer reproduce this:

import dolfinx
from dolfinx import cpp
from dolfinx.mesh import refine
from mpi4py import MPI
import ufl
gm = cpp.mesh.GhostMode.shared_facet
mesh = dolfinx.mesh.create_unit_cube(MPI.COMM_WORLD, 5, 5, 5, ghost_mode=gm)

mesh.topology.create_entities(1)

mesh2 = refine(mesh, redistribute=False)
vol2 = dolfinx.fem.assemble_scalar(dolfinx.fem.form(dolfinx.fem.Constant(mesh2, dolfinx.default_scalar_type(1)) * ufl.dx))
print(MPI.COMM_WORLD.allreduce(vol2, op=MPI.SUM))