You need to change the ghost_mode to shared_facet; see for instance https://jsdokken.com/dolfinx_docs/meshes.html#ghosting, or grep this repository for unit tests.
Something like

from mpi4py import MPI
import dolfinx

mesh = dolfinx.mesh.create_unit_square(
    MPI.COMM_WORLD, 10, 10, ghost_mode=dolfinx.mesh.GhostMode.shared_facet)

will fix the issue.
Setting the ghost mode to shared_facet doesn't change the results of the above script. In fact, shared_facet is the default ghost mode for create_unit_square, as listed in the docs.
Well, then you need something similar to https://fenicsproject.discourse.group/t/consistent-side-of-interior-boundaries-with-dolfinx/15968/2.

The n("+") term looks fishy to me. There are no guarantees on which side is "+" and which side is "-". DG schemes are normally written such that it doesn't matter which side is +/-.
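To make the distinction concrete, here is a minimal sketch (my own illustration, not the reporter's MWE) of a linear form whose assembled values depend on the arbitrary "+"/"-" labelling, next to a jump-based form that does not:

from mpi4py import MPI
import ufl
import dolfinx

mesh = dolfinx.mesh.create_unit_square(
    MPI.COMM_WORLD, 10, 10, ghost_mode=dolfinx.mesh.GhostMode.shared_facet)
V = dolfinx.fem.functionspace(mesh, ("Discontinuous Lagrange", 1))
v = ufl.TestFunction(V)
n = ufl.FacetNormal(mesh)

# Side-dependent: each entry's sign follows whichever cell is labelled "+",
# and that labelling is not guaranteed to be stable across partitions.
L_sided = n("+")[0] * v("+") * ufl.dS

# Side-independent: jump(v, n) = v("+")*n("+") + v("-")*n("-") is invariant
# under swapping the two restrictions, so the assembled vector is too.
L_jump = ufl.jump(v, n)[0] * ufl.dS

The jump-based form assembles to the same global vector (up to dof ordering and round-off) no matter which side each rank treats as "+".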
So, in light of that, we're OK with the results of an integral involving an n("+") term changing with the number of MPI processes? I guess that makes sense, given it wouldn't affect an actual DG scheme (which this is not). That would make this not a bug, because we give no guarantees about the choice between +/- or about that choice remaining stable across different numbers of processes.
It might be worth including some sort of warning about this in the docs.
I have written up demos that illustrate how to control the orientation yourself: for one-sided integration at
https://github.com/jorgensd/dolfinx-tutorial/issues/158
and
https://fenicsproject.discourse.group/t/wrong-facetnormal-vector-on-internal-boundaries/12887/2
and for custom orientation of jumps at
https://gist.github.com/jorgensd/5a8b60121a705bb9896eb1465be8e111
(there for a bifurcation mesh, but the approach works in general).
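For completeness, there is also a pure-UFL workaround, different from (and less general than) the integration-entity approach in the demos above: orient facet quantities with a cell indicator function, so the result no longer depends on which side UFL labels "+". A rough sketch, where the subdomain x < 0.5 and the names chi and n_oriented are my own choices:

from mpi4py import MPI
import ufl
import dolfinx

mesh = dolfinx.mesh.create_unit_square(
    MPI.COMM_WORLD, 10, 10, ghost_mode=dolfinx.mesh.GhostMode.shared_facet)

# DG0 indicator of the (hypothetical) subdomain x < 0.5
Q = dolfinx.fem.functionspace(mesh, ("Discontinuous Lagrange", 0))
chi = dolfinx.fem.Function(Q)
chi.interpolate(lambda x: (x[0] < 0.5).astype(dolfinx.default_scalar_type))
chi.x.scatter_forward()

n = ufl.FacetNormal(mesh)
# chi("+") - chi("-") is +1 when the "+" cell lies in the marked subdomain
# and -1 when it lies outside, so this normal always points out of the marked
# subdomain, independently of the arbitrary "+"/"-" labelling.
n_oriented = (chi("+") - chi("-")) * n("+")

# Example use: the integrand vanishes away from the interface, because
# chi("+") - chi("-") is zero on facets interior to either subdomain.
V = dolfinx.fem.functionspace(mesh, ("Discontinuous Lagrange", 1))
v = ufl.TestFunction(V)
L = ufl.avg(v) * n_oriented[0] * ufl.dS

Note this is a different trick from the integration-entity reordering in the linked demos; it relies only on standard UFL restrictions.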
Summarize the issue
When assembling a vector from an interior facet integral, the vector sum and norm change with the number of MPI processes. While mesh partitioning does affect the order of degrees of freedom, it shouldn't change these aggregate metrics.
How to reproduce the bug
Within the Docker image dolfinx/dolfinx:stable, I ran the MWE in parallel with varying numbers of processes.

Minimal Example (Python)
Output (Python)
Version
0.9.0
DOLFINx git commit
4116cca8cf91f1a7e877d38039519349b3e08620
Installation
Used the Docker image dolfinx/dolfinx:stable
Additional information
Manual inspection of some of the vector entries indicates that the absolute value is correct, but the sign is flipped. However, as the mismatch in norm values indicates, that's not the only issue. This issue doesn't appear for exterior facet or cell integral types. I suspect the issue may have something to do with how the ordering of cells corresponding to a given facet changes in parallel, but am not sure yet.