janmodderman opened this issue 3 months ago
Hi @janmodderman ... as far as I know, this space is not actually supported in a parallel distributed-memory setting. You should use an FE space with the `:zeromean` constraint instead (assuming that you are willing to impose that the mean value of the discrete pressure is equal to zero).
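For reference, a minimal serial sketch of that alternative (the mesh and the pressure order are placeholders, just to illustrate the keyword):

```julia
using Gridap

# Placeholder mesh and order
model = CartesianDiscreteModel((0,1,0,1), (8,8))
reffe_p = ReferenceFE(lagrangian, Float64, 1; space=:P)

# Pressure space constrained so that the discrete pressure has zero mean
Q = TestFESpace(model, reffe_p; conformity=:L2, constraint=:zeromean)
P = TrialFESpace(Q)
```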
I'm actually interested in using a rigid object that is only defined on the domain boundary, and hence in describing all nodes on this boundary with one DOF, so I'm not sure if `:zeromean` will do here 🤔
Ok, I see. We typically use `ConstantFESpace` to implement Lagrange multipliers; these can be used, e.g., to impose the mean value of the pressure. But here it seems that you are using it for a different purpose?
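For context, a minimal serial sketch of that typical usage (a pure-Neumann Poisson problem is assumed here purely for illustration):

```julia
using Gridap

model = CartesianDiscreteModel((0,1,0,1), (8,8))
reffe = ReferenceFE(lagrangian, Float64, 1)

V = TestFESpace(model, reffe; conformity=:H1)
Λ = ConstantFESpace(model)   # a single global DOF: the Lagrange multiplier
Y = MultiFieldFESpace([V, Λ])
X = MultiFieldFESpace([TrialFESpace(V), Λ])

Ω = Triangulation(model); dΩ = Measure(Ω, 2)
f(x) = x[1] - 0.5

# The multiplier terms enforce the zero-mean condition ∫ u dΩ = 0
a((u, λ), (v, μ)) = ∫( ∇(u)⋅∇(v) + λ*v + u*μ )dΩ
l((v, μ)) = ∫( f*v )dΩ

op = AffineFEOperator(a, l, X, Y)
uh, λh = solve(op)
```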
Yes, we use the Lagrange multipliers to constrain the nodal DOFs on the object boundary such that the motion reduces to rigid body motion (currently up to a maximum of 6 DOFs: 3 translational + 3 rotational).
Just to add a comment (maybe @amartinhuertas can correct me): I believe this is deliberately NOT implemented in distributed. The implementation is not the issue here; it is actually quite straightforward. The issue is that something like what we have in Gridap is never going to be scalable, since the DOF would have to tie all processors together (therefore growing the number of neighbors of the processor owning the DOF). Is this correct?
I don't know if there is a workaround that would make this scalable.
> Is this correct?
Yes. Essentially (in the simplest case), you have a new full (dense) row and a new full (dense) column added to the end of the original matrix (i.e., the one without constraint).
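In block form, with $A$ the unconstrained matrix, $c$ the constraint column, and $\lambda$ the multiplier (the zero right-hand side corresponds, e.g., to a zero-mean constraint):

```math
\begin{pmatrix} A & c \\ c^\top & 0 \end{pmatrix}
\begin{pmatrix} u \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix}
```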
The new full row has to be owned by one of the processors (as any other row in the matrix).
As a consequence, e.g., when doing matrix-vector products, the processor that owns the full row has to send a point-to-point message with a single entry to every other processor, i.e., a sort of broadcast operation implemented in terms of point-to-point communication. Something similar happens during assembly, but the other way around, i.e., all processors send their local contributions to that row to the processor owning the full row. It would be more scalable to use broadcast and gather, respectively, but I think this implies an important refactoring of the assembly process.
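To make the pattern concrete, here is a schematic MPI sketch (not GridapDistributed's actual code, and assuming MPI.jl's keyword-style API) contrasting the point-to-point variant with the collective one:

```julia
using MPI

MPI.Init()
comm  = MPI.COMM_WORLD
rank  = MPI.Comm_rank(comm)
owner = 0                          # rank owning the full (dense) row
λ = [rank == owner ? 1.0 : 0.0]    # the single multiplier entry

# Point-to-point variant (what the text describes): the owner sends the
# entry to every other rank, one message per neighbor.
if rank == owner
    for dest in 0:MPI.Comm_size(comm)-1
        dest == owner && continue
        MPI.Send(λ, comm; dest=dest, tag=0)
    end
else
    MPI.Recv!(λ, comm; source=owner, tag=0)
end

# Collective variant that would scale better: a single broadcast.
MPI.Bcast!(λ, owner, comm)

MPI.Finalize()
```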
> I don't know if there is a workaround that would make this scalable.
All this is assuming that you are willing to assemble the matrix with the full row and full column. An alternative to consider is whether it might be beneficial to condense the constraint in a clever way (e.g., as per Shephard's 1984 paper). At the end of the day, you can think of this condensation as a triple matrix product, C^T A C, where A is the matrix without constraints and C is a matrix built out of the constraints. Indeed, the MFEM library pursues this approach for hanging-node constraints (see https://epubs.siam.org/doi/10.1137/18M1193992). It might also be useful to see how other libraries solve this, e.g., FEniCS, Firedrake, etc.
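A toy sketch of the condensation idea (the constraint pattern below is hypothetical, tying DOFs 5 and 6 to DOF 4):

```julia
using SparseArrays

n = 6
# A: unconstrained stiffness-like matrix (1D Laplacian as a stand-in)
A = spdiagm(0 => 2.0*ones(n), 1 => -ones(n-1), -1 => -ones(n-1))
f = ones(n)

# C maps the 4 reduced (master) DOFs to the 6 full DOFs;
# rows 5 and 6 copy DOF 4, i.e., they are constrained to move with it
C = sparse([1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 4, 4], ones(6), n, 4)

Ac = C' * A * C   # condensed matrix on the reduced DOFs
fc = C' * f       # the right-hand side condenses the same way
```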
Hi all,
I've been trying to use a `ConstantFESpace` with the GridapDistributed.jl package, but within the definition of the reference space for the `ConstantFESpace`, I get an error: the function `setup_cell_reffe` requests a `::Gridap.Geometry.DiscreteModel` and cannot use a `::GridapDistributed.GenericDistributedDiscreteModel`. I attached the error I obtained below. I have a reproduction file, but I cannot attach it here.
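For reference, a minimal sketch of the kind of setup that triggers the error (the mesh, partition, and PartitionedArrays-style driver are assumptions, not the actual reproduction file):

```julia
using Gridap
using GridapDistributed
using PartitionedArrays

with_debug() do distribute   # with_mpi() in a real parallel run
    ranks = distribute(LinearIndices((4,)))
    model = CartesianDiscreteModel(ranks, (2, 2), (0, 1, 0, 1), (8, 8))
    # ConstantFESpace dispatches to setup_cell_reffe, which expects a
    # Gridap.Geometry.DiscreteModel, so the distributed model errors here
    Λ = ConstantFESpace(model)
end
```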
@oriolcg @JordiManyer
Best regards,
Jan