anupzope opened this issue 5 years ago
This code is used when MAST computes the values that are placed in the force vector. Since you will be computing the nodal pressure load contribution externally, we will need to add a separate route for adding that contribution to our Newton-Raphson solver.
At this point, you should try to add a code block similar to the following (this is conceptually correct, but will need to be massaged slightly to fix bugs and to work in an MPI environment):
libMesh::NumericVector<Real>& loads = system.add_vector("loads");

unsigned int
sys_num = 0,  // number of this system in the EquationSystems object
u_displ = 0,  // variable numbers of the three displacement components
v_displ = 1,
w_displ = 2;

libMesh::MeshBase::node_iterator niter    = mesh.nodes_begin();
libMesh::MeshBase::node_iterator niterend = mesh.nodes_end();

for ( ; niter != niterend; ++niter) {
    const libMesh::Node& n = **niter;
    loads.set(n.dof_number(sys_num, u_displ, 0), f_x);
    loads.set(n.dof_number(sys_num, v_displ, 0), f_y);
    loads.set(n.dof_number(sys_num, w_displ, 0), f_z);
}

loads.close();  // finalize the vector after all values have been set
Here f_x, f_y, and f_z are the force values for this node, computed by your FSI module.
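If your FSI module can be queried by position, the nodal coordinates are available directly from the node, since a libMesh::Node is also a libMesh::Point. A minimal sketch, in which fsi_nodal_force is a hypothetical stand-in for your FSI module's actual query interface:

#include "libmesh/libmesh_common.h"  // defines libMesh::Real
#include "libmesh/node.h"

using libMesh::Real;

// Hypothetical FSI query: fills f_x, f_y, f_z with the force at point (x, y, z).
// Substitute the actual query function of your FSI module here.
void fsi_nodal_force(Real x, Real y, Real z,
                     Real& f_x, Real& f_y, Real& f_z);

// Evaluates the FSI load at a single node; a libMesh::Node is also a
// libMesh::Point, so n(0), n(1), n(2) are its x, y, z coordinates.
void nodal_load(const libMesh::Node& n, Real& f_x, Real& f_y, Real& f_z) {
    fsi_nodal_force(n(0), n(1), n(2), f_x, f_y, f_z);
}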
Question: When using MPI parallelism, does the node iteration loop result in iterations over only the local nodes on each process?
This depends on the kind of iterator you instantiate. libMesh::MeshBase::nodes_begin() will create an iterator over all nodes in the local copy of the mesh (see here), while libMesh::MeshBase::local_nodes_begin() will create an iterator over only the nodes that are owned by the current rank (see here). There are also other types of iterators (see the documentation of MeshBase), but these two should suffice for now.
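For the MPI case, here is a minimal sketch of the same loop restricted to the locally owned nodes. It assumes the loads vector and variable numbers from the code above and reuses the hypothetical fsi_nodal_force query from the earlier sketch; the final close() assembles the distributed vector once every rank has set its entries:

#include "libmesh/mesh_base.h"
#include "libmesh/node.h"
#include "libmesh/numeric_vector.h"

using libMesh::Real;

// Hypothetical FSI query, declared as in the earlier sketch.
void fsi_nodal_force(Real x, Real y, Real z, Real& f_x, Real& f_y, Real& f_z);

void set_fsi_loads(libMesh::MeshBase& mesh,
                   libMesh::NumericVector<Real>& loads,
                   unsigned int sys_num,
                   unsigned int u_displ,
                   unsigned int v_displ,
                   unsigned int w_displ) {

    // iterate only over the nodes owned by the current rank
    libMesh::MeshBase::node_iterator niter    = mesh.local_nodes_begin();
    libMesh::MeshBase::node_iterator niterend = mesh.local_nodes_end();

    for ( ; niter != niterend; ++niter) {
        const libMesh::Node& n = **niter;

        Real f_x, f_y, f_z;
        fsi_nodal_force(n(0), n(1), n(2), f_x, f_y, f_z);

        loads.set(n.dof_number(sys_num, u_displ, 0), f_x);
        loads.set(n.dof_number(sys_num, v_displ, 0), f_y);
        loads.set(n.dof_number(sys_num, w_displ, 0), f_z);
    }

    loads.close();  // communicate and finalize the parallel vector
}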
Similar to the code here, does the following code correctly set up the discipline for accepting point loads from the FSI module?
Further, how do I pass the actual load values to the solver in the time-stepping loop?