dealii / dealii

The development repository for the deal.II finite element library
https://www.dealii.org

GridTools::collect_periodic_faces with parallel:shared::Triangulation in 1D #14879

Closed: nils-schween closed this issue 1 year ago

nils-schween commented 1 year ago

Hi everyone,

I have an issue with the function GridTools::collect_periodic_faces when I use it with parallel::shared::Triangulation as MeshType and dim = 1. At the linking stage, I get the following error:

my_application.cpp:375: error: undefined reference to 'void dealii::GridTools::collect_periodic_faces<dealii::parallel::shared::Triangulation<1, 1> >(dealii::parallel::shared::Triangulation<1, 1> const&, unsigned int, unsigned int, unsigned int, std::vector<dealii::GridTools::PeriodicFacePair<dealii::parallel::shared::Triangulation<1, 1>::cell_iterator>, std::allocator<dealii::GridTools::PeriodicFacePair<dealii::parallel::shared::Triangulation<1, 1>::cell_iterator> > >&, dealii::Tensor<1, dealii::parallel::shared::Triangulation<1, 1>::space_dimension, double> const&, dealii::FullMatrix<double> const&)'
collect2: error: ld returned 1 exit status

To me it looks like the explicit instantiation of the templated function GridTools::collect_periodic_faces() did not happen. I tried to explicitly instantiate it myself and played around with the preprocessor conditions in the file grid_tools_dof_handlers.inst.in, but I was not able to get it working. Can you help me fix this issue?

Thank you very much.

masterleinad commented 1 year ago

For your use case (parallel::shared::Triangulation and dim = 1) it might be easier to just set the constraint yourself in an AffineConstraints object than to make GridTools::collect_periodic_faces work, since you know the boundary DoFs on all processes and there are only two of them.
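For example, something along these lines (a rough sketch only; left_dof and right_dof are placeholders for the global indices of the two boundary DoFs, which you would have to determine yourself, e.g. by looping over the boundary cells):

#include <deal.II/base/types.h>
#include <deal.II/lac/affine_constraints.h>

using namespace dealii;

// Constrain the value at the left boundary DoF to the value at the right
// boundary DoF, i.e. u(left) = 1.0 * u(right).
void add_periodicity_constraint(const types::global_dof_index left_dof,
                                const types::global_dof_index right_dof,
                                AffineConstraints<double> &constraints)
{
  constraints.add_line(left_dof);
  constraints.add_entry(left_dof, right_dof, 1.0);
}

After adding this (and any other) constraints you would call constraints.close() and use the object during assembly as usual.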

nils-schween commented 1 year ago

I agree and I will look into setting up the constraints myself.

I am still wondering whether it was necessary to adapt the collect_periodic_faces function, or whether it already works for the parallel::shared::Triangulation case with dim = 1 and it is only a matter of instantiating the right template specialization.

Maybe I should say a few words about the background: the application I am writing is not meant to be a 1D application only; I allow the user to freely set the dimension, and it can make sense to look at 1D examples. In fact, I do this a lot to check the basic functionality. Since I would like the application to eventually run on a cluster, I am using the parallel::distributed::Triangulation class, which does not implement the 1D case. To handle the 1D case, I use std::conditional to change the type of the triangulation to parallel::shared::Triangulation if the user chooses a one-dimensional problem (see the sketch below). This allows me to use the same PETSc linear algebra for all dimensions; changing the type of the triangulation is then the only adaptation of the code, i.e. a single line differs and all dimensions are allowed. The user can also decide if periodic boundary conditions should be used. Now, if the user sets dim = 1 and periodic boundary conditions, the problem appears.
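As an illustration, a minimal sketch of the kind of alias I have in mind (the name TriangulationType is just a placeholder used here):

#include <deal.II/distributed/shared_tria.h>
#include <deal.II/distributed/tria.h>

#include <type_traits>

// parallel::distributed::Triangulation does not implement the 1D case, so
// fall back to parallel::shared::Triangulation when dim == 1.
template <int dim>
using TriangulationType =
  std::conditional_t<dim == 1,
                     dealii::parallel::shared::Triangulation<dim>,
                     dealii::parallel::distributed::Triangulation<dim>>;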

Moreover, I am using discontinuous elements, and the only task the AffineConstraints object has is to copy the local contributions of the assembly into the global system matrix. Using it to strongly enforce the boundary conditions would require many changes to my program.

Bottom line: if it was not necessary to adapt collect_periodic_faces and it is only a matter of the right template instantiation, I would prefer to use collect_periodic_faces, even though it is a bit of overkill. This would allow me to keep the changes to a bare minimum while letting the user choose any dimension.

masterleinad commented 1 year ago

I am seeing

template void
collect_periodic_faces<Triangulation<1, 1>>(
  const Triangulation<1, 1> &,
  const types::boundary_id,
  const types::boundary_id,
  const unsigned int,
  std::vector<PeriodicFacePair<Triangulation<1, 1>::cell_iterator>> &,
  const Tensor<1, Triangulation<1, 1>::space_dimension> &,
  const FullMatrix<double> &);

in my instantiations, so it's most likely just a matter of finding the right patch to also instantiate the function for that case. Can you describe in some more detail what exactly you tried?

nils-schween commented 1 year ago

Thank you for your quick replies.

The triangulation in your example instantiation is not one of the parallel Triangulation classes. I would expect all occurrences of Triangulation to be replaced with parallel::shared::Triangulation. I think this is best explained with a MWE. The following code is a test you wrote to check periodic boundary conditions; I adapted it to show where the problem appears.

#include <deal.II/base/mpi.h>

#include <deal.II/distributed/shared_tria.h>

#include <deal.II/dofs/dof_handler.h>

#include <deal.II/fe/fe_dgq.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>

#include <deal.II/meshworker/mesh_loop.h>

#include <mpi.h>
using namespace dealii;

template <int dim>
struct ScratchData
{};

struct CopyData
{};

template <int dim>
void
test()
{
  MPI_Comm mpi_communicator = MPI_COMM_WORLD;
  parallel::shared::Triangulation<dim> triangulation(mpi_communicator);

  const FE_DGQ<dim> fe(1);
  DoFHandler<dim>   dof_handler(triangulation);

  GridGenerator::hyper_cube(triangulation, 0., 1., true);
  std::vector<
    GridTools::PeriodicFacePair<typename Triangulation<dim>::cell_iterator>>
    periodicity_vector;
  GridTools::collect_periodic_faces(triangulation, 0, 1, 0, periodicity_vector);
  triangulation.add_periodicity(periodicity_vector);

  triangulation.refine_global(4);
  dof_handler.distribute_dofs(fe);
  using Iterator = typename DoFHandler<dim>::active_cell_iterator;

  const auto cell_worker = [&](const Iterator & /*cell*/,
                               ScratchData<dim> & /*scratch_data*/,
                               CopyData & /*copy_data*/) {};

  const auto boundary_worker = [&](const Iterator & /*cell*/,
                                   const unsigned int & /*face_no*/,
                                   ScratchData<dim> & /*scratch_data*/,
                                   CopyData & /*copy_data*/) {};

  const auto face_worker = [&](const Iterator & /*cell*/,
                               const unsigned int & /*f*/,
                               const unsigned int & /*sf*/,
                               const Iterator & /*ncell*/,
                               const unsigned int & /*nf*/,
                               const unsigned int & /*nsf*/,
                               ScratchData<dim> & /*scratch_data*/,
                               CopyData & /*copy_data*/) {};

  const auto copier = [&](const CopyData & /*c*/) {};

  ScratchData<dim> scratch_data;
  CopyData         copy_data;

  MeshWorker::mesh_loop(dof_handler.begin_active(),
                        dof_handler.end(),
                        cell_worker,
                        copier,
                        scratch_data,
                        copy_data,
                        MeshWorker::assemble_own_cells |
                          MeshWorker::assemble_boundary_faces |
                          MeshWorker::assemble_own_interior_faces_once,
                        boundary_worker,
                        face_worker);
}

int
main(int argc, char *argv[])
{
  // parallel::shared::Triangulation needs an initialized MPI environment.
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  test<1>();
  test<2>();
  test<3>();
  return 0;
}

If you comment out test<1>(), the linking error disappears, but dim = 1 is exactly the case I would like to have.

masterleinad commented 1 year ago

It should be as easy as

diff --git a/source/grid/grid_tools_dof_handlers.inst.in b/source/grid/grid_tools_dof_handlers.inst.in
index d26891561e..1febd797c0 100644
--- a/source/grid/grid_tools_dof_handlers.inst.in
+++ b/source/grid/grid_tools_dof_handlers.inst.in
@@ -370,8 +370,6 @@ for (deal_II_dimension : DIMENSIONS; deal_II_space_dimension : SPACE_DIMENSIONS)
 for (deal_II_dimension : DIMENSIONS; deal_II_space_dimension : SPACE_DIMENSIONS)
   {
 #if deal_II_dimension <= deal_II_space_dimension
-#  if deal_II_dimension >= 2
-
     namespace GridTools
     \{
       template void
@@ -409,6 +407,5 @@ for (deal_II_dimension : DIMENSIONS; deal_II_space_dimension : SPACE_DIMENSIONS)
                        deal_II_space_dimension>::space_dimension> &,
         const FullMatrix<double> &);
     \}
-#  endif
 #endif
   }

to fix it but of course we should also have a test case.

nils-schween commented 1 year ago

Thank you! I tested it, and it works in my case. I find it hard to come up with a simple program to test it. Do you think this change will be merged into the main branch?

For your use case (parallel::shared::Triangulation and dim = 1) it might be easier to just set the constraint yourself in an AffineConstraints object than to make GridTools::collect_periodic_faces work, since you know the boundary DoFs on all processes and there are only two of them.

And I thought once more about your first post. Instead of setting the constraints myself, it makes more sense to fill the periodicity_vector myself (because I am using discontinuous elements). In 1D, there is only one periodic face pair. I tested it with the following code, and it works:

// In 1D there is only one periodic face pair: the left face (face 0) of the
// first cell and the right face (face 1) of the last cell.
periodicity_vector.resize(1);
periodicity_vector[0].cell[0] = triangulation.begin();
periodicity_vector[0].cell[1] = triangulation.last();
periodicity_vector[0].face_idx[0] = 0;
periodicity_vector[0].face_idx[1] = 1;
// Default orientation: the two faces match without flip or rotation.
std::bitset<3> temp_bitset;
temp_bitset[0] = 1;
periodicity_vector[0].orientation = temp_bitset;

Depending on whether the change you proposed is merged, I will use one of the two solutions. Once more, thank you very much.

masterleinad commented 1 year ago

Would you be willing to contribute a patch? A test case could be as easy as creating a parallel::shared::Triangulation object using GridGenerator::hyper_cube, refining it a couple of times, calling GridTools::collect_periodic_faces for the boundaries in the x-direction, and checking that the output is sensible (say, the correct number of face pairs).

nils-schween commented 1 year ago

Okay, I created a patch. But I do not know how to write the test so that it interacts with your CI. Moreover, I had to adapt the test you suggested: I could not work with GridGenerator::hyper_cube, because GridTools::collect_periodic_faces always works on the coarsest grid, and thus there would always be only one face pair. I used GridGenerator::subdivided_hyper_rectangle instead; this function produces more than one cell on level 0.
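For reference, a minimal sketch of what such a test program could look like, assuming the instantiation patch above is applied (otherwise the dim = 1 case does not link) and using colorize = true so that boundary ids 0 and 1 mark the left and right end in 1D:

#include <deal.II/base/mpi.h>
#include <deal.II/base/point.h>

#include <deal.II/distributed/shared_tria.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>

#include <iostream>
#include <vector>

using namespace dealii;

int
main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  constexpr int dim = 1;
  parallel::shared::Triangulation<dim> triangulation(MPI_COMM_WORLD);

  // Several coarse cells so that collect_periodic_faces() actually has to
  // search for the matching coarse faces.
  const std::vector<unsigned int> repetitions = {4};
  GridGenerator::subdivided_hyper_rectangle(
    triangulation, repetitions, Point<dim>(0.), Point<dim>(1.), true);

  std::vector<GridTools::PeriodicFacePair<
    typename parallel::shared::Triangulation<dim>::cell_iterator>>
    periodicity_vector;
  GridTools::collect_periodic_faces(
    triangulation, 0, 1, 0, periodicity_vector);

  // In 1D we expect exactly one matched face pair.
  std::cout << "Number of periodic face pairs: " << periodicity_vector.size()
            << std::endl;

  return 0;
}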