jerett-cc / ryujin

High-performance high-order finite element solver for hyperbolic conservation equations
https://conservation-laws.org

Interpolation between meshes in parallel requires lucky partitioning. #14

Closed. bangerth closed this issue 1 month ago.

bangerth commented 8 months ago

The following error occurs if the parallel partitionings of the coarse and fine meshes do not overlap geographically in exactly the same way:

--------------------------------------------------------
An error occurred in line <952> of file </raid/bangerth/p/deal.II/1/install/include/deal.II/numerics/vector_tools_interpolate.templates.h> in function
    void dealii::VectorTools::interpolate_to_different_mesh(const dealii::InterGridMap<dealii::DoFHandler<dim, spacedim> >&, const VectorType&, const dealii::AffineConstraints<typename OutVector::value_type>&, VectorType&) [with int dim = 2; int spacedim = 2; VectorType = dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>; typename OutVector::value_type = double]
The violated condition was: 
    internal::is_locally_owned(cell1) == internal::is_locally_owned(cell2)
Additional information: 
    The two Triangulations are required to have the same parallel
    partitioning.

Stacktrace:
-----------
#0  ./high-order-euler: void dealii::VectorTools::interpolate_to_different_mesh<2, 2, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> >(dealii::InterGridMap<dealii::DoFHandler<2, 2> > const&, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&, dealii::AffineConstraints<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>::value_type> const&, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>&)
#1  ./high-order-euler: void dealii::VectorTools::interpolate_to_different_mesh<2, 2, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> >(dealii::DoFHandler<2, 2> const&, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> const&, dealii::DoFHandler<2, 2> const&, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>&)
#2  ./high-order-euler: interpolate_between_levels(_braid_Vector_struct&, unsigned int, _braid_Vector_struct const&, unsigned int, _braid_App_struct* const&)
#3  ./high-order-euler: my_Init(_braid_App_struct*, double, _braid_Vector_struct**)
#4  ./high-order-euler: _braid_BaseInit
#5  ./high-order-euler: _braid_InitGuess
#6  ./high-order-euler: braid_Drive
#7  ./high-order-euler: main
--------------------------------------------------------

Calling MPI_Abort now.
To break execution in a GDB session, execute 'break MPI_Abort' before running. You can also put the following into your ~/.gdbinit:
  set breakpoint pending on
  break MPI_Abort
  set breakpoint pending auto

This happens when calling

mpirun -np 5 ./high-order-euler test.prm 

with the following input file:

subsection MPI Parameters
  set px = 5
  set Time Bricks = 4
  set Start Time = 0.0
  set Stop Time = 5.0
  set cfactor = 2 ## 2 is default
  set max_iter = 2
end
subsection MGRIT
  set mgrit refinements = 1, 3, 5
  set print_solution = false
end

subsection OfflineData
end
subsection TimeLoop
  set basename                      = cylinder
  set enable checkpointing          = false
  set enable compute error          = false
  set enable compute quantities     = false
  set enable output full            = true
  set enable output levelsets       = false
  set error normalize               = false
  set error quantities              = rho, m_1, m_2, E
  set output checkpoint multiplier  = 1
  set output full multiplier        = 1
  set output granularity            = 1
  set output levelsets multiplier   = 1
  set output quantities multiplier  = 1
  set refinement timepoints         = 
  set resume                        = false
  set terminal show rank throughput = false
  set terminal update interval      = 5
end
subsection Equation
  set gamma                   = 1.4
  set reference density       = 1
  set vacuum state relaxation = 10000
end
subsection Discretization
  set geometry            = cylinder
  set mesh distortion     = 0
  set mesh repartitioning = false
end
subsection InitialValues
  set configuration = uniform
  set direction     = 1, 0
  set perturbation  = 0
  set position      = 1, 0
  subsection astro jet
    set jet width               = 0.05
    set primitive ambient right = 5, 0, 0.4127
    set primitive jet state     = 5, 30, 0.4127
  end
  subsection uniform
    set primitive state = 1.4, 3, 1
  end
end
subsection HyperbolicModule
  set cfl with boundary dofs        = false
  set limiter iterations            = 2
  set limiter newton max iterations = 2
  set limiter newton tolerance      = 1e-10
  set limiter relaxation factor     = 1
end
subsection TimeIntegrator
  set cfl max               = 0.15  #0.9
  set cfl min               = 0.15  #0.45
  set cfl recovery strategy = bang bang control
  set time stepping scheme  = erk 33
end
subsection VTUOutput
  set manifolds                  = 
  set schlieren beta             = 10
  set schlieren quantities       = rho
  set schlieren recompute bounds = true
  set use mpi io                 = true
  set vorticity quantities       = 
  set vtu output quantities      = rho, m_1, m_2, E
end
subsection Quantities
  set boundary manifolds           = 
  set clear statistics on writeout = true
  set interior manifolds           = 
end
jerett-cc commented 8 months ago

I have begun collecting cases that work and cases that don't. The following tuples follow the order (Global_Comm_Size, px, num_bricks, hierarchy...). To run one of these cases, one would type mpirun -n Global_Comm_Size high-order-euler PRM.prm px hierarchy and modify the parameter file by setting Time Bricks = num_bricks (an expanded example command follows the lists below).

Cases that work:

- (48,12,4, 1 3 5)

Cases that do not work:

- (64,16,4,1 3 5)
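
To make the recipe above concrete, the working case (48, 12, 4, 1 3 5) would expand, assuming the hierarchy is passed as space-separated refinement levels exactly as written in the tuple, to

mpirun -n 48 high-order-euler PRM.prm 12 1 3 5

with the parameter file modified to contain set Time Bricks = 4.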
jerett-cc commented 1 month ago

This has been fixed by using the two-level multigrid interpolation functions in deal.II and by creating temporary meshes for levels that do not exist. The problem was that we cannot use the deal.II functionality if there is more than one level of refinement between the two meshes, so we need to perform a recursive interpolation across temporary intermediate meshes.
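
For illustration only, here is a minimal sketch of the recursive-interpolation idea described above. It is written against the VectorTools::interpolate_to_different_mesh overload visible in the stack trace rather than the two-level multigrid transfer functions actually used for the fix; the helper name, its signature, and the placeholder geometry are all assumptions, and a real implementation would have to build every intermediate mesh from the same coarse cylinder grid with a compatible parallel partitioning.

#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/numerics/vector_tools.h>

using namespace dealii;
using VectorType = LinearAlgebra::distributed::Vector<double>;

// Interpolate 'src' (living on 'dof_from', a mesh with 'level_from' global
// refinements) onto 'dst' (living on 'dof_to', a mesh with 'level_to'
// refinements). A temporary mesh and DoFHandler are created for every
// intermediate level so that interpolate_to_different_mesh() only ever has
// to bridge a single refinement level. The source vector is assumed to have
// up-to-date ghost values.
void interpolate_recursively(const DoFHandler<2> &dof_from,
                             const VectorType    &src,
                             const unsigned int   level_from,
                             const DoFHandler<2> &dof_to,
                             VectorType          &dst,
                             const unsigned int   level_to,
                             const MPI_Comm       comm)
{
  if (level_to <= level_from + 1)
    {
      // Adjacent (or identical) levels: hand off to deal.II directly.
      VectorTools::interpolate_to_different_mesh(dof_from, src, dof_to, dst);
      return;
    }

  // Build a temporary triangulation one level finer than 'dof_from'.
  // (Placeholder geometry: the real code would reproduce the cylinder
  // coarse mesh so that the parallel partitionings stay compatible.)
  parallel::distributed::Triangulation<2> tmp_tria(comm);
  GridGenerator::hyper_cube(tmp_tria);
  tmp_tria.refine_global(level_from + 1);

  DoFHandler<2> tmp_dof(tmp_tria);
  tmp_dof.distribute_dofs(dof_from.get_fe());

  // Interpolate one level up; in a full implementation 'tmp' would also
  // need ghost entries and an update_ghost_values() call before serving
  // as the source of the next step.
  VectorType tmp(tmp_dof.locally_owned_dofs(), comm);
  VectorTools::interpolate_to_different_mesh(dof_from, src, tmp_dof, tmp);

  // Recurse from the intermediate level toward the target level.
  interpolate_recursively(tmp_dof, tmp, level_from + 1,
                          dof_to, dst, level_to, comm);
}

The point of the recursion is simply that each individual interpolation call only ever has to bridge a single refinement level, which is the situation the two-level interpolation machinery handles.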