KratosMultiphysics / Kratos

Kratos Multiphysics (A.K.A Kratos) is a framework for building parallel multi-disciplinary simulation software. Modularity, extensibility and HPC are the main objectives. Kratos has BSD license and is written in C++ with extensive Python interface.
https://kratosmultiphysics.github.io/Kratos/

[TwoFluids][MPI] Linear solver settings, not consistent block size #8319

Closed RiedlAndreas closed 3 years ago

RiedlAndreas commented 3 years ago

Description

Running an MPI TwoFluids simulation, I get an error in navier_stokes_two_fluids_solver -> InitializeSolutionStep -> _GetLevelSetConvectionProcess:

    RuntimeError: Error: Block size is not consistent. Local: 1 Max: 3
    RuntimeError: Error: The block size employed 3 is not an exact multiple of the matrix size 8209

I use the following LinearSolverSettings:

"linear_solver_settings": {
            "solver_type" : "amgcl",
            "smoother_type" :"damped_jacobi",
            "krylov_type" : "lgmres",
            "coarsening_type" : "aggregation",
            "max_iteration" : 500,
            "provide_coordinates" : false,
            "gmres_krylov_space_dimension" : 500,
            "verbosity" : 1,
            "tolerance" : 1e-8,
            "scaling" : false,
            "block_size" : 3,
            "use_block_matrices_if_possible" : true,
            "coarse_enough" : 500
        }

Deleting the linear_solver_settings block and using the default settings does not help.

Starting the simulation with only 1 core works, but not with multiple cores.

Can anyone give me some hints for the SolverSettings? @rubenzorrilla @mrhashemi

mpentek commented 3 years ago

Perhaps @sunethwarna ?

rubenzorrilla commented 3 years ago

Is the same simulation working fine in serial?

RiedlAndreas commented 3 years ago

Yes, without any problems. I also tried the TwoFluidExample with MPI and it is not working. Other FluidDynamics MPI tests are working.

rubenzorrilla commented 3 years ago

Can you try with a different MPI partitioner?

mpentek commented 3 years ago

> Can you try with a different MPI partitioner?

What do you mean by that? Which setting to touch?

rubenzorrilla commented 3 years ago

This "partition_in_memory" : true in the model_import_settings.

RiedlAndreas commented 3 years ago

I set partition_in_memory to true. Unfortunately, the same error appears.

rubenzorrilla commented 3 years ago

I was afraid it was something with the partitioner, but it seems not to be related to this... @RiccardoRossi any hint?

mpentek commented 3 years ago

In case there is a working MPI example for TwoFluids, it would be nice to know. @RiedlAndreas could also share his case (or a minimal version) to rule out any model issues or misuse of parameters.

RiedlAndreas commented 3 years ago

A minimal example to try out is the two_fluid_wave_example run with MPI: two_fluid_wave_mpi.zip

swenczowski commented 3 years ago

Since an e-mail reached me regarding this issue, I would also like to contribute a guess:

Can you try using a different "solver_type" (or have you done so already)? I just saw that "amgcl" is also set as the default solver.

Best regards to all, Simon

RiedlAndreas commented 3 years ago

That's a good guess and unfortunately a solution I hadn't tried earlier. I was pretty sure that amgcl should be the most robust solver for this.

In fact, it works with other solvers like multi_level or bicgstab.

Any hints on which solver to choose regarding computing speed?

RiccardoRossi commented 3 years ago

Block size should be 1 if you use a convection solver.

rubenzorrilla commented 3 years ago

> That's a good guess and unfortunately a solution I hadn't tried earlier. I was pretty sure that amgcl should be the most robust solver for this.
>
> In fact, it works with other solvers like multi_level or bicgstab.
>
> Any hints on which solver to choose regarding computing speed?

AMGCL is the reference solver for CFD problems. If this is not working, you can use the multilevel_solver (the "old" multigrid we used to use when there was no MPI AMGCL).

What @RiccardoRossi is pointing to might be the cause of the problem. The same linear solver used for the NS equations is also used for the resolution of the levelset convection problem, which features a block size of 1, as he said.
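For reference, a rough sketch of what switching to the multilevel solver could look like in the linear_solver_settings; only the solver_type is essential here, the other entries are common defaults and may need tuning for your case:

    "linear_solver_settings": {
        "solver_type"   : "multi_level",
        "tolerance"     : 1e-8,
        "max_iteration" : 500
    }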

RiedlAndreas commented 3 years ago

Thank you all for your input.

Unfortunately, the following settings still bring up the error message:

    RuntimeError: Error: Block size is not consistent. Local: 1 Max: 3

"solver_type" : "amgcl",
 "block_size" : 1,
 "use_block_matrices_if_possible" : true,

It's ok for me, as I'm using the multi_level solver now.

rubenzorrilla commented 3 years ago

Couldn't it be that we are using the same linear solver for the NS problem and for the level set convection, whose block sizes are different?

@RiedlAndreas do you know at which point the simulation crashes?

rubenzorrilla commented 3 years ago

Solved in #8487