Closed RiedlAndreas closed 3 years ago
Perhaps @sunethwarna?
Is the same simulation working fine in serial?
Yes, without any problems. I also tried the TwoFluidExample with MPI and it is not working. Other FluidDynamics MPI tests are working.
Can you try with a different MPI partitioner?
What do you mean by that? Which setting to touch?
This "partition_in_memory" : true
in the model_import_settings
.
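For reference, a minimal sketch of where this flag goes (the input filename is a placeholder, not taken from this case):

```json
"model_import_settings" : {
    "input_type"          : "mdpa",
    "input_filename"      : "two_fluid_example",
    "partition_in_memory" : true
}
```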
Set partition_in_memory to true -> unfortunately, the same error appears.
I was afraid it was something with the partitioner, but it seems not to be related to this... @RiccardoRossi any hint?
In case there is a working MPI example for TwoFluids, it would be nice to know. @RiedlAndreas could also share his case (or a minimal version) to rule out any model issues or misuse of parameters.
A minimal example to try out is the two_fluid_wave_example run as MPI: two_fluid_wave_mpi.zip
Since an e-mail regarding this issue has reached me, I would also like to contribute a guess:
Can you try and use a different "solver_type" (or have you done so already)? I just saw that "amgcl" is also set as the default solver.
Best regards to all, Simon
That's a good guess and unfortunately a solution I didn't try earlier. I was pretty sure that amgcl would be the most robust solver for this.
In fact, it is working with other solvers like multi_level or bicgstab.
Any hints on which solver to choose regarding computing speed?
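For reference, the change that made it run was along these lines (a minimal sketch; the tolerance and max_iteration values are assumptions, not the exact ones used):

```json
"linear_solver_settings" : {
    "solver_type"   : "multi_level",
    "tolerance"     : 1e-8,
    "max_iteration" : 500
}
```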
The block size should be 1 if you use a convection solver.
AMGCL is the reference solver for CFD problems. If this is not working, you can use the multilevel_solver (the "old" multigrid we used before MPI AMGCL was available).
What @RiccardoRossi is pointing to might be the cause of the problem. The same linear solver used for the NS equations is used for the resolution of the levelset convection problem, which features a block size of 1, as he said.
Thank you all for your input.
Unfortunately, the following settings still bring up the error message RuntimeError: Error: Block size is not consistent. Local: 1 Max: 3:

```json
"solver_type" : "amgcl",
"block_size" : 1,
"use_block_matrices_if_possible" : true
```

It's OK for me, as I'm using the multi_level solver now.
Couldn't it be that we are using the same linear solver for the NS problem and for the level set convection, whose block sizes are different?
@RiedlAndreas do you know at which point the simulation crashes?
Solved in #8487
Description
Running an MPI TwoFluids simulation, I get an error message in navier_stokes_two_fluids_solver -> InitializeSolutionStep -> _GetLevelSetConvectionProcess:

```
RuntimeError: Error: Block size is not consistent. Local: 1 Max: 3
RuntimeError: Error: The block size employed 3 is not an exact multiple of the matrix size 8209
```
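As a side note on what the second message checks (assuming, as discussed above, that the block size of 3 comes from the coupled Navier-Stokes unknowns, while the level set convection carries a single scalar per node): a matrix of size 8209 simply cannot be grouped into blocks of 3, since

8209 = 3 × 2736 + 1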
I use the following linear_solver_settings:

```json
"linear_solver_settings" : {
    "solver_type"                    : "amgcl",
    "smoother_type"                  : "damped_jacobi",
    "krylov_type"                    : "lgmres",
    "coarsening_type"                : "aggregation",
    "max_iteration"                  : 500,
    "provide_coordinates"            : false,
    "gmres_krylov_space_dimension"   : 500,
    "verbosity"                      : 1,
    "tolerance"                      : 1e-8,
    "scaling"                        : false,
    "block_size"                     : 3,
    "use_block_matrices_if_possible" : true,
    "coarse_enough"                  : 500
}
```
Deleting the linear_solver_settings block and using the default settings does not help.
Starting the simulation with only 1 core works, but not with multiple cores.
Can anyone give me some hints for SolverSettings? @rubenzorrilla @mrhashemi