Closed: ammarkh95 closed this issue 3 years ago
Tagging @rubenzorrilla and @frawahl as they will know more about the embedded CFD and the appropriate settings. In general it would be great if you could share some insight and optimal or better settings for the embedded solver with MPI.
@philbucher
You may also try the amgcl_ns_solver with default settings.
Riccardo
On Mon, Feb 8, 2021 at 8:53 AM Ammar Khallouf notifications@github.com wrote:
Description
When running an embedded fluid dynamics simulation with MPI parallelism type and default settings for AMGCL, the following runtime error is returned: Error: Zero pivot in ILU
Full log:
```
Traceback (most recent call last):
  File "MainKratos.py", line 32, in <module>
    simulation.Run()
  File "/home/ammar/software/Kratos/install/KratosMultiphysics/analysis_stage.py", line 50, in Run
    self.RunSolutionLoop()
  File "/home/ammar/software/Kratos/install/KratosMultiphysics/analysis_stage.py", line 67, in RunSolutionLoop
    is_converged = self._GetSolver().SolveSolutionStep()
  File "/home/ammar/software/Kratos/install/KratosMultiphysics/FluidDynamicsApplication/navier_stokes_embedded_solver.py", line 417, in SolveSolutionStep
    is_converged = super(NavierStokesEmbeddedMonolithicSolver,self).SolveSolutionStep()
  File "/home/ammar/software/Kratos/install/KratosMultiphysics/FluidDynamicsApplication/fluid_solver.py", line 127, in SolveSolutionStep
    is_converged = self._GetSolutionStrategy().SolveSolutionStep()
RuntimeError: Error: Zero pivot in ILU
in /home/ammar/software/Kratos/applications/TrilinosApplication/custom_strategies/builder_and_solvers/trilinos_block_builder_and_solver.h:354:
void TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::SystemSolveWithPhysics(TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::TSystemMatrixType&, TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::TSystemVectorType&, TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::TSystemVectorType&, ModelPart&)
[with TSparseSpace = TrilinosSpace<Epetra_FECrsMatrix, Epetra_FEVector>; TDenseSpace = UblasSpace<double, boost::numeric::ublas::matrix, boost::numeric::ublas::vector >; TLinearSolver = LinearSolver<TrilinosSpace<Epetra_FECrsMatrix, Epetra_FEVector>, UblasSpace<double, boost::numeric::ublas::matrix, boost::numeric::ublas::vector > >; TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::TSystemMatrixType = Epetra_FECrsMatrix; TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::TSystemVectorType = Epetra_FEVector]
/home/ammar/software/Kratos/applications/TrilinosApplication/custom_strategies/builder_and_solvers/trilinos_block_builder_and_solver.h:403:
void TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::BuildAndSolve(typename TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::BaseType::TSchemeType::Pointer, ModelPart&, TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::TSystemMatrixType&, TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::TSystemVectorType&, TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::TSystemVectorType&)
[with TSparseSpace = TrilinosSpace<Epetra_FECrsMatrix, Epetra_FEVector>; TDenseSpace = UblasSpace<double, boost::numeric::ublas::matrix, boost::numeric::ublas::vector >; TLinearSolver = LinearSolver<TrilinosSpace<Epetra_FECrsMatrix, Epetra_FEVector>, UblasSpace<double, boost::numeric::ublas::matrix, boost::numeric::ublas::vector > >; typename TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::BaseType::TSchemeType::Pointer = shared_ptr<Scheme<TrilinosSpace<Epetra_FECrsMatrix, Epetra_FEVector>, UblasSpace<double, boost::numeric::ublas::matrix, boost::numeric::ublas::vector > > >; TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::TSystemMatrixType = Epetra_FECrsMatrix; TrilinosBlockBuilderAndSolver<TSparseSpace, TDenseSpace, TLinearSolver>::TSystemVectorType = Epetra_FEVector]
```
Initially, I thought the problem at hand might be ill-posed, but when running a similar problem with different solvers or with OpenMP, the simulation proceeds normally and the results look fine.
Workaround
To bypass this issue, a smoother type other than the default (ILU) needs to be selected (e.g. damped Jacobi).
Working AMGCL settings in ProjectParameters.json:
```json
"linear_solver_settings" : {
    "preconditioner_type"          : "amg",
    "solver_type"                  : "amgcl",
    "smoother_type"                : "damped_jacobi",
    "krylov_type"                  : "gmres",
    "coarsening_type"              : "aggregation",
    "max_iteration"                : 100,
    "provide_coordinates"          : false,
    "gmres_krylov_space_dimension" : 100,
    "verbosity"                    : 1,
    "tolerance"                    : 1e-6
},
```
Scope
- FluidDynamicsApplication (embedded_navier_stokes formulation)
- TrilinosApplication (builder and solver)
- AMGCL
To Reproduce
Run the 3D embedded cylinder example problem shipped with GiD using the MPI parallelism type and the default AMGCL solver settings (i.e. ILU0 as the smoother type).
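For reference, the relevant ProjectParameters.json fragments look roughly like the sketch below. This is a minimal, hypothetical excerpt assuming a typical GiD-generated layout; the remaining problem_data and solver_settings keys (model part, formulation, time stepping, etc.) are omitted, and the exact values in the shipped example may differ:
```json
{
    "problem_data" : {
        "parallel_type" : "MPI"
    },
    "solver_settings" : {
        "linear_solver_settings" : {
            "solver_type"   : "amgcl",
            "smoother_type" : "ilu0",
            "krylov_type"   : "gmres",
            "max_iteration" : 100,
            "tolerance"     : 1e-6
        }
    }
}
```
The case is then typically launched with MPI, e.g. mpirun -np 4 python3 MainKratos.py.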
Expected behavior
The simulation should proceed with the default solver settings.
Environment
- OS: Ubuntu 18.04
- Branch: master
- Python: 3.7
@RiccardoRossi Thanks for your reply.
It seems that amgcl_ns is not available for the embedded solver:
```
RuntimeError: Error: Trying to construct a Linear solver with solver_type:
"amgcl_ns" which does not exist.
The list of available options (for currently loaded applications) is:
Kratos components
    amesos
    amgcl
    amgcl_schur_complement
    aztec
    bicgstab
    cg
    gmres
    klu
    multi_level
    mumps
    super_lu_dist
```
Ammar
The AMGCL Navier-Stokes solver is available for all problems. You need to specify "solver_type" : "amgcl_ns_solver". I'll try to reproduce the example to figure out what the problem is.
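In other words, the only change needed in the linear_solver_settings block would be along these lines (the remaining AMGCL options can simply be omitted so that the solver's defaults apply, as suggested above; any extra keys shown elsewhere in this thread are not required):
```json
"linear_solver_settings" : {
    "solver_type" : "amgcl_ns_solver"
}
```
Note that, judging from the solver list in the error output above, this solver appears to be registered only for the serial/OpenMP linear solver factory at the moment; as mentioned below, the MPI-parallel interface is still pending.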
@RiccardoRossi is this a specific preconfigured AMGCL solver for NS that you would recommend to use generally for CFD problems?
It is a classical block decomposition solver for Navier-Stokes. It should work fairly well for generic CFD problems and, with some luck, be around twice as fast as the current default.
I still have it on my todo list to interface the MPI-parallel version of it.
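For context, this is the generic idea behind block preconditioning for incompressible Navier-Stokes (not a description of the specific AMGCL implementation): the discretized monolithic system has the saddle-point block structure
```latex
\begin{equation*}
  \begin{pmatrix} K & G \\ D & S \end{pmatrix}
  \begin{pmatrix} \mathbf{u} \\ \mathbf{p} \end{pmatrix}
  =
  \begin{pmatrix} \mathbf{f}_u \\ \mathbf{f}_p \end{pmatrix}
\end{equation*}
```
where K is the velocity (momentum) block, G and D are the pressure-gradient and divergence coupling blocks, and S is the pressure (stabilization) block. A block decomposition solver builds its preconditioner from these sub-blocks, typically approximating the pressure Schur complement, instead of applying a single AMG hierarchy to the mixed velocity-pressure matrix, which is why such solvers can be noticeably faster than a generic monolithic preconditioner for CFD.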
That sounds great! Once you have an MPI version, I could test it on some fairly large CFD models on HPC to measure performance.
@ammarkh95 could you please test if this is still applicable?
@KratosMultiphysics/fluid-dynamics are there any PRs that might have remedied this meanwhile?
It seems the issue arises from a certain choice of parameters, so the question is most probably resolved. It will be reopened if the current suggestions do not help.