hypre-space / hypre

Parallel solvers for sparse linear systems featuring multigrid methods.
https://www.llnl.gov/casc/hypre/

Help for the HYPRE solver implementation #198

Open apkumbhar opened 4 years ago

apkumbhar commented 4 years ago

Hi HYPRE team, warm greetings!

I am Anil Kumbhar from MSC software. We are currently working on a POC to check the efficiency of the HYPRE solver for solving a system of equations generated by one of our solvers, MSC.Marc.

We have successfully run the example ex5f.F under the src/examples directory. We then modified the same example to test the HYPRE solver on an actual matrix generated by MSC.Marc. The example uses the IJ input matrix format. We tried the following options available in the test examples:

1) PCG solver with ParaSails preconditioner (solver_id=8)
2) PCG solver with DS preconditioner (solver_id=50)
3) PCG solver with AMG preconditioner (solver_id=1)
4) AMG solver (solver_id=0)
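For reference, runs like the four configurations above can be launched roughly as follows. This is a sketch: the `-solver` IDs are taken from the C example ex5.c (they match the solver_id values listed above), while ex5f.F itself may require editing the solver choice in the source, so treat the exact flags and paths as assumptions.

```shell
cd src/examples
make ex5 ex5f

# Fortran example (ex5f.F); the solver choice is set inside the source
mpirun -np 4 ./ex5f

# The C counterpart ex5 selects the solver on the command line:
mpirun -np 4 ./ex5 -solver 8    # PCG + ParaSails
mpirun -np 4 ./ex5 -solver 50   # PCG + diagonal scaling (DS)
mpirun -np 4 ./ex5 -solver 1    # PCG + AMG
mpirun -np 4 ./ex5 -solver 0    # AMG as a standalone solver
```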

The results are as follows:

| Solver | Preconditioner | Time (sec) | Iterations |
|---|---|---|---|
| HYPRE: PCG | ParaSails | 198 | 7515 |
| HYPRE: PCG | Diagonal scaling (DS) | 621 | 36813 |
| HYPRE: PCG | AMG | 2143 | 6835 |

Based on this study, we found that the PCG solver with the ParaSails preconditioner performs best among these options. We also found that memory consumption is quite high for the AMG preconditioner.

We are looking for the following information:

1) Which input matrix format is most efficient among the available options (Struct, SStruct, or IJ)?
2) Does the AMG solver/preconditioner consume more memory than other preconditioners?
3) Could you share any benchmarking/comparison data for these solver options?
4) Is there an I8 version of HYPRE using I8 MPI?

We are keen on evaluating the HYPRE solver for possible use with MSC.Marc. Your input will be very useful in taking this forward. Thanks in advance.

Thank you, Anil Kumbhar

rfalgout commented 4 years ago

Hi Anil,

Some answers to your questions:

  1. If you can use Struct, that is usually the best option, but it requires having a structured grid and stencil-based matrix. Actually, if you have structure in your problem, use SStruct because it provides more solver options.
  2. AMG (and any of the multigrid solvers) uses more memory than most other solvers; however, convergence should be much faster than what you are seeing. Iteration counts of fewer than 30 are what we usually want to see. Can you tell us more about your specific problem, e.g., what's the underlying PDE, what's the grid, discretization, etc.? AMG may not be well suited for your problem, or there may be better parameter choices that you can make. We also have ILU solvers that may work better for you than ParaSails.
  3. We do have a number of papers with performance data on our publications page.
  4. Someone else will have to answer your I8 question.

Hope this helps! Thanks!

-Rob

victorapm commented 4 years ago

Hi @apkumbhar,

I just wanted to complement @rfalgout's reply.

By an I8 version, do you mean 64-bit integer support? If so, the answer is yes. Hypre can be compiled with 64-bit integer support in two different ways. The first is the configure flag `--enable-mixedint`, which sets `HYPRE_BigInt` to `long long int`. The second is the configure flag `--enable-bigint`, which sets both `HYPRE_BigInt` and `HYPRE_Int` to `long long int`. The first option uses less memory than the second; however, not all solvers in Hypre work with it (ParaSails doesn't). The second option is more general and imposes no restrictions on the solvers. For a full list of the configure flags supported by Hypre, run `./configure --help` from the `src` folder.
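As a quick reference, the two build configurations described above look like this (assuming an in-source build started from the `src` directory; other configure options, compilers, and install prefixes are left at their defaults):

```shell
cd src

# Option 1: HYPRE_BigInt = long long int, HYPRE_Int stays int.
# Uses less memory, but not every solver supports it (e.g. ParaSails does not).
./configure --enable-mixedint

# Option 2: both HYPRE_BigInt and HYPRE_Int = long long int.
# More general; no restrictions on the choice of solver.
./configure --enable-bigint

# List all supported configure flags:
./configure --help
```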

Best, Victor