Open teseoch opened 4 years ago
Hi @teseoch,
are you passing the saddle-point matrix directly to BoomerAMG? If so, this won't work, because BoomerAMG is not designed for this kind of problem. However, you could still use BoomerAMG within a Schur-complement-based framework, for example to precondition an SPD block.
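Concretely, the Schur-complement framework rests on the standard block factorization of the saddle-point matrix (a sketch only; A is the 1,1 block, assumed nonsingular):

```latex
K =
\begin{pmatrix} A & B \\ B^{\mathsf T} & 0 \end{pmatrix}
=
\begin{pmatrix} I & 0 \\ B^{\mathsf T} A^{-1} & I \end{pmatrix}
\begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}
\begin{pmatrix} I & A^{-1} B \\ 0 & I \end{pmatrix},
\qquad S = -B^{\mathsf T} A^{-1} B .
```

A block preconditioner built from this only needs approximate solves with A and with (an approximation of) -S; for Stokes, A is SPD and -S is SPD when B has full rank, so BoomerAMG can be applied to each block separately.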
Best, Victor
@victorapm yes, my problem is a saddle-point.
Do you have an example to setup Hypre to work in these settings?
thanks
Hi @teseoch, maybe you can give MGR in hypre a try. MGR isn't designed specifically for saddle-point problems either, but it can deal with block matrices and gives you a starting point.
How is your matrix ordered, i.e. are the unknowns interleaved (v1, p1, v2, p2, ..., vN, pN) or contiguous (v1, v2, ..., vN, p1, p2, ..., pN)? Also, is it possible to construct a simple preconditioner by adding a small perturbation to the diagonal for the 2,2 block?
I can help you setup MGR.
Best, Quan
My matrix is

A    B
B^T  0

so the ordering is contiguous. I would prefer to avoid adding a small perturbation to the 2,2 block.
thanks, Teseo
Hi @teseoch,
You can try something like the following. It may work, but I won't be surprised if it doesn't. In the latter case, like @victorapm said, constructing a block preconditioner that uses BoomerAMG for appropriate blocks would be better.
// 2x2 block system, assuming velocity index = 0, pressure index = 1
HYPRE_Int mgr_bsize = 2;
// Single-level reduction
HYPRE_Int mgr_nlevels = 1;
// Array to an array of indices as C-points
HYPRE_Int *mgr_cindices = hypre_CTAlloc(HYPRE_Int*, mgr_nlevels, HYPRE_MEMORY_HOST);
// Array of C-point indices
HYPRE_Int *lv1 = hypre_CTAlloc(HYPRE_Int, mgr_bsize, HYPRE_MEMORY_HOST);
// Pressure is C-point, reduce onto 2,2 block
lv1[0] = 1;
mgr_cindices[0] = lv1;
// The number of C-points for each level
HYPRE_Int *mgr_num_cindices = hypre_CTAlloc(HYPRE_Int, mgr_nlevels, HYPRE_MEMORY_HOST);
// Only 1 C-point, i.e. pressure
mgr_num_cindices[0] = 1;
// Array of global indices that indicate where each block starts on each processor
// The blocks are assumed to be contiguous
HYPRE_BigInt *mgr_idx_array = hypre_CTAlloc(HYPRE_BigInt, mgr_bsize, HYPRE_MEMORY_HOST);
mgr_idx_array[0] = block0_global_ibegin; // global index of where the first block starts, e.g. 0 on proc 0
mgr_idx_array[1] = block1_global_ibegin; // global index of where the second block starts, e.g. local_matrix_size/2
HYPRE_Solver mgr_precond;
HYPRE_MGRCreate(&mgr_precond);
// Set the block structure in MGR
HYPRE_MGRSetCpointsByContiguousBlock(mgr_precond, mgr_bsize, mgr_nlevels, mgr_idx_array, mgr_num_cindices, mgr_cindices);
// Apply AMG to the F-point, i.e. velocity; the coarse-grid/Schur complement is solved with AMG by default
HYPRE_MGRSetFRelaxMethod(mgr_precond, 99);
HYPRE_MGRSetNonCpointsToFpoints(mgr_precond, 1);
// Turn off global smoother, which does not support zero diagonal block
HYPRE_MGRSetMaxGlobalsmoothIters(mgr_precond, 0);
I tried it and it doesn't work. I had to change `HYPRE_Int *mgr_cindices` to `HYPRE_Int **mgr_cindices`. I set `mgr_idx_array[0] = 0` and `mgr_idx_array[1] = n`, where the velocity block is $n\times n$. I think that is correct.
I am getting this error:
hypre/src/parcsr_ls/par_amg_solve.c", line 321, error code = 256 Warning!!! Coarse grid solve diverges. Factor = 1.09e+00
This is the rest of the code I am using:
/* Create solver */
HYPRE_Solver solver, precond;
HYPRE_ParCSRPCGCreate(MPI_COMM_WORLD, &solver);
/* Set some parameters (See Reference Manual for more parameters) */
HYPRE_PCGSetMaxIter(solver, max_iter_); /* max iterations */
HYPRE_PCGSetTol(solver, conv_tol_); /* conv. tolerance */
HYPRE_PCGSetTwoNorm(solver, 1); /* use the two norm as the stopping criteria */
//HYPRE_PCGSetPrintLevel(solver, 2); /* print solve info */
HYPRE_PCGSetLogging(solver, 1); /* needed to get run info later */
HypreMGR_SetDefaultOptions(precond_num_, precond);
/* Set the PCG preconditioner */
HYPRE_PCGSetPrecond(solver, (HYPRE_PtrToSolverFcn)HYPRE_MGRSolve, (HYPRE_PtrToSolverFcn)HYPRE_MGRSetup, precond);
/* Now setup and solve! */
HYPRE_ParCSRPCGSetup(solver, parcsr_A, par_b, par_x);
HYPRE_ParCSRPCGSolve(solver, parcsr_A, par_b, par_x);
/* Run info - needed logging turned on */
HYPRE_PCGGetNumIterations(solver, &num_iterations);
HYPRE_PCGGetFinalRelativeResidualNorm(solver, &final_res_norm);
where `HypreMGR_SetDefaultOptions` is a copy of your code with the changes I mentioned above.
Am I doing something wrong?
thank you again, Teseo
Hi @teseoch, thanks for catching the mistake with `mgr_cindices`. The rest looks correct to me. The error you see is actually because the default coarse-grid solve does only 1 BoomerAMG V-cycle; it's not about the overall convergence of MGR.
The warning, however, is informative: it indicates that MGR isn't a good preconditioner for your problem. Also, I thought your problem isn't SPD, is it? You could try using MGR with GMRES, but other than that, I don't have any further suggestions.
Best, Quan
I tried and no luck.
I think that the matrix is symmetric indefinite (since it has a zero block). I have other solvers where it works (Pardiso, a direct solver, and AMGCL, another AMG library).
I fiddled with the source code of the AMG solve (since I don't know how to change the parameters from outside) and changed the log level to high and increased the number of iterations.
With only one V-cycle I get
Initial L2 norm of residual: 1.910241e+00
=============================================
Iters resid.norm conv.rate rel.res.norm
----- ------------ ---------- ------------
AMG SOLUTION INFO:
relative
residual factor residual
-------- ------ --------
Initial 9.999121e-01 1.000000e+00
Cycle 1 4.792939e-03 0.004793 4.793361e-03
When I change the number of iterations to 100, the "first one" converges:
Initial L2 norm of residual: 1.910241e+00
=============================================
Iters resid.norm conv.rate rel.res.norm
----- ------------ ---------- ------------
AMG SOLUTION INFO:
relative
residual factor residual
-------- ------ --------
Initial 9.999121e-01 1.000000e+00
Cycle 1 4.792939e-03 0.004793 4.793361e-03
Cycle 2 2.101021e-04 0.043836 2.101206e-04
Cycle 3 1.223114e-05 0.058215 1.223222e-05
Cycle 4 8.104301e-07 0.066260 8.105014e-07
Cycle 5 5.687824e-08 0.070183 5.688324e-08
Average Convergence Factor = 0.035563
Complexity: grid = 1.412121
operator = 1.916834
cycle = 11.500446
However, the second one gets stuck
AMG SOLUTION INFO:
relative
residual factor residual
-------- ------ --------
Initial 6.713496e-04 1.000000e+00
Cycle 1 5.524465e-03 8.228896 8.228896e+00
Cycle 2 6.770102e-03 1.225476 1.008432e+01
Cycle 3 7.673432e-03 1.133429 1.142986e+01
Cycle 4 8.320509e-03 1.084327 1.239371e+01
Cycle 5 8.766925e-03 1.053653 1.305866e+01
Cycle 6 9.062323e-03 1.033695 1.349867e+01
Cycle 7 9.246683e-03 1.020344 1.377328e+01
Cycle 8 9.350586e-03 1.011237 1.392804e+01
Cycle 9 9.396874e-03 1.004950 1.399699e+01
Cycle 10 9.402414e-03 1.000590 1.400524e+01
Cycle 11 9.379582e-03 0.997572 1.397123e+01
The rest of the log is similar, with a convergence factor of about 1.
any ideas?
Hi @teseoch, it's hard for me to say without more information. Are these statistics from the AMG solves within MGR? If yes, is it the F-relaxation or the coarse-grid/Schur-complement solve? Or are you using BoomerAMG to precondition the whole block matrix?
Hi, sorry for the late reply.
This is what I have for the preconditioner
void HypreMGR_SetDefaultOptions(const int precond_num, HYPRE_Solver &mgr_precond)
{
// 2x2 block system, assuming velocity index = 0, pressure index = 1
HYPRE_Int mgr_bsize = 2;
// Single-level reduction
HYPRE_Int mgr_nlevels = 1;
// Array to an array of indices as C-points
HYPRE_Int **mgr_cindices = hypre_CTAlloc(HYPRE_Int *, mgr_nlevels, HYPRE_MEMORY_HOST);
// Array of C-point indices
HYPRE_Int *lv1 = hypre_CTAlloc(HYPRE_Int, mgr_bsize, HYPRE_MEMORY_HOST);
// Pressure is C-point, reduce onto 2,2 block
lv1[0] = 1;
mgr_cindices[0] = lv1;
// The number of C-points for each level
HYPRE_Int *mgr_num_cindices = hypre_CTAlloc(HYPRE_Int, mgr_nlevels, HYPRE_MEMORY_HOST);
// Only 1 C-point, i.e. pressure
mgr_num_cindices[0] = 1;
// Array of global indices that indicate where each block starts on each processor
// The blocks are assumed to be contiguous
HYPRE_BigInt *mgr_idx_array = hypre_CTAlloc(HYPRE_BigInt, mgr_bsize, HYPRE_MEMORY_HOST);
mgr_idx_array[0] = 0; // global index of where the first block starts, e.g. 0 on proc 0
mgr_idx_array[1] = precond_num; // global index of where the second block starts, e.g. local_matrix_size/2
HYPRE_MGRCreate(&mgr_precond);
// Set the block structure in MGR
HYPRE_MGRSetCpointsByContiguousBlock(mgr_precond, mgr_bsize, mgr_nlevels, mgr_idx_array, mgr_num_cindices, mgr_cindices);
// Apply AMG to the F-point, i.e. velocity; the coarse-grid/Schur complement is solved with AMG by default
HYPRE_MGRSetFRelaxMethod(mgr_precond, 99);
HYPRE_MGRSetNonCpointsToFpoints(mgr_precond, 1);
// Turn off global smoother, which does not support zero diagonal block
HYPRE_MGRSetMaxGlobalsmoothIters(mgr_precond, 0);
HYPRE_MGRSetMaxIter(mgr_precond, 100);
}
and for the actual solve
/* Create solver */
HYPRE_Solver solver, precond;
HYPRE_ParCSRGMRESCreate(MPI_COMM_WORLD, &solver);
/* Set some parameters (See Reference Manual for more parameters) */
HYPRE_GMRESSetMaxIter(solver, max_iter_); /* max iterations */
HYPRE_GMRESSetTol(solver, conv_tol_); /* conv. tolerance */
// HYPRE_GMRESSetTwoNorm(solver, 1); /* use the two norm as the stopping criteria */
HYPRE_GMRESSetPrintLevel(solver, 20); /* print solve info */
HYPRE_GMRESSetLogging(solver, 1); /* needed to get run info later */
HypreMGR_SetDefaultOptions(precond_num_, precond);
/* Set the GMRES preconditioner */
HYPRE_GMRESSetPrecond(solver, (HYPRE_PtrToSolverFcn)HYPRE_MGRSolve, (HYPRE_PtrToSolverFcn)HYPRE_MGRSetup, precond);
/* Now setup and solve! */
HYPRE_ParCSRGMRESSetup(solver, parcsr_A, par_b, par_x);
HYPRE_ParCSRGMRESSolve(solver, parcsr_A, par_b, par_x);
/* Run info - needed logging turned on */
HYPRE_GMRESGetNumIterations(solver, &num_iterations);
HYPRE_GMRESGetFinalRelativeResidualNorm(solver, &final_res_norm);
// printf("\n");
// printf("Iterations = %lld\n", num_iterations);
// printf("Final Relative Residual Norm = %g\n", final_res_norm);
// printf("\n");
/* Destroy solver and preconditioner */
HYPRE_MGRDestroy(precond);
HYPRE_ParCSRGMRESDestroy(solver);
What I did was modify the hypre source to make it verbose and to increase the iterations (I am sure this could be done with a HYPRE_xxx function, but I couldn't figure out which one).
Regarding your question, the short answer is that I don't know :D. The long answer is that I think I am using MGR as the preconditioner and ParCSRGMRES as the solver. Regarding the printout, it is the first thing printed, so I guess it comes from inside the preconditioner.
thank you.
@teseoch Yes, I think you're using MGR as a preconditioner to GMRES, and the AMG output is from the AMG applied to the velocity block.
Could it be that this 1,1 velocity block is singular? If it is, then the behavior you're seeing makes sense. For the first iteration, the RHS corresponding to the 1,1 block is still in its column range, so BoomerAMG should work. However, for the second GMRES iteration, the RHS, having been updated in MGR, would no longer be in the span of the column space of the velocity block. Basically, the MGR strategy I proposed isn't a good one. Unfortunately, we don't have any tool to deal with a singular 1,1 block currently.
I'm sorry to lead you down this road only to find out that it doesn't work.
Hi,
I am trying to use Hypre to solve a Stokes problem with mixed elements.
My matrix is composed of 4 blocks: the 1,1 block is the velocity matrix, 1,2 is the velocity-pressure coupling, 2,1 is the divergence-free constraint, and 2,2 is zero.
This is my setup (copied from the example):
and I get this error:
I guess I am using the wrong settings since the system is not SPD; what should I change?