adler-j opened 7 years ago
I was having a look at `bicg` and `bicgstab`. They seem to be defined only for square matrices, i.e. where the domain and range of the operator are the same. In many cases this is not what we will have. What one can do then is to consider the normal equations, but I do not know whether that differs much from `cg`, since the normal-equations operator is self-adjoint (in fact positive semi-definite). Are we still interested in these methods?
Edit: "For symmetric positive definite systems, the method delivers the same results as the conjugate gradient method, but at twice the cost per iteration." http://mathworld.wolfram.com/BiconjugateGradientMethod.html
For GMRES it is the same: the operator needs to be square since we are going to form the Krylov space `{r, Ar, A^2 r, ...}`. However, in this case I do not know what happens if one considers the normal equations. Does anyone know?
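
On the normal-equations question: `A^T A` is square and self-adjoint, so GMRES can be applied to it, and it then works on the same Krylov space as `cg` on the normal equations (minimizing the residual norm rather than the energy norm of the error), so one would not expect it to buy much over `cg` there. A quick experiment, again with SciPy and a made-up `A`:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# GMRES cannot take the tall A directly, but it can take the square A^T A.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 30))
b = rng.standard_normal(80)

AtA = LinearOperator((30, 30), matvec=lambda x: A.T @ (A @ x))
x_gmres, info = gmres(AtA, A.T @ b)

# Compare against the least-squares solution computed directly.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(info, np.linalg.norm(x_gmres - x_lstsq))  # info == 0, small difference
```

This is essentially the CGNR idea; for least-squares problems, LSQR is the mathematically equivalent but numerically preferred formulation.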
There are a few linear solvers that are staples in inverse problems and that we should, if nothing else, have for reference. These include:
- [ ] #885: The generalized minimal residual (GMRES) method
- [ ] #886: The biconjugate gradient (BiCG) method
- [ ] #887: The biconjugate gradient stabilized (BiCGSTAB) method
Implementing them should merely be a question of following a textbook.
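
To give an idea of the scale of the task, here is an unpreconditioned textbook BiCGSTAB (van der Vorst's scheme) in plain NumPy. This is only a sketch: an actual ODL implementation would work with `Operator`/`LinearSpace` objects and need a proper stopping rule, and the helper name `bicgstab_sketch` and the test matrix below are made up.

```python
import numpy as np

def bicgstab_sketch(A, b, x0=None, tol=1e-10, maxiter=1000):
    """Unpreconditioned textbook BiCGSTAB for a square system A x = b.

    ``A`` is any callable implementing the (matrix-)vector product.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A(x)
    r0_hat = r.copy()                      # fixed "shadow" residual
    rho_prev = alpha = omega = 1.0
    v = p = np.zeros_like(b)
    for _ in range(maxiter):
        rho = np.dot(r0_hat, r)
        beta = (rho / rho_prev) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A(p)
        alpha = rho / np.dot(r0_hat, v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:        # early exit on a small residual
            return x + alpha * p
        t = A(s)
        omega = np.dot(t, s) / np.dot(t, t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
        rho_prev = rho
    return x

# Quick check on a random, nonsymmetric but well-conditioned square matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50)) + 50 * np.eye(50)
b = rng.standard_normal(50)
x = bicgstab_sketch(lambda v: M @ v, b)
print(np.linalg.norm(M @ x - b))  # should be tiny
```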