Closed — packquickly closed this issue 1 year ago
Is this a problem that you observe specifically with my GMRES implementation, or is it due to the algorithm (which is beyond my control)? That it doesn't error upon failing to converge is documented (check `?linsolve`); if you increase the verbosity level, it will print an informational message. There is also no need to wrap the matrix as `x -> matrix*x`; you can just pass the matrix itself.
I see it with every implementation of GMRES I have encountered. See also: https://github.com/JuliaLinearAlgebra/IterativeSolvers.jl/issues/338 and https://github.com/JuliaSmoothOptimizers/Krylov.jl/issues/729
I guess this tells you something. I am not an expert on the convergence theory of iterative linear solvers, but from practical experience I would say that, unfortunately, you need some insight into the structure of your problem; it is not the case that there is one algorithm that works for all use cases (this is still an active field of research, with novel methods still being developed). There are also BiCG, BiCGStab, … SciPy furthermore has an implementation of an algorithm known as "LGMRES", which I would at some point like to implement. It would be interesting to see how your "random" test set fares with that algorithm.
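For reference, that comparison can already be tried against SciPy's existing implementations. The sketch below pits restarted `gmres` against `lgmres` on one random dense system; the matrix size, restart length, and iteration budgets are my own assumptions for illustration, not values from this thread.

```python
import numpy as np
from scipy.sparse.linalg import gmres, lgmres

# One random dense test problem (sizes/parameters are illustrative choices).
rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Restarted GMRES with a truncated Krylov space (restart < n).
x_g, _ = gmres(A, b, restart=95, maxiter=30)

# LGMRES: augments each restart cycle with approximations to the error
# from previous cycles, which is meant to counteract restart stagnation.
x_l, _ = lgmres(A, b, inner_m=30, outer_k=3, maxiter=300)

# Judge both by the true relative residual, not the solver's return value.
resids = {
    "gmres": float(np.linalg.norm(b - A @ x_g) / np.linalg.norm(b)),
    "lgmres": float(np.linalg.norm(b - A @ x_l) / np.linalg.norm(b)),
}
print(resids)
```

Comparing the two residuals over many random draws would be one way to evaluate whether LGMRES helps on this kind of test set.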
Is there anything left to do here? If not, I will close this issue.
GMRES often fails to converge when restart is less than the problem dimension, but will return without indicating failure.
This failure (likely due to stagnation) seems to depend on the absolute difference between the problem dimension and the Krylov subspace dimension (restart); i.e., roughly 30% of the time GMRES fails silently on a problem of dimension 25 with restart = 24, and it still fails about 30% of the time on a problem of dimension 100 with restart = 99. For a difference of 5 between the two dimensions, the test below returns an incorrect value more than 90% of the time.
Reproducing code:
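(The original snippet used the Julia `linsolve` interface; as a stand-in, here is a hypothetical re-creation of the experiment using SciPy's `gmres`. The helper name, tolerance, trial count, and iteration budget are my assumptions, not the author's code.)

```python
import numpy as np
from scipy.sparse.linalg import gmres

def failure_rate(n, restart, trials=30, maxiter=20, seed=0):
    """Fraction of random dense systems on which restarted GMRES
    returns an answer whose true residual is still large."""
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(trials):
        A = rng.standard_normal((n, n))
        b = rng.standard_normal(n)
        # maxiter counts restart cycles of `restart` inner steps each.
        x, _ = gmres(A, b, restart=restart, maxiter=maxiter)
        # The reliable guard: check the residual yourself rather than
        # trusting that the solver signalled non-convergence.
        rel_resid = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
        if rel_resid > 1e-4:
            failures += 1
    return failures / trials

print(failure_rate(25, restart=24))  # truncated Krylov space: may stagnate
print(failure_rate(25, restart=25))  # full Krylov space: converges
```

With `restart = n`, a single cycle spans the whole space, so the solve succeeds; shrinking `restart` below `n` is what exposes the stagnation described above.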