Currently the iterative solvers try to reach the prescribed accuracy. If the
residual stagnates for a (very) large number of iterations, the solver stops
and produces an error.
The idea is to adopt a more positive viewpoint:
1) If stagnation is detected, stop the iterations but continue with the
calculation of scattering quantities (while producing a warning); a sketch of
such a check is given after this list. This especially makes sense when
stagnation happens at a relative residual of about 10^-4. A bit more thought is
required for cases like orientation averaging (e.g., because some orientations
may then be solved less accurately than others).
2) The following idea was inspired by the book: A. Doicu, T. Trautmann, and F.
Schreier, Numerical Regularization for Atmospheric Inverse Problems, Springer,
Heidelberg (2010).
Stopping the iterative solver early can be considered a regularization
procedure. So for a given DDA problem there exists an optimal eps,
corresponding to the best accuracy of the final solution. For standard cases
this optimum is much smaller (close to machine precision) than the default
stopping criterion and is thus irrelevant. However, for cases with very slow
convergence it may well be the opposite.
So the idea is to modify the stopping criterion through a more detailed
analysis of the preceding convergence. The convergence rate can be used to
estimate the condition number (for CG-type methods the asymptotic residual
reduction per iteration is roughly (sqrt(kappa)-1)/(sqrt(kappa)+1), which can
be inverted for kappa), which in turn can be used to estimate when it is time
to stop the iterations. Alternatively, the iterations can be stopped when the
convergence rate slows down significantly; both estimates are sketched after
this list.
In any case, this requires a lot of preliminary mathematical analysis.
Especially problematic is devising criteria for methods without guaranteed
convergence, like QMR: it is hard to discriminate between a natural slowdown
(due to a large condition number) and quasi-random (near-)breakdowns caused by
an almost-zero denominator.
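A minimal sketch of the stagnation check from point 1), in C. All names here
(check_stagnation, MAX_STAGNATION, IMPROVE_FACTOR) are hypothetical, not part
of the existing code, and the thresholds are placeholders to be tuned.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical thresholds; to be tuned against real convergence data */
    #define MAX_STAGNATION 1000  /* iterations allowed without significant progress */
    #define IMPROVE_FACTOR 0.99  /* residual must shrink at least this much to count */

    typedef struct {
        double best_resid; /* best relative residual so far; initialize to 1.0 */
        int stagnated;     /* iterations since the last significant improvement */
    } StagState;

    /* Returns true if the solver should stop due to stagnation; the caller
       then proceeds to the scattering quantities (with a warning) instead of
       aborting with an error. */
    static bool check_stagnation(StagState *s, double rel_resid)
    {
        if (rel_resid < IMPROVE_FACTOR * s->best_resid) {
            s->best_resid = rel_resid;
            s->stagnated = 0;
        } else if (++s->stagnated >= MAX_STAGNATION) {
            fprintf(stderr, "WARNING: iterative solver stagnated at relative "
                    "residual %.2e; continuing with scattering quantities\n",
                    s->best_resid);
            return true;
        }
        return false;
    }

The caller would invoke check_stagnation once per iteration with the current
relative residual, alongside the usual eps-based stopping test.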
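A minimal sketch of the convergence-rate analysis from point 2), assuming a
CG-like bound on the residual reduction (which, as noted above, need not hold
for QMR-like methods). Again, all names and thresholds are hypothetical.

    #include <math.h>
    #include <stdbool.h>

    /* For CG-type methods the asymptotic residual reduction per iteration is
       roughly r = (sqrt(k)-1)/(sqrt(k)+1), so the condition number k can be
       recovered from the observed rate r: sqrt(k) = (1+r)/(1-r). */
    static double estimate_condition(double rate)
    {
        double s = (1.0 + rate) / (1.0 - rate);
        return s * s;
    }

    /* Smoothed convergence rate over the last W iterations:
       rate = (resid[n]/resid[n-W])^(1/W). Stop when the rate approaches 1,
       i.e. when further iterations barely reduce the residual. */
    static bool should_stop(const double *resid, int n, int W, double rate_limit)
    {
        if (n < W || resid[n - W] <= 0.0) return false;
        double rate = pow(resid[n] / resid[n - W], 1.0 / W);
        return rate > rate_limit; /* e.g. rate_limit = 0.999 */
    }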
Original issue reported on code.google.com by yurkin on 5 Mar 2013 at 10:06