Open ischoegl opened 5 years ago
It's now apparent this could use some work. As referenced by @12Chao in https://github.com/Cantera/cantera/pull/1305#issuecomment-1146313290, the answer may be to revive the 0D solver in #1021. @ischoegl, could you elaborate a little on how one could use #629 to address this?
The 0D solver in #1021 and the solver derivatives obtained from #629 are orthogonal approaches. What I had in mind here was to use a convergence criterion based on vanishing derivatives (above ignition) rather than the current 'feature scaling' approach (for which I was never able to locate sufficiently clear background documentation).
PS: I don't think that this would help with the integration failures in https://github.com/Cantera/cantera/pull/1305#issuecomment-1146313290 though ...
Could the `solve_steady` functionality mentioned in https://github.com/Cantera/cantera/pull/1021 be a substitute for `advance_to_steady_state`?
In general, I believe so. But it's a very different approach.
@12Chao As @ischoegl said, #1021 and `advance_to_steady_state` are very different approaches to the same goal. The proposal in #1021 uses the existing steady-state solver for 1-D problems, which is a hybrid transient solver. The idea is that, in steady state, the problem reduces to a set of equations that can be solved by Newton iteration. However, Newton iteration can be unstable, in the sense that it is very sensitive to the initial guess and can diverge if that guess isn't "good" enough. We know that time stepping will always bring the solution closer to the steady state, so #1021 tries a direct Newton solution of the equations and, if it fails, takes a few time steps, then tries the Newton solution again. This repeats until the Newton solver converges. As it happens, the accuracy of the time steps doesn't have a strong effect on the Newton solver's final solution, so this solver doesn't use CVODES, because it doesn't need all the accuracy CVODES is capable of providing.
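The alternating Newton/time-stepping strategy described above can be sketched as follows. This is a minimal illustration, not Cantera's implementation: the function names, the scalar test equation, and the explicit Euler time stepping are all assumptions made for the example.

```python
import math

def newton_solve(f, fp, x, tol=1e-10, max_iter=8):
    """Plain Newton iteration on f(x) = 0; raises if it stalls or diverges."""
    for _ in range(max_iter):
        fx, fpx = f(x), fp(x)
        if fpx == 0.0 or not math.isfinite(fx):
            raise RuntimeError("Newton step ill-defined")
        step = fx / fpx
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton did not converge")

def hybrid_steady_state(f, fp, x0, dt=0.1, n_timesteps=10, max_rounds=20):
    """Hypothetical sketch of the hybrid strategy: try Newton on the
    steady-state equation f(x) = 0; on failure, take a few low-accuracy
    explicit Euler steps of dx/dt = f(x) to improve the guess, then retry.
    The time steps only need to move the iterate into Newton's basin of
    convergence, which is why high integration accuracy is unnecessary."""
    x = x0
    for _ in range(max_rounds):
        try:
            return newton_solve(f, fp, x)
        except RuntimeError:
            for _ in range(n_timesteps):
                x += dt * f(x)  # crude explicit Euler step
    raise RuntimeError("hybrid solver did not converge")

# dx/dt = 1 - x**2 has a stable steady state at x = 1. Starting from
# x = 0, the first Newton attempt fails (zero derivative), time stepping
# moves the guess toward 1, and the retry converges.
f = lambda x: 1.0 - x * x
fp = lambda x: -2.0 * x
result = hybrid_steady_state(f, fp, x0=0.0)
```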
On the other hand, `advance_to_steady_state` relies on CVODES, repeatedly calling the `advance` method until some stopping criteria are satisfied. This works because, as I said, time stepping always approaches the steady state (if there is one), but it is potentially much slower than the Newton solution: time steps take a long time to solve, and you need to take a lot of them to maintain accuracy.
Hope that helps!
Thanks for the explanation, that's really helpful!
Cantera version
all
Operating System
all
Python/MATLAB version
all
Expected Behavior
The choice of the Feature Scaling approach in `advance_to_steady_state` to determine convergence should use an actual time derivative.

Actual Behavior
In the current implementation, convergence is checked via an absolute difference between integrator states. The step size, however, is chosen internally by CVODE and is thus neither constant nor well defined. As a consequence, convergence is poorly defined in the current code: it depends on the CVODE integrator state as much as on `residual_threshold`.

PR #629 introduces an interface to time derivatives calculated by CVODE, which can be used to address this issue.
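To illustrate the distinction, here is a sketch contrasting the two convergence tests. Both functions are hypothetical, and the derivative-based variant assumes access to the dx/dt values that PR #629 exposes:

```python
def converged_by_difference(x_new, x_old, threshold):
    """Current criterion (sketch): absolute difference between two
    successive integrator states. Since CVODE picks its step size
    internally, this difference has no well-defined time scale."""
    return max(abs(n - o) for n, o in zip(x_new, x_old)) < threshold

def converged_by_derivative(xdot, threshold):
    """Proposed criterion (sketch): test the time derivative dx/dt
    reported by the integrator itself, which is independent of the
    step size CVODE happened to take."""
    return max(abs(d) for d in xdot) < threshold
```

The same state pair can pass or fail the difference test depending on how far apart the checkpoints are, whereas the derivative test measures the actual rate of change.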