Closed oscardssmith closed 2 months ago
This will need a test.
Agreed. It's a little tricky to test for since it requires a sufficiently nonlinear problem that actually runs into it. e.g. none of the algconvergence tests caught the issue.
Can we kick out an anonymized form of the model that's being tested with? ODEProblemExpr and then obfuscate variable names? Or modelingtoolkitize it and then ODEProblemExpr?
Theoretically, we probably could, but I would like to have a test for a situation where I actually believe that the solution is correct.
ahh that's good
should we merge this?
Rebase, tests should be passing
rebased.
Some tests need to be adjusted
This does suggest a bug since this should be strictly more accurate for non-adaptive methods...
I think I see the problem with the previous version. Let's see if this passes tests.
Did we add a test to ensure this works moving forward? Maybe we should at least add a benchmark that failed previously but works now?
It's very difficult to target a specific test at this. At best, if we can anonymize an integration test, that would help.
We do not. While it's easy (in hindsight) to see the issue here, coming up with an example ODE that exhibits the previously mentioned behavior is somewhat nontrivial. Specifically, we need an ODE where the Newton iteration transitions from converging with a high order (ideally 10 or higher) to one where, in the very next step, the error jumps substantially (but by less than the order of the Newton solver). Furthermore, the accumulated error from this single step needs to throw off the solution significantly.
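To make the failure mode concrete, here is a minimal, hypothetical sketch (in Python, not the actual solver code) of a rate-based termination check. `theta` estimates the contraction rate from successive increments, and the solve is declared converged once the projected remaining error `theta/(1-theta)*|dz|` drops below the tolerance. Seeding `theta` from a previous, fast-converging step (the `prev_theta` idea) can terminate a slow solve after a single iteration, far from the true solution:

```python
def iterate_with_rate_seed(g, z0, tol, prev_theta=None, max_iters=200):
    """Toy fixed-point iteration with rate-based early termination.

    theta_k = |dz_k| / |dz_{k-1}| estimates the convergence rate; the
    solve is accepted once theta/(1-theta)*|dz| < tol. prev_theta is a
    hypothetical stand-in for reusing the previous time step's rate.
    """
    z = z0
    dz_prev = None
    theta = prev_theta  # seeded rate from the "previous step"
    for k in range(max_iters):
        z_new = g(z)
        dz = abs(z_new - z)
        z = z_new
        if dz_prev is not None:
            theta = dz / dz_prev  # measured rate for this solve
        if theta is not None and theta < 1:
            # projected remaining error under geometric convergence
            if theta / (1 - theta) * dz < tol:
                return z, k + 1  # declared converged
        dz_prev = dz
    return z, max_iters


# Slowly contracting map with fixed point 0.5 (rate 0.9).
g = lambda z: 0.9 * z + 0.05

# Seeding a fast rate (0.01) from a previous step exits after one
# iteration, accepting z = 0.05 even though the true answer is 0.5.
z_bad, n_bad = iterate_with_rate_seed(g, 0.0, 1e-3, prev_theta=0.01)

# Without the seed, the measured rate (0.9) forces iteration until the
# projected error is genuinely below tolerance.
z_ok, n_ok = iterate_with_rate_seed(g, 0.0, 1e-3, prev_theta=None)
```

This is the shape of the bug: a step where convergence was fast makes the next, slower step look converged after one iteration, and the single bad step poisons the rest of the integration.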
Found by @bradcarman. The early exit here relies on implicit feedback from the solver to prevent it from terminating the nonlinear solve as successful too aggressively. As such, we disable prev_theta for non-adaptive algorithms.
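The fix described above amounts to gating the rate reuse on adaptivity. A minimal sketch, assuming a hypothetical `initial_theta` helper (the name and signature are illustrative, not the actual implementation): an adaptive step controller will reject a step whose error grew after a prematurely terminated solve, so reusing the previous rate is safe there; a fixed-step method has no such safety net, so the rate estimate must be rebuilt from scratch each step.

```python
def initial_theta(adaptive, prev_theta):
    # Hypothetical gating: only seed the nonlinear solver's convergence
    # rate from the previous step when the integrator is adaptive.
    # Adaptive error control provides the implicit feedback that
    # catches an overly aggressive early exit; non-adaptive methods
    # silently accept the bad step, so they get no seed.
    return prev_theta if adaptive else None
```

With this gate, non-adaptive runs pay a few extra Newton iterations per step but can no longer accept an under-converged solve on the strength of a stale rate estimate.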