SciML / OrdinaryDiffEq.jl

High performance ordinary differential equation (ODE) and differential-algebraic equation (DAE) solvers, including neural ordinary differential equations (neural ODEs) and scientific machine learning (SciML)
https://diffeq.sciml.ai/latest/

Don't use prev_theta for non-adaptive solves #2269

Closed oscardssmith closed 2 months ago

oscardssmith commented 3 months ago

Found by @bradcarman. The early exit here relies on implicit feedback from the solver's error control to prevent it from overly aggressively terminating the nonlinear solve as successful. As such, we disable prev_theta for non-adaptive algorithms.
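To illustrate the mechanism being changed, here is a minimal sketch (in Python, with illustrative names — this is not OrdinaryDiffEq.jl's actual code) of a Newton iteration with a convergence-rate (theta) early exit. Seeding theta from the previous time step lets the very first iteration terminate early; with an adaptive solver a bad early exit gets caught by the step-error estimate, but a non-adaptive solve has no such safety net, hence disabling the seed there:

```python
def newton_solve(f, df, z0, tol, max_iters=10, prev_theta=None, adaptive=True):
    """Scalar Newton iteration with a theta-based early-exit criterion.

    `prev_theta` seeds the contraction-rate estimate from the previous
    time step. The sketch mirrors the PR's fix: the seed is only used
    when the surrounding solver is adaptive.
    """
    theta = prev_theta if (adaptive and prev_theta is not None) else None
    z = z0
    prev_dz = None
    for _ in range(max_iters):
        dz = -f(z) / df(z)
        z = z + dz
        if prev_dz is not None:
            theta = abs(dz) / abs(prev_dz)  # observed contraction rate
        if theta is not None and theta < 1:
            # For a contraction with rate theta, the remaining error is
            # roughly theta/(1-theta) * |dz|; exit when that is small.
            if theta / (1 - theta) * abs(dz) < tol:
                return z, theta
        prev_dz = dz
    return z, theta

# Example: solve x^2 - 2 = 0 from x0 = 1.5.
root, rate = newton_solve(lambda x: x * x - 2, lambda x: 2 * x, 1.5, 1e-10)
```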

ChrisRackauckas commented 3 months ago

This will need a test.

oscardssmith commented 3 months ago

Agreed. It's a little tricky to test for since it requires a sufficiently nonlinear problem that actually runs into it. e.g. none of the algconvergence tests caught the issue.

ChrisRackauckas commented 3 months ago

Can we kick out an anonymized form of the model that's being tested with? ODEProblemExpr and then obfuscate variable names? Or modelingtoolkitize it then ODEProblemExpr?

oscardssmith commented 3 months ago

Theoretically, we probably could, but I would like to have a test for a situation where I actually believe that the solution is correct.

ChrisRackauckas commented 3 months ago

ahh that's good

oscardssmith commented 2 months ago

should we merge this?

ChrisRackauckas commented 2 months ago

Rebase, tests should be passing

oscardssmith commented 2 months ago

rebased.

ChrisRackauckas commented 2 months ago

Some tests need to be adjusted

oscardssmith commented 2 months ago

This does suggest a bug since this should be strictly more accurate for non-adaptive methods...

oscardssmith commented 2 months ago

I think I see the problem with the previous version. Let's see if this passes tests.

bradcarman commented 2 months ago

Did we add a test to ensure this works moving forward? Maybe we should at least add a benchmark that failed previously but works now?

ChrisRackauckas commented 2 months ago

this is very difficult to target a specific test towards. At best if we can anonymize an integration test that would help.

oscardssmith commented 2 months ago

We do not. While it's easy (in hindsight) to see the issue here, coming up with an example ODE that exhibits the behavior, as previously mentioned, is somewhat nontrivial. Specifically, we need an ODE where the Newton iteration transitions from converging with a high order (ideally 10 or higher) but then, on the very next step, has the error jump substantially (though by less than the order of the Newton solver). Furthermore, the accumulated error from this single step needs to throw off the solution significantly.
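The failure mode described above can be shown with illustrative numbers (these are made up for this sketch, not taken from the PR): if the previous step converged very fast, its tiny observed theta, used as a seed, makes the remaining-error estimate pass on the first iteration of the next step, even when that step actually contracts much more slowly:

```python
def early_exit(theta, dz_norm, tol):
    # Standard remaining-error estimate for a contraction with rate theta:
    # exit when theta/(1-theta) * |dz| is below the tolerance.
    return theta / (1 - theta) * dz_norm < tol

prev_theta = 1e-6  # previous step converged at a very fast observed rate
first_dz = 1e-4    # first Newton correction on the current (harder) step
tol = 1e-8

# Seeded with the stale rate, the exit test fires immediately...
fires_with_seed = early_exit(prev_theta, first_dz, tol)   # True
# ...but with this step's actual, slower rate it would not:
fires_with_true_rate = early_exit(0.5, first_dz, tol)     # False
```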