MadNLP / MadNLP.jl

A solver for nonlinear programming
MIT License

Restoration failed when using MadNLP for ODE parameter estimation #264


zornsllama commented 1 year ago

I'm not sure if this is the best place to ask this question, so please let me know if there's a more appropriate forum!

I have been performing ODE parameter estimation (fitting to experimentally observed data) using NLPModelsIpopt, and recently tested MadNLP on the same problems. For moderately sized test problems (90k variables, 800k Hessian nonzeros) with noiseless synthetic data, MadNLP worked well and reduced compute time relative to Ipopt (using the same linear solver, MA97), so I was hoping to transfer my workflow to MadNLP entirely.

However, on larger problems (180k variables, 1.6m Hessian nonzeros), or on problems incorporating noisy data (both synthetic data with artificial noise and real experimental data), MadNLP either hits the maximum iteration count, or appears to be converging well for several hundred iterations before suddenly entering restoration and then failing. In contrast, the same problems (and much larger ones) solve to optimality in Ipopt without issue, using the same underlying NLPModel.

I'm curious what's going on, and I am also wondering if there are optimizer options I could modify that might help the issue. Would anyone here be able to point me in a direction to debug this problem?
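For reference, a minimal sketch of how the two solvers can be invoked on the same NLPModel (the model name `vap` and the specific option values here are illustrative; `Ma97Solver` is provided by the MadNLPHSL extension package, and the exact keyword options should be checked against the MadNLP documentation):

```julia
# Sketch only: assumes NLPModelsIpopt, MadNLP, and MadNLPHSL are installed,
# and that `vap` is the AbstractNLPModel built elsewhere.
using NLPModelsIpopt, MadNLP, MadNLPHSL

# Ipopt, selecting the HSL MA97 linear solver via an Ipopt option
stats_ipopt = ipopt(vap; linear_solver = "ma97")

# MadNLP with the same linear solver; `max_iter` and `tol` are
# examples of knobs one might loosen or tighten when debugging
stats_madnlp = madnlp(vap;
    linear_solver = Ma97Solver,
    max_iter = 3000,
    tol = 1e-8,
)
```

Comparing the iteration logs from the two calls (primal/dual infeasibility, barrier parameter, step sizes) around the point where MadNLP enters restoration is usually the first step in diagnosing this kind of divergence.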

sshin23 commented 1 year ago

Hey, @zornsllama, thanks for reporting this.

This type of convergence issue is difficult to debug, but if you could provide a simple example that reproduces it, that would help us improve the convergence behavior of MadNLP.

zornsllama commented 1 year ago

Hi @sshin23. I have attached a zip file containing an example problem. The script examples/hh_sde_example.jl generates an NLPModel called vap, which can then be passed to either NLPModelsIpopt or MadNLP. Example logs from both are also contained in the examples folder; you can see that Ipopt solves rather quickly, while MadNLP goes into restoration and unfortunately fails. The example problem performs parameter estimation for a 4D Hodgkin-Huxley equation, discretized via Simpson-Hermite transcription. It is similar to the approach described in your paper here, but incorporates the synchronization-based control given in this paper to regularize convergence. The code in src computes the necessary derivatives using SymPy (this is probably not ideal). Please let me know if you have any questions -- thanks!

example.zip