Closed tepperly closed 3 months ago
Ipopt usually calls `TNLP::get_starting_point()` twice. The first time is when computing the scaling parameters; in that case it does not need duals and calls with `init_z` and `init_lambda` set to `false`. (So Ipopt's automatic scaling depends on the starting point, which may be relevant for you.) The second time is for initializing the actual starting point. There you should see `init_z` and `init_lambda` being `true`.
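A minimal, self-contained sketch of how an implementation typically honors these flags — the `SavedIterate` type and the free-function form are illustrative stand-ins, not the real Ipopt signatures:

```cpp
#include <cassert>
#include <vector>

// Hypothetical saved iterate; in a real TNLP these would be members
// loaded from a checkpoint or set by the application.
struct SavedIterate {
    std::vector<double> x, z_L, z_U, lambda;
};

// Mimics the contract of TNLP::get_starting_point(): fill x when
// init_x is true, and fill the duals only when Ipopt asks for them
// via init_z / init_lambda (both are false on the scaling call).
bool get_starting_point(const SavedIterate& saved,
                        bool init_x, std::vector<double>& x,
                        bool init_z, std::vector<double>& z_L,
                                     std::vector<double>& z_U,
                        bool init_lambda, std::vector<double>& lambda) {
    if (init_x) {
        x = saved.x;
    }
    if (init_z) {
        z_L = saved.z_L;
        z_U = saved.z_U;
    }
    if (init_lambda) {
        lambda = saved.lambda;
    }
    return true;
}
```

With `warm_start_init_point` set to `yes`, the second call should arrive with all three flags true, so all four saved arrays get used.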
`ip_data.curr()` should give a representation of the current iterate as an `IteratesVector`. Bonmin uses this in its attempts to warm-start Ipopt (https://github.com/coin-or/Bonmin/blob/master/src/Interfaces/Ipopt/BonIpoptInteriorWarmStarter.cpp#L78C29-L78C44).
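On the last question: `intermediate_callback()` receives `ip_data` and `ip_cq` pointers, and Ipopt's documentation describes a pattern for mapping `ip_data->curr()` back into the TNLP's ordering via a `TNLPAdapter`. A non-compilable fragment of that pattern, hedged — it requires the Ipopt headers, the exact method names (e.g. `ResortBnds` vs. `ResortBounds`) vary across Ipopt versions, and the `OrigIpoptNLP` cast can fail when NLP scaling is active:

```cpp
// Inside TNLP::intermediate_callback(..., const IpoptData* ip_data,
//                                    IpoptCalculatedQuantities* ip_cq):
// ip_data->curr() is the iterate in Ipopt's internal ordering; a
// TNLPAdapter can resort it into the TNLP's x / z / lambda layout.
Ipopt::TNLPAdapter* tnlp_adapter = nullptr;
if (ip_cq != nullptr) {
    Ipopt::OrigIpoptNLP* orignlp =
        dynamic_cast<Ipopt::OrigIpoptNLP*>(GetRawPtr(ip_cq->GetIpoptNLP()));
    if (orignlp != nullptr) {
        tnlp_adapter =
            dynamic_cast<Ipopt::TNLPAdapter*>(GetRawPtr(orignlp->nlp()));
    }
}
if (tnlp_adapter != nullptr) {
    // x, z_L, z_U, lambda are caller-owned arrays of the TNLP sizes.
    tnlp_adapter->ResortX(*ip_data->curr()->x(), x);
    tnlp_adapter->ResortBnds(*ip_data->curr()->z_L(), z_L,
                             *ip_data->curr()->z_U(), z_U);
    tnlp_adapter->ResortG(*ip_data->curr()->y_c(),
                          *ip_data->curr()->y_d(), lambda);
}
```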
Thanks! I believe this resolves the issue.
I am using Ipopt to solve large NLPs where the code that evaluates the objective, gradients, and their derivatives is executed on a parallel supercomputer. I am implementing a checkpoint/restart (here, checkpoint/warm-start) feature so Ipopt can be restarted if the algorithm does not converge before our supercomputing allocation runs out or some other system error occurs. Supercomputer jobs have maximum time limits, and a job is killed when its deadline is reached.
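For concreteness, one minimal way to serialize such a checkpoint — an illustrative length-prefixed binary layout using only standard C++, not tied to any Ipopt API; all names here are hypothetical:

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Illustrative checkpoint: four length-prefixed double arrays
// (x, z_L, z_U, lambda) written as raw binary.
struct Checkpoint {
    std::vector<double> x, z_L, z_U, lambda;
};

static void write_vec(std::FILE* f, const std::vector<double>& v) {
    std::size_t n = v.size();
    std::fwrite(&n, sizeof n, 1, f);
    std::fwrite(v.data(), sizeof(double), n, f);
}

static std::vector<double> read_vec(std::FILE* f) {
    std::size_t n = 0;
    if (std::fread(&n, sizeof n, 1, f) != 1) return {};
    std::vector<double> v(n);
    if (n != 0 && std::fread(v.data(), sizeof(double), n, f) != n) v.clear();
    return v;
}

bool save_checkpoint(const char* path, const Checkpoint& c) {
    std::FILE* f = std::fopen(path, "wb");
    if (f == nullptr) return false;
    write_vec(f, c.x);
    write_vec(f, c.z_L);
    write_vec(f, c.z_U);
    write_vec(f, c.lambda);
    return std::fclose(f) == 0;
}

bool load_checkpoint(const char* path, Checkpoint& c) {
    std::FILE* f = std::fopen(path, "rb");
    if (f == nullptr) return false;
    c.x = read_vec(f);
    c.z_L = read_vec(f);
    c.z_U = read_vec(f);
    c.lambda = read_vec(f);
    std::fclose(f);
    return true;
}
```

A real implementation would add versioning and write-then-rename atomicity so a job killed mid-write cannot corrupt the last good checkpoint.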
My `TNLP::intermediate_callback()` method saves x, z_L, z_U, and lambda to disk. When I want to warm start, I load these quantities from disk so that they are available when `TNLP::get_starting_point()` is called. The `warm_start_init_point` parameter is set to `yes`. I was somewhat surprised to see that `TNLP::get_starting_point()` is called with the `init_z` and `init_lambda` input parameters set to `false`. The warm start only uses the saved x values.

Are there settings to enable Ipopt to request and use the multiplier information in the `get_starting_point()` call? Or is `TNLP::get_warm_start_iterate()` what I should look into? If so, is there a canonical way for an implementer of `TNLP` to get the `IteratesVector` during a call to `TNLP::intermediate_callback()`?