coin-or / Ipopt

COIN-OR Interior Point Optimizer IPOPT
https://coin-or.github.io/Ipopt

objective function increasing instead of decreasing with each iteration #290

Closed svigerske closed 3 years ago

svigerske commented 5 years ago

Issue created by migration from Trac.

Original creator: kamilova

Original creation time: 2017-09-14 12:38:37

Assignee: ipopt-team

Version: 3.12

I am using Ipopt for my NLP, with the BFGS Hessian approximation option activated, since I have no second-order information. I based my implementation on the example provided with the Ipopt installation. I am using Fortran 90 and MA27 as my linear solver. Furthermore, I have used gradient-based scaling for the problem.
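For reference, the setup described above corresponds to options along these lines in an `ipopt.opt` file (a sketch: the option names are standard Ipopt options, but pairing them like this for this particular run is an assumption):

```
# ipopt.opt -- sketch of the configuration described in this issue
hessian_approximation limited-memory   # quasi-Newton approximation, no second derivatives needed
linear_solver ma27                     # HSL MA27
nlp_scaling_method gradient-based      # gradient-based scaling (the default)
```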

Example of output is in the attached image.

Please let me know what other information is necessary. I am very pressed for time to get this optimisation running; it seems like a simple thing, but my coding skills have betrayed me.

svigerske commented 5 years ago

screenshot by kamilova created at 2017-09-14 12:39:03

example of output for Ipopt

svigerske commented 5 years ago

Comment by @svigerske created at 2017-09-14 13:29:28

It seems that Ipopt struggles to make improvements in primal and dual feasibility. It is OK if the objective value is not decreasing at that time.

You might want to enable the derivative checker to check whether your gradient implementation is correct. If so, then maybe try finding a better starting point.
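The derivative checker mentioned here can be switched on through the standard Ipopt options (a sketch; the tolerance shown is just the documented default, repeated for illustration):

```
derivative_test first-order      # compare user-supplied gradients against finite differences
derivative_test_print_all yes    # report every component, not only the failing ones
derivative_test_tol 1e-4         # relative errors above this threshold are flagged
```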

svigerske commented 5 years ago

Comment by kamilova created at 2017-09-14 13:34:47

I have the derivative checker activated and I get errors in most of the components, but of order 1e-2. This problem was already solved with a different optimisation method, which managed to reach a suboptimal point despite the errors in the gradient, which is why I still thought Ipopt should reach some point.
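Errors of order 1e-2 in the derivative checker usually indicate a genuine bug in the gradient code rather than rounding noise. Independent of Ipopt, a gradient can be sanity-checked against central differences; a minimal sketch in Python (the objective and gradients below are made up purely for illustration):

```python
def check_gradient(f, grad, x, h=1e-6):
    """Return the largest absolute discrepancy between an analytic
    gradient and a central finite-difference approximation at x."""
    g = grad(x)
    worst = 0.0
    for i in range(len(x)):
        xp = list(x)
        xm = list(x)
        xp[i] += h
        xm[i] -= h
        fd = (f(xp) - f(xm)) / (2.0 * h)   # central difference in coordinate i
        worst = max(worst, abs(fd - g[i]))
    return worst

# Hypothetical objective f(x) = x0^2 + 3*x1 with a correct and a buggy gradient
f = lambda x: x[0] ** 2 + 3.0 * x[1]
good_grad = lambda x: [2.0 * x[0], 3.0]
bad_grad = lambda x: [2.0 * x[0], 3.1]   # deliberately off by 0.1 in x1

print(check_gradient(f, good_grad, [1.0, 2.0]))  # near zero (rounding only)
print(check_gradient(f, bad_grad, [1.0, 2.0]))   # roughly 0.1, exposing the bug
```

A discrepancy that stays the same size as `h` shrinks points to a wrong analytic gradient; a discrepancy that shrinks with `h` is just truncation error.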

With the previous optimisation, I get values of up to -9.1e-1, whereas with Ipopt I start at about -8.8e-1 and end at -6.7e-1, which is why I wondered whether it is maximising instead of minimising.

Is there any particular reason it would have this error in Ipopt, but achieve a (much) better point with SQP optimisation (which is not the best method for this NLP but works "sometimes")?

Replying to [comment:1 stefan]:

It seems that Ipopt struggles to make improvements in primal and dual feasibility. It is OK if the objective value is not decreasing at that time.

You might want to enable the derivative checker to check whether your gradient implementation is correct. If so, then maybe try finding a better starting point.

svigerske commented 3 years ago

I cannot say what happens. I don't think that Ipopt is accidentally maximizing.

Maybe a more detailed log (higher print_level) or a way to reproduce this would help. But since this is now 3.5 years old, it is probably no longer of interest.
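The more detailed log referred to here can be requested through the standard output options in `ipopt.opt` (a sketch; the values chosen are illustrative):

```
print_level 7          # default is 5; goes up to 12 for maximum detail
output_file ipopt.log  # also write the log to a file
```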